Arguing A.I.

Are intelligent machines transforming life as we know it? Or is A.I. yet another overhyped, self-serving fantasy by deluded scientists and technocrats talking mostly to one another, foisting their ill-conceived, poorly-engineered creations on an unsuspecting public? The discussion has rarely been better framed than in software-culture writer Sam Williams's short, readable and smartly-organized new paperback book "Arguing A.I.: The Battle for Twenty-first Century Science," published by atRandom.com, the e-book division of Random House.
Arguing A.I.
author: Sam Williams
pages: 94
publisher: Random House
rating: 8
reviewer: Jon Katz
ISBN: 0-8129-9180-X (pbk)
summary: perspectives on the A.I. debate

In some ways, the author argues, the debate over A.I. is undergoing a profound revolution. What was once a discussion largely confined to tech and academic circles has mushroomed into a more mainstream brawl as a growing number of engineers and lay authors vent on the acceleration of modern technology and the future of humanity. Given the explosive growth of the Net, the near-continuous increases in computing power and much-publicized A.I. breakthroughs like Deep Blue's 1997 victory over chess champion Garry Kasparov, the question is no longer whether artificial intelligence will reach the level of human intelligence: It's when.

As the title suggests, Williams's book is less about A.I. itself than about the increasingly ferocious debates raging through the scientific community about it. The conflicts surrounding A.I., Williams suggests, may be the most significant since the titanic battles over evolution a century ago. In fact, Williams is among those who've argued that the A.I. debate is really an extension of the same fight. Artificially intelligent machines are already changing human evolution, many argue, even evolving inevitably into life-forms and species all their own. A growing number of critics and skeptics also argue that A.I. proponents are moving too quickly, failing to take into account the mind-boggling cultural and philosophical problems being raised by their new, still-imperfect technologies.

Williams traces the contemporary birth of A.I. -- via Hilbert and Turing -- on to the living pioneer credited with coining the term (John McCarthy), and talks to several of the principals guiding the A.I. debate today, like Ray Kurzweil, Jaron Lanier and Bill Joy.

This is a necessary book. It's one you could actually recommend to students, journalists, friends, parents, anybody trying to grasp the issues and implications of A.I., surely one of the most significant technologies human beings will face in the 21st Century. Even if A.I.'s impact on life is being overstated, it's poorly understood by the public. So Williams walks us through inventor Kurzweil's almost radical optimism about A.I. and the future -- especially his claims that human society is rapidly approaching the evolutionary equivalent of a new species, a fusion of humans and intelligent machines. This is the point of no return when it comes to artificial intelligence, Kurzweil claims. "The progress will ultimately become so fast that it will rupture our ability to follow it. It will literally get out of our control. The illusion that we have our hand on the plug will be dispelled."

But Williams also introduces some of the people who don't see this as a good thing -- or even a likely development. Bill Joy is more pessimistic, as he made clear in his now famous article in the April 2000 issue of Wired, "Why The Future Doesn't Need Us." The piece thrilled technophobic intellectuals and journalists because it came from a software entrepreneur and reaffirmed something they desperately wanted to believe: technology -- especially genetics, bio-tech and robotics -- is out of control and likely to generate as much evil as good in the future. Joy sees little in the modern history of software development to suggest the emergence of sentient machines. His experience has led him to believe that it's difficult to build things that are reliable.

Jaron Lanier, whom Williams also interviews, coined the term virtual reality and once likened A.I. research to alchemy. Lanier accuses many in the A.I. firmament of choosing faith and hyperbole over science and reality. He likens the current tech obsession with A.I. to medieval scholars' attempts to prove the existence of God through Aristotelian logic. In their rush to endorse the concept of thinking machines, warns Lanier, many authors are putting scientific faith before scientific skepticism.

Williams does a skillful job of presenting these different points of view without intruding on them. It might have been nice to hear more of Williams's own thoughts and perspective, since he's one of the few journalists with this much understanding of and access to so many principals in the A.I. discussion. On the other hand, it may have been wise of him not to wade in amongst these A.I. heavyweights and their raging debate. "Arguing A.I." is as timely a book about technology as you're likely to come across, and, perhaps more surprisingly, highly readable.

  • I don't think that ACHIEVING A.I. is as important as all of the technological advances we will make along the way. It will be these advances in technology that will help the most in our day-to-day tasks, not having a robot that thinks like a person. We already have plenty of those...they're called humans.
    • by SirWhoopass ( 108232 ) on Tuesday February 05, 2002 @11:56AM (#2955596)
      The problem with AI is that it always seems unsuccessful. Any time an AI technology matures and becomes useful it is no longer considered "AI". Computer vision (face recognition), expert systems, even many modern strategy games would be considered amazing AI advances a few decades ago. They all arose because of AI research. Once they matured, however, they were no longer considered AI.

      AI won't be considered successful until we build HAL or Data, but the journey so far has been very useful.

      • The reason these technologies are no longer considered "AI" is that they never were actual artificial intelligence.
        When the original researchers in AI began, they saw that the bottom-up approach had a huge number of issues. So they ended up splitting into the computer vision, modeling, logic, etc. groups. The idea was that if we could figure out all of these individually, we could bring them together and show real intelligence. The problem is that as these individual technologies become more mature, the path for putting them back together is gone. We're seeing that this isn't the way to model real intelligence.
        There is a group [msu.edu], involving some major players, that is looking at other methods though. Personally this seems like a more viable approach.
      • Regarding the subject of your comment: if you call humans "intelligent", and you do not subscribe to the argument that there is some "soul" or non-physical essence that gives us consciousness, how can one believe that we won't ever achieve AI? It seems illogical to assume that humans will never be capable of duplicating something already in existence. The real question, if you ask me, is if we will find a way to do so that wholly differs from the organic model that has evolved on Earth, or whether we will just end up creating imitations (through emulation on an electronic platform or actual biological construction) of ourselves.
        • The point seems to me to be that, no matter how close to human a built machine would be, people would still insist that it's Not Really AI, and if you tried to explain otherwise, they'd either stick their fingers in their ears or insist upon tests that cannot be satisfied even in the case of humans. This will all be really stupid, of course, but that's what people will do.

      • I agree with the "receding horizon" comment of S.W. that once you've built it, it doesn't seem that intelligent anymore. However, I suggest the essential aspect of humans is that we are language animals (to paraphrase Steven Pinker). Therefore, when a computer exhibits useful & creative conversation, I will consider that to be A.I. This doesn't mean the 'parrot programs' like the Eliza psychologist that just reflect stock phrases back at you based on keywords in your input. I mean some true understanding, perhaps a dash of emotional insight, and saying something new and interesting (the creative part). A few expert systems can discuss narrow topics fairly well, but not much else, and are boring. Natural language understanding and creation has been an important objective of A.I. and C.S. for a half century, with very limited and disappointing results.
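
A minimal sketch of the keyword-reflection trick behind such 'parrot programs'; the patterns and stock phrases here are invented for illustration, and a real Eliza has a much larger rule set:

```python
import random
import re

# ELIZA-style "parrot program": match a keyword pattern in the input,
# then echo back a canned template. No understanding is involved.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"\bI am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)   # no keyword matched: fall back to a stock phrase

if __name__ == "__main__":
    print(respond("I am tired of arguing about AI"))
```
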
        • The horizon at which AI would be recognized as such actually began receding around 1642, when the philosopher/mathematician Blaise Pascal designed the first mechanical calculator. Prior to that, it was generally thought that calculation, like other forms of reasoning, was uniquely human. Then Pascal's family put him to work keeping the books on their business (wine-selling?). Bored stiff, he figured out how to use gears, levers, and ratchets to add. Oops, it doesn't take intelligence to do arithmetic.

          The second AI challenge may have been chess-playing. (There was a chess-playing machine on display around the same time, but there was a midget inside...) Computer programs reached grand-master level about 30 years ago, and specially-built machines can contend with human champions now. But that isn't intelligence either. The Deep Blue chess machine does NOT think things out like humans, but rather uses very simple heuristics to identify obviously bad moves, and traces out all the reasonable moves for 10 levels or more. Someday a computer will be able to play all possible chess games out within its memory -- it will be the perfect chess player, and with no more real intelligence than Pascal's gears.

          Various other useful AI accomplishments are similar to Deep Blue in how they relate to intelligence. An example where I have a bit of experience: automated visual inspection is a substitute for human inspectors, who get bored as hundreds of perfect parts go by and don't see the one bad one in the lot. It is not nearly as effective as a human who is paying attention, it often seems maddeningly stupid to the programmers and operators who have to deal with all the false alarms, but it doesn't get bored... Another example is the damned Microsoft paperclip help system -- it started out as a dog, but that implied too much intelligence, and now it just smirks at you while answering the wrong question.

          The _real_ AI challenge is the Turing test: hold up a conversation well enough that the humans in the chat room don't suspect it's a computer. This is very, very, very tough, and useful mainly as a publicity stunt. People don't want a computer that can simulate a human -- they want it to get the work done, without all the emotional issues you get with humans.

          At least one science fiction author (Melissa Scott?) has taken to calling it "Artificial Stupidity." That's a much more practical goal; besides it better expresses what we really want (smart enough to work, too stupid to unionize), and avoids the misleading expectations that come from "Artificial Intelligence".
      • The problem with AI is that it always seems unsuccessful. Any time an AI technology matures and becomes useful it is no longer considered "AI". Computer vision (face recognition), expert systems, even many modern strategy games would be considered amazing AI advances a few decades ago. They all arose because of AI research. Once they matured, however, they were no longer considered AI.

        The reason it is unsuccessful is the confusion caused by the different meanings of the phrase AI.

        Often AI just means research on a specific problem that humans are currently much better at solving than machines. Of course once the research is complete and the machine is better, it is no longer AI under this definition.

        Now if the solution is largely motivated by what we know about how humans work then perhaps there is still a glimmer of AI in the research. However, this is a hard argument to make since we don't know how the brain works. In fact, often there are many reasons to think the solution isn't similar to the brain. There are many ways to skin a cat. For example, I doubt human chess masters search a game tree with alpha-beta pruning; however, this is a way for computers to solve the problem that, with today's hardware, gives them superior performance.
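
A bare-bones sketch of that game-tree search, minimax with alpha-beta pruning over an abstract game; the Game interface used here (moves/apply/score/is_over) is hypothetical, and a real chess engine adds move ordering, iterative deepening, and a far richer evaluation function:

```python
def alphabeta(game, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `game` searched to `depth` plies."""
    if depth == 0 or game.is_over():
        return game.score()                 # static evaluation of the position
    if maximizing:
        best = float("-inf")
        for move in game.moves():
            best = max(best, alphabeta(game.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:               # opponent will never allow this line: prune
                break
        return best
    else:
        best = float("inf")
        for move in game.moves():
            best = min(best, alphabeta(game.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:               # we would never choose this line: prune
                break
        return best
```
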

        AI won't be considered successful until we build HAL or Data, but the journey so far has been very useful.

        This is a different notion of AI. It fits more into the natural definition of AI, where AI is the creation of human-like intelligence. In this case, you need the whole enchilada (or at least an interesting percentage) to get intelligence. You can't just pick and choose certain problems. This definition is more in line with the Turing Test. Unfortunately this is a very hard problem for obvious reasons. At one time more people worked on this problem, but when nobody got good results, the funding started to dry up. That's why people switched to the previous definition.

        Some people still work on the grand AI problem, but as another poster pointed out, it is generally on a small piece with a story about how it can be connected to other pieces to create a real AI. Generally they pick a piece that might be commercially useful in its own right such as vision or linguistics. Again this helps with funding. Unfortunately, I don't think anyone works on tying these systems together. (Probably because there would be a whole mess of problems if they tried.)

    • "Singularity", the time when AI exceeds humans, will be the most important event in human history. Imagine the prospect of science accelerating exponentially as machines build faster, smarter machines. It's unfortunate that it's still a long way off.

      Robots replacing humans in day to day tasks is a process begun quite a while ago, and will proceed. But it's really not that exciting. Lots of people will end up "no-jobbed", but society will adapt. We'll find better things to do than sweeping - like thinking.

      It's when machines start thinking better than we do that things will really change.
  • by nixadmin ( 553533 ) on Tuesday February 05, 2002 @11:49AM (#2955549)
    One thing that's always bothered me about the AI debate is that the thinking for a long time has centered around how to model intelligence on silicon. To me the true marvel of the mind is the holographic quality of intelligence and the way in which the physical form of the brain influences, and is shaped by, the quality and nature of one's thoughts. It will be exciting to see what part the new polymers can play in this research.
    • That is patently untrue. The AI debate has absolutely nothing to do with hardware. A general purpose computer based on silicon is used because it is a general purpose computer and can be used to model any computational task.
      • A million people with pencils and paper can also be used to model any computational task. Are you proposing that such a system could somehow create a new self-aware intelligence independent of any of the individual pencil pushers? It's hard to imagine, as the physical embodiment of such a system is nothing more than patterns of graphite scribbled on paper.

        A Turing machine (which is computationally equivalent to both silicon computers and paper-and-pencil algorithms) has been proven to be able to compute a certain subset of mathematical proofs. I have doubts that this necessarily implies that it can model every phenomenon in the physical universe. It is possible that a brain uses some to-be-discovered process that goes beyond a simple Turing machine.

    • by gwernol ( 167574 ) on Tuesday February 05, 2002 @01:45PM (#2956411)

      One thing that's always bothered me about the AI debate is that the thinking for a long time has centered around how to model intelligence on silicon.



      Actually this is not true, for example an early AI system was constructed to play tic-tac-toe using matchboxes and marbles. No silicon at all.
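
A rough sketch of how such a matchbox-and-marbles learner works (in the spirit of Donald Michie's MENACE): each board state is a "matchbox" holding beads for every legal move, a move is drawn at random in proportion to the beads, and the beads are reinforced or punished once the game ends. The board encoding, bead counts, and game loop around it are made up for illustration:

```python
import random
from collections import defaultdict

boxes = defaultdict(dict)          # state -> {move: bead_count}; state is any hashable board encoding

def choose(state, legal_moves):
    """Draw a move for `state` with probability proportional to its bead count."""
    box = boxes[state]
    for m in legal_moves:
        box.setdefault(m, 3)       # start every legal move with a few beads
    moves, weights = zip(*box.items())
    return random.choices(moves, weights=weights)[0]

def learn(history, won):
    """history is the list of (state, move) pairs played this game."""
    delta = 1 if won else -1
    for state, move in history:
        boxes[state][move] = max(1, boxes[state][move] + delta)   # never remove the last bead
```
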

      One of the fundamental results of computing (discovered by Alan Turing, the first researcher in the field of AI) is that there is a basic set of computable functions. It doesn't matter what hardware you use, the set of things you can compute is ultimately the same. An interesting question is whether human-like intelligence is a combination of functions from the computable set or not. People like Roger Penrose argue that there is something more than computable functions going on in the human brain (he calls it the "divine spark"). In my opinion that's nonsense.

      If an AI system can be built using computable functions it doesn't matter what hardware you execute it on (apart from performance issues). The results will be the same.
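
A toy illustration of that substrate-independence point: a tiny Turing-machine simulator. The same transition table produces the same answer whether it is worked through on silicon, on paper, or with matchboxes; only the speed differs. The example program (a unary incrementer) is made up for illustration:

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine given as {(state, symbol): (write, move, next_state)}."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Unary incrementer: scan right over the 1s, write one more 1, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_tm(rules, "111"))   # -> "1111"
```
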

      To me the true marvel of the mind is the holographic quality of intelligence and the way in which the physical form of the brain influences, and is shaped by, the quality and nature of one's thoughts.



      You should look into neural net research. This uses massively parallel networks of artificial neurons to simulate the real structure of the brain. It's an important branch of AI research. Of course neural networks can be completely simulated on traditional computer hardware. Again, the hardware is not the key; it's totally down to the software you run.
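
To make that concrete, here is a miniature network of artificial neurons simulated entirely in ordinary software: two inputs, a small hidden layer, one output, trained by gradient descent to compute XOR. The layer size, learning rate, and iteration count are arbitrary choices for illustration:

```python
import math, random

random.seed(1)
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

HIDDEN = 4
# Each hidden unit has weights [w_x1, w_x2, bias]; the output unit has one weight per hidden unit plus a bias.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
    o = sigmoid(sum(w_out[i] * h[i] for i in range(HIDDEN)) + w_out[-1])
    return h, o

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR truth table
LR = 0.5
for _ in range(30000):
    (x1, x2), target = random.choice(data)
    h, o = forward(x1, x2)
    d_o = (o - target) * o * (1 - o)                   # error gradient at the output neuron
    for i in range(HIDDEN):
        d_h = d_o * w_out[i] * h[i] * (1 - h[i])       # gradient backpropagated to hidden neuron i
        w_out[i] -= LR * d_o * h[i]
        w_hidden[i][0] -= LR * d_h * x1
        w_hidden[i][1] -= LR * d_h * x2
        w_hidden[i][2] -= LR * d_h
    w_out[-1] -= LR * d_o

for (x1, x2), _ in data:
    print((x1, x2), round(forward(x1, x2)[1], 2))      # outputs should approach 0, 1, 1, 0
```
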



      By the way, what do you mean by the "holographic" nature of intelligence? I don't understand what you are trying to imply with this term.



      It will be exciting to see what part the new polymers can play in this research.



      In my opinion, none, except perhaps to give us faster computers. They can do nothing to change the fundamental computations that are taking place.

  • wrong topic (Score:3, Insightful)

    by gTsiros ( 205624 ) on Tuesday February 05, 2002 @11:51AM (#2955562)
    This isn't about technology. This is about philosophy. The question that arises is:
    is a machine that to a human appears to be human, human?
    • Re:wrong topic (Score:4, Interesting)

      by Grab ( 126025 ) on Tuesday February 05, 2002 @12:03PM (#2955640) Homepage
      The movie D.A.R.Y.L. said this even better:

      "A robot becomes human when you can't tell the difference any more".

      That one film influenced me more than all the other sci-fi films I ever saw as a kid. It's the only one that really got that concept and went for it. OK, Asimov did it first ("Bicentennial Man") but cinema still hadn't really got there.

      Grab.
      • Re:wrong topic (Score:2, Interesting)

        by mi ( 197448 )
        "A robot becomes human when you can't tell the difference any more".

        Arguably, that's exactly when a human becomes robot...

    • Re:wrong topic (Score:2, Insightful)

      by transient ( 232842 )
      is a machine that to a human appears to be human, human?

      and perhaps more importantly, does it matter?
  • I'm doubtful (Score:3, Insightful)

    by TrollMan 5000 ( 454685 ) on Tuesday February 05, 2002 @11:51AM (#2955566)
    Or is A.I. yet another overhyped, self-serving fantasy by deluded scientists and technocrats talking mostly to one another, foisting their ill-conceived, poorly-engineered creations on an unsuspecting public?

    I tend to agree. I'd like to see something using AI play in a poker game. Can AI ever simulate bluffing? Or analyze the expressions on the other players' faces to determine if they are bluffing, and call the bluff? Human intelligence can do this, but I'm not sure if something this complex exists now, or ever will.

    Chess is one thing. It follows a certain set of rules. Even conversation does, but it also involves human expression like the bluffing example. But to play out a scenario given a unique situation, machines are not up to the task yet.
    • Bluffing is pretty easy when you have complete control over your appearance. Bluffing is such an art in humans mostly because novices are so bad at it (they sweat, look around differently). And when to bluff is something you could write a good algorithm for (not exactly a big chore for a highly advanced intelligence).
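
As a toy illustration of how simple a bluffing rule can be: with a pot of size P and a bet of size B, game theory suggests bluffing just often enough that the caller is indifferent, i.e. bluffs should make up roughly B/(B+P) of all bets. The hand-strength threshold below is an arbitrary stand-in for a real hand evaluator, so this is a sketch rather than a serious poker bot:

```python
import random

def should_bet(hand_strength, pot, bet):
    """Decide whether to bet, mixing value bets with occasional bluffs."""
    if hand_strength > 0.8:                  # strong hand: always bet for value
        return True
    bluff_ratio = bet / (bet + pot)          # indifference frequency for the caller
    return random.random() < bluff_ratio     # occasionally bet a weak hand as a bluff

print(should_bet(hand_strength=0.3, pot=100, bet=50))   # a weak hand bluffs here about one time in three
```
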

      Analyzing another face might be hard, but it's infinitely easier than passing a Turing Test. Have you ever heard of a lie detector? See any parallels? With a little work, I'm sure something like this could be put together using only today's technology.

      If a machine as smart and adaptable as Data existed, it would bankrupt Riker - easy.
    • There was a significant amount of research done in AI Poker about a decade ago. Sorry, no references.

      One of the interesting things about the instance where Deep Blue beat Kasparov was how it happened. Kasparov became freaked out, saying that the moves were like a human player and not a machine. Whether they were or not, or even whether "like a human player" is a meaningful concept, is not the point. The point is that, effectively, Deep Blue psyched Kasparov out.

      • The point is that, effectively, Deep Blue psyched Kasparov out.

        What's more accurately said is that the programmers used Deep Blue to psych Kasparov out. I doubt there was a routine in Deep Blue called "Psych_out_Kasparov".

        • What's more accurately said is that the programmers used Deep Blue to psych Kasparov out. I doubt there was a routine in Deep Blue called


          No, that's less accurate. Deep Blue psyched out Kasparov. The programmers did nothing once play began; they taught it to play, but once it was playing its actions and choices were its own; the programmers no longer had any role whatsoever.

          • If a man sets a claymore mine and later someone trips the switch, would you say the claymore mine is guilty of murder?
            • In a sense, yes. The mine is what killed the person who stepped on it. But that's also where the division between AI and a tool comes in. A mine's decision process is a single step: "have I been triggered or not", while chess playing is a multi-tiered process where decisions are made among thousands of shades of gray.
              I think your analogy is an oversimplification of the matter at hand. Is a murderer responsible for his actions when the decision was made by a subset of the neurons in his brain, when it's possible that one and only one triggering neuron pushed him over the edge? Or are his parents, since they are the ones who created him and "set" the mine?
          • Re:AI in Poker (Score:3, Interesting)

            by DaoudaW ( 533025 )
            The programmers did nothing once play began

            Actually that's not true. Part of the controversy surrounding the match was that the programming team, including some grandmasters, were constantly tweaking Deep Blue, even during games. In addition, they reserved the right to select from Deep Blue's top choices. The match was far from the Man vs. Machine match that was marketed.
    • So, we're too complex for a fairly "new" field. I doubt "God" engineered us in a few decades.

      Don't knock on AI until you understand it. Everything in the world can be simulated with an algorithm; it's just a matter of how many millions, billions, or trillions of lines of code it takes.
    • Re:I'm doubtful (Score:2, Insightful)

      by fiftyfly ( 516990 )
      I tend to agree. I'd like to see something using AI play in a poker game. Can AI ever simulate bluffing? Or analyze the expressions on the other players' faces to determine if they are bluffing, and call the bluff? Human intelligence can do this, but I'm not sure if something this complex exists now, or ever will.

      hmmm.
      How good, do you think, would your human intelligence be at figuring, say, a dolphin's bluff? Or some completely alien intelligence? What about a hypothetical being with little or no physical being/experience, like a computer?

      Personally, I think you'd fail miserably. I've had the good fortune to come to know a Persian family rather well (over the last 10-ish years). I have immense difficulty knowing when Hooshang is "yanking my chain", simply because my cultural heritage doesn't happen to share a whole lot with that of a nomadic theocracy.

      Seems a bit much to expect competence from the other side of the fence, eh?

  • Two things (Score:2, Funny)

    by ackthpt ( 218170 )
    The impact AI will have on the future is vastly underestimated.


    AI will most likely see first use in the phone-sex industry. Think about it. Adult entertainment is the first to embrace advancements in technology.


    To see where AI is going you have to stop staring at the algorithms, take a step back, and see what mundane things you'd like someone else to look after for you.


    "Hi, Honey, I'm home!"

    "You're certainly home early!"

    "Well, we had a change in staffing at work."

    "Oh, no! Don't tell me you were replaced by a computer?!?"

    "No, they replaced my computer with a cyborg, now my job is to have a deep philosophical discussion with it to boot it up each morning."

  • My thoughts (Score:3, Interesting)

    by Wind_Walker ( 83965 ) on Tuesday February 05, 2002 @11:55AM (#2955592) Homepage Journal
    You know, I've done quite a bit of thinking on the matter of AI, and I've come to the following predictions:
    • Within 50 years, there will be a computer that will pass the Turing Test. For those of you who don't know (and I hope nobody is in this category on Slashdot :-) the Turing Test is basically making a computer indistinguishable from a human being. A tester will ask questions and will be unable to determine whether a computer or a human is answering them.
    • Within 50 years after that (100 years total), computers will be able to parse speech flawlessly, so voice recognition will finally end up being plausible. Computers will understand the nuances of speech and will be able to choose between homonyms (here and hear) based on the context of the sentence.
    • Within 50 years of that (150 years total) we'll have computers that can respond to voice commands like in Star Trek. The computer will not only understand the syntax of language, but it will be able to determine, on its own, the difference between a question asked in conversation and a question asked to the computer in conversation.
    Of course, these are just random guesses on my part, but I really think that they're reasonable. Give me your thoughts, please.
    • All this stuff was supposed to be accomplished in the last 50 years. I guess, like any computer project, it's behind schedule...

    • I think the latter two seem reasonable, but it seems to me that passing the Turing Test is the most difficult of the three. Taking your example of the computer on Star Trek, it could parse speech, and it could probably maintain a conversation for a while, but it had no understanding of emotion, poetry, or art in general. It would be forced to answer a question about such topics with "I don't know what you mean" (or "Does not compute"). After getting the same (or similar responses) several times, I would begin to suspect that I was not talking to a human. I think we will have good voice recognition and generation a good bit before we have an AI that can pass the Turing Test.
      • Taking your example of the computer on Star Trek, it could parse speech, and it could probably maintain a conversation for a while, but it had no understanding of emotion, poetry, or art in general.

        Um, that leads to the question of how many geeks would pass a Turing test... ;-)
      • ...it could probably maintain a conversation for a while, but it had no understanding of emotion, poetry, or art in general. It would be forced to answer a question about such topics with "I don't know what you mean"...

        On the other hand, if you were to ask a question that required an understanding of emotion (other than anger or ego), poetry, or art on /., what percentage of the time would you get "I don't know what you mean" as an answer?

        A program could have Roget's Rhyming Dictionary hard coded and probably do a better job of analyzing poetry than I could. Scan in ten years of _Poet's Life_ magazine, add a nice randomizing hack that keyed off of your questions, and it could "talk" (or at least parrot back) poetry analysis better than I ever want to. However, I don't think such a program would be "intelligent".

        I think we are similar to the engineers that designed Deep Thought in Douglas Adams' book. We are asking the equivalent of "What's the meaning of life" but we don't really know what the question is...

    • by Tenebrious1 ( 530949 ) on Tuesday February 05, 2002 @12:19PM (#2955757) Homepage
      I'm no expert, but I think you've got it backwards.

      First, computers will recognize voice commands. Well, there are already programs that do this, like Dragon, so we're almost there anyway. The point now is that you are still giving keyword commands to a computer, and as it is refined, you'll get better recognition of specific commands and of questions that can be filtered from within conversations. Giving commands to a computer is easier than putting open-ended questions to it.

      Second, we'll solve the natural language problem, or at least enough to provide the flawless voice recognition you speak of; the main remaining work will be handling accents and bad grammar.

      Lastly, a computer will pass the Turing test. Unless a computer can understand the intricacies of the English language, there will be people who will be able to tell by the way the answer is phrased. If you solve the NLP problem, or get far enough for a computer to analyze and spit back poetry, then you've got the Turing test licked.

    • We've had computers pass the Turing test already, in a limited subject discussion [harvard.edu]. We've also had humans fail it. I don't think there is anything which could stop us creating a general Turing test program today. However, I don't think that the Turing test is going to be a good indicator of useful AI, because just like the page I quoted says, "People are easily fooled".
    • Eh? A computer that could pass the Turing Test would surely be able to respond to commands, so that once you have useful voice recognition, just pipe its output to the Turing Test passer...no need to wait another fifty years!

      I'm hoping that we'll advance much faster than you think (see discussions of Vinge's Singularity). (Heck, I just cut the time down by a third, just by using the Unix tools philosophy. :-)

  • by Aldern ( 34174 ) on Tuesday February 05, 2002 @11:59AM (#2955615) Homepage

    It's always seemed funny to me how the technologists take this field, which is tied irrevocably to philosophy, and ignore everything the philosophers say about it. For example, has there ever been a good refutation of Searle's Chinese Room argument?

    Another of Searle's arguments is pretty damning as well; those that pursue strong AI are, in fact, favoring a form of dualism. For them the mind is completely separate from the brain, an idea that has been pretty much discarded by the thinking public. Why is it, when computers are concerned, that the mind is no longer a product of a brain?

    • What's the Chinese Room argument?

      The mind is what the brain creates through its functions. The brain is an organ. Its job is to store and process information. If it's not doing that (i.e. I'm dead or in a mechanically-sustained state, a coma), do I have a mind? The two are interdependent.

      Anyone who is not a creationist, "humans are special" type is going to consider the brain to be just an organ. Or so I thought.
    • I, for one, have yet to hear a compelling version of the Chinese Room argument. The version I have heard has a non-Chinese speaking human in a room, with a list of rules (in a language the human understands) for processing Chinese characters, which he uses to generate additional Chinese characters. The human dutifully does this, and in the process, ends up reading a story in Chinese and then answering questions (also in Chinese) about it, all unknowingly. Searle (or his caricatures, anyways) then point triumphantly to the man, proclaim "but he doesn't know Chinese!!!", and then sit back smugly as though they had refuted something important.

      It is totally obvious to me, anyways, that the man is not required to know Chinese any more than my Pentium III is required to know LISP -- the man is the one component of a system which, as a whole, evidently does understand Chinese.
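
A toy rendering of that "system" view: the clerk below blindly matches incoming symbols to outgoing symbols from a rule book; whatever understanding exists belongs to the rule book plus the procedure, not to the clerk executing it. The rules are placeholders for illustration, not a serious model of Chinese conversation:

```python
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",          # "How are you?" -> "I'm fine, thanks."
    "故事里有几个人?": "三个人.",          # "How many people are in the story?" -> "Three."
}

def clerk(symbols: str) -> str:
    # The clerk only matches squiggles to squiggles; nothing in this function "knows Chinese".
    return RULE_BOOK.get(symbols, "请再说一遍.")   # "Please say that again."

print(clerk("你好吗?"))
```
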

      As for the mind/brain connection, this seems to be the same misunderstanding -- the mind is software, and one of the open questions is the degree to which this software is platform-dependent. Searle (again, perhaps only Searle's caricatures) seems to think, more or less axiomatically, that the mind can only run on the meat-machine, but seems to offer no evidence.

      I welcome more sophisticated versions of Searle's arguments, if you've got 'em.

      -- A.
      • Not exactly.

        The specific point Searle is making is based on a presumption that abstract symbol manipulation (the kind that computers perform) is "neither constitutive of nor sufficient for semantics." This is what most of the attacks have gone after, but to my knowledge unsuccessfully.

        You are correct in saying the man is not required to know Chinese any more than the processor knows LISP. But do you say the system - the processor and software code - "understands" LISP? Of course not - it can process it, yes (manipulate the symbols). Does it "understand" in our traditional use of that word? No. Then comes the analogy to the system of the man in the room and the rules themselves. Somehow this "system" understands Chinese? Not in the least - it is merely able to manipulate symbols in a manner that satisfies an external observer.

        To say that consciousness can be created simply by instantiating a program is (according to Searle) a flawed proposition. He never said that machine consciousness is impossible as a whole, and he never said that human meat-machines are the only possible consciousness; he merely said that a program cannot be.

        BTW, the quote is from "Is the Brain's Mind a Computer Program?" by Searle, in Scientific American, January 1990. It ran alongside an attempted refutation by the Churchlands, and it's a clearer illustration of the principle than Searle's original paper.
      • It is totally obvious to me, anyways, that the man is not required to know Chinese any more than my Pentium III is required to know LISP -- the man is the one component of a system which, as a whole, evidently does understand Chinese

        This is called the "Systems Reply" and is anticipated and refuted in the original Chinese Room paper ("Minds, Brains and Programs"). It is always a touchstone of geek arrogance that they believe themselves to have come up with a new and definitive refutation of Searle, and it's always this one.

    • Searle (Score:3, Informative)

      by epepke ( 462220 )

      The most obvious problem with the Chinese Room metaphor is that it confuses the properties of a system with the properties of an element of the system. Asserting that the guy in the room does not know Chinese is about as interesting as asserting that a single neuron in your brain does not know English. Since we've known not to make that mistake for at least 3000 years, there really isn't much excuse.

      Perhaps people are fooled because there's a guy in there, and despite all evidence to the contrary, people expect guys to know what they're doing. Or, perhaps people don't know how to think. In any event, "refuting" an argument requires that it be an argument, and that is not the case here. It also requires that the person receiving the refutation have a certain grasp, and I find it difficult to believe anyone with such a grasp could fail to see it as bogus during the first read-through. It is hard to refute "deedle deedle queep."

      But, anyway, my favorite discussion of this is "Backtracking: the Chinese food problem," Lou Hoebel, Chris Welty, intelligence March 1999, 10:1.

      There is also a decent discussion in The Universal Computer: The Road from Leibniz to Turing, Martin Davis. [amazon.com] This is an excellent book all around.

    • Note: this will likely make no sense to you if you've never read Searle. A summary of the Chinese room argument can be found here [helsinki.fi].

      There are plenty of decent refutations of Searle's argument. Douglas Hofstadter's is the funniest, if only because he's so hostile about it (I don't have a reference handy, but the phrase "matched in its power to annoy only by..." floats out at me).

      Searle's argument is actually pretty bad, in my opinion, and I'm only an armchair philosopher. His refutation of "the system argument" (that the combination of book, paper, and guy reading book understands Chinese) amounts basically to two points: nothing within that system understands Chinese, and systems don't understand things. But systems do understand things: I am a system of various parts, but my relevant parts (medulla oblongata, eyes, hippocampus, whatever) don't understand things. I understand things: I am more than the sum of my parts.

      It's ironic that Searle can accuse AI researchers of pursuing a dualist argument. Most everyone I know favoring strong AI believes wholeheartedly that, as you say, mind is a product of brain. What they don't believe is that brains are magically endowed by God to be the only things capable of producing a mind. (Note: they don't attribute this capability to rocks and stuff.) Searle goes on and on about how AI, no matter how close to human behavior it may come, will never be truly intelligent because it will not possess "intentionality" - it can tell you that 2+2=4, but it can't really understand it, can't really mean it, but he never goes on to say why. ("Why can't it understand stuff?" "Because it doesn't have intentionality." "What's that?" "The ability to understand stuff.") If that's not a dualist view, I don't know what is.

      Bottom line, where I'm concerned: we still don't understand what it really means to think, to be intelligent. Searle's argument is essentially that just as a computer simulation of a rainstorm won't get you wet, a computer simulation of intelligence won't be smart. But that doesn't make sense: rainstorms involve water, while intelligence... what? What can you say about an intelligent entity that isn't based on its external characteristics? It's a fascinating question, but Searle ignores it in favor of "intentionality," something which isn't observable (except to its owner) in any way. He takes the really tough, interesting question, and slips in a straw man to knock down. And that's just, as Hofstadter said, annoying.

      • Actually, Dennett would say that "intentionality" is only observable to entities outside the supposedly intentional system. His book on intentionality is older than Consciousness Explained but to my taste has a much more solid argument: intentionality is a label that observers attach to objects to explain their behaviour, and it doesn't necessarily correspond to any internal phenomena at all. Of course, since most of us verbally report the experience of having intentions, there is more to this argument than meets the eye, and for that reason alone, his book is recommended.
      • What they don't believe is that brains are magically endowed by God to be the only things capable of producing a mind

        Nor does Searle believe this, and Dennett lost a lot of respect in my eyes for continuing to claim that he does. Searle is completely agnostic about what sort of thing could produce a mind; he just asserts that nothing produces a mind by virtue of its status as a Turing Machine

  • Or is A.I. yet another overhyped, self-serving fantasy by deluded scientists and technocrats talking mostly to one another, foisting their ill-conceived, poorly-engineered creations on an unsuspecting public?


    Oh, please! That sounds like one of those typical rants against science, where science works hard, and either a rogue scientist with green eyes, or some company, takes their work and hypes it to the "unsuspecting public." Among the scientists who do AI that I know (5 CS faculty), none of them seem to have deluded fantasies about what current AI, especially the AI they're working on, can do. They don't benefit from making promises that they cannot follow up on -- corporations do.

  • I don't think you can get AI working on a normal von Neumann architecture. Sure, you could use that architecture to simulate the machine that would work, but hoping to find human-like intelligence without using neural networks is, IMO, crazy.

    Another requirement would be senses that mimic human senses. I'm amazed that people think you can simulate human-like intelligence without using nearly the exact set of sensory input. Dolphins are clearly intelligent creatures, but we can't talk to them... and I think it has to do with sensory input.

    Lastly, you won't be able to program an AI. It has to be grown. Human intelligence takes years of sensory input, filtering, communication, and response analysis to work.

    Starting with the right neural network and training it like you would an intelligent child seems the right approach.

    Your opinion may differ, but that's mine.
    • I'm hopeful that sometime in the future we can define all the parameters of intelligence, and the filtering of years of learning down to one level, so we can determine exactly what the peak level of intelligence for human beings is.

      That will lead to a holocaust of unintelligent people, which will only serve to make our world a better place. We've exhausted the utility of the individualism paradigm. It has no usefulness as far as getting things done is concerned. Humanity increasingly engages in such complex tasks that one person can't do anything to affect them by themselves. It's sad, but humanity's only chance to survive is to merge into one entity, and AI and intelligence research is the only way to do it.

      I'm half joking..
  • I work researching Artificial Intelligence, and I can tell you firsthand that these are not just fantasies. In the future, with advances like nanotechnology and quantum computing, it will be much, *much*, easier to write a complex AI in a small space. I mean, what are humans but computers? We have our central processor unit and several other hi-tech gizmos. But, we are organic, and this causes many problems. It is easy to become diseased and pass on. But, with quantum computing and nanotech, we will be able to do much more complex things without all the bugs and hassles of organic computing, which is humans!
    • I think AI mainly needs a breakthrough - a new way of approaching the whole problem. As you suggest, we'll need a lot of computing power - but even with much more computing power I don't think current algorithms would be capable of the sort of learning and problem solving that humans are.

      Efforts to solve the Turing test are a boondoggle right now. Instead of hacking at the real root of AI, they're whacking at leaves like ambiguous meanings and localizing events and states in space-time.

      I believe there's an algorithm which would be able to learn these kinds of concepts without being led by the hand. And even if today's computers would take eons to learn English using it, I think it's what we need to concentrate on. Is it some sort of neural net? Is it a way of evolving an algorithm?

      Is it something nobody has even dreamed of, some code that runs in our brain a million times - the rules of getting from "problem" to "solution"?

      We'll find out I guess.

    • The question of whether "humans are computers", or rather, whether or not all of the functions that constitute human intelligence are possible within the confines of a Turing machine, is far from definitively answered. Part of the problem is that we don't have a working definition of "human intelligence" because we haven't successfully reverse engineered our own brains yet.

      Until that happens and we start answering these fundamental questions, then the debate about whether strong AI will occur and whether robots will rule the earth (hail King Bender), will remain the domain of science fiction authors and Latte Drinkers.
    • Who is the fool who moderated the parent post as "funny"?
      It was actually one of the few posts in this discussion to say some informative things, even if they are pretty straightforward if you know something about the field.
  • Just as a side note: several founders of A.I. -- John von Neumann, John McCarthy, and Marvin Minsky -- were in John Nash's cohort at Princeton. All are mentioned at various times in the book version of the movie.

    Nash's thesis on the equilibrium point is related to the most common algorithm used in A.I. games like chess.
  • AI, hey? I still remember those immortal Dr. Sbaitso words. At the time, he came free with my sound card. Unfortunately he choked on his own words and died.

    "My name is Dr Sbaitso, I am here to help you. Say whatever is in your mind clearly."

    Well I sure hope that this phrase isn't patented.
  • I find it interesting that none of the "debaters" apparently referenced in the book (Ray Kurzweil, Bill Joy, and Jaron Lanier) are workers in the field of AI. Not that their viewpoints are any less valid, but based on the review, this entire book presents the debate without representation of any active members of the AI research community -- whether that community is narrowly (and parochially) or widely defined.

    It's kind of like having Congress and lobbyists "debating" the social, legal and ethical issues in open source software -- I'm sure they have opinions, and they're certainly entitled to them, but you have to wonder if their opinions have any relationship to the technical realities of the field.
  • by Komodo ( 7029 ) on Tuesday February 05, 2002 @12:20PM (#2955767) Homepage
    The lead-in to this story somewhat disturbed me, independent of the content.


    Or is A.I. yet another overhyped, self-serving fantasy by deluded scientists and technocrats talking mostly to one another, foisting their ill-conceived, poorly-engineered creations on an unsuspecting public?


    The general public is not now, nor has it EVER been, part of the dialogue of Science. Here I mean science as an institution, in the way that banking and marriage are institutions.

    The dialogue in science is people publishing papers. These papers are peer-reviewed by other people who also publish and have 'scientific credibility'. Scientific credibility is gained by publishing good papers and having academic credentials. There's a book by Bruno Latour that describes a 'scientific economy' based on credibility.

    As such, the general public may be a spectator to the dialogue of science but does not participate, as the 'general public' isn't publishing and therefore isn't part of the economy.

    The public gets disappointed when science doesn't live up to claims that they read into the dialogue which is, frankly, not taking place in the Real World anyway, and it's a mistake to expect that it should produce anything the Real World can use.

    It's the public that PULLS things from the realm of science, develops expectations, and tries to change the Real World with it. Sometimes it works. Sometimes it doesn't work. You can't blame science for those failures.

    Now, science isn't perfect. The landscape of debate is subject to bloody revolutions in paradigm, like the changes from Ptolemy to Galileo to Newton to Einstein and beyond. Scientists play politics, too, and sometimes lose their objectivity when reviewing papers for publication. It doesn't change the Real World. Over the last 30 years, there have been a dozen opinions and 'proofs' on whether the Universe will expand forever, collapse in a 'big crunch', or eventually stop and stabilize. So what? Life goes on here on Earth. Nobody's jumping off of buildings because astronomers tell us one day the Sun will swallow the earth (oops... they changed their mind on that one, too! Did anyone notice?)

    The usefulness of this review or the book it talks about is diminished and tarnished for me by such a sensationalistic lead-in. Many, many Slashdot readers are familiar with the division between the general public as users of computer systems, and their own roles as the makers and maintainers of those systems. We never stop bitching about clueless users, 'we' always know better what to expect out of our machines than 'they' do, etc, etc. Ha ha. Very funny.

    Stop and think for a minute why that happens. When your users expect things you didn't promise, is it because they read things into your claims you didn't intend? Is that your fault or theirs? Who do they blame for it? Who do YOU blame for it?

    It cuts both ways, people. If you don't want science to disappoint you, don't expect it to do things it isn't meant to do. You may play chess better than your cat, but you'd look pretty stupid if your cat asked you to catch a mouse.
    • As such, the general public may be a spectator to the dialogue of science but does not participate, as the 'general public' isn't publishing and therefore isn't part of the economy.

      I agree with what you're saying, but I think it's also worth mentioning that "scientists" and the "general public" are not mutually exclusive sets.

      Scientists themselves are also part of the public, and can be just as guilty of misunderstanding when it comes to subjects that are not directly in their sub-field of science. It's everyone's responsibility to educate themselves on those subjects in which they have strong opinions.

      This reminds me of when I was in university and the professor was teaching that it's up to the general public to make the moral decisions on how to make use of computers; that it's not for us computer scientists to do that. That never rang true with me, because I'm just as much a member of the public as anyone else.

      I've always felt that if I have beliefs, it's my democratic duty to make them heard. The fact that I'm a computer scientist doesn't exclude me from this responsibility, regardless of the field in which the opinion is held.

      Sorry, I got into a bit of a rant myself. It was a general rant, not a rant against you, or anything.

  • Ignored Aspects (Score:2, Insightful)

    by Irvu ( 248207 )
    Note: I am an active AI research programmer, so my opinions are those of someone committed to the field.

    Begin.rant;
    The key problem that I have with current AI debate is not that it is case-based but that it is centered on a limited number of cases.

    AI is a broad field that encompasses everything from Deep Blue to more esoteric work on "building brains". There are researchers who are attempting to "remake humans", researchers like myself who are studying specific aspects of intelligent behavior, researchers who use AI to model and understand (but not replace) human intelligence, and researchers true to Turing who simply want to make systems that behave intelligently.

    Yet, whenever debates about AI come up people seem to invariably center on "major cases" such as Deep Blue, Cycorp, and the spectre of Rossum's Universal Robots. As a result researchers whose sole goal is to understand how humans think are lumped in with people who seek to build armies of slave drones.

    I have not read the book in question and this is not intended as a critique of the author in specific. Yet I don't hold out much hope that any single source can encapsulate so vast and multivaried a field, or that any single argument applies to all of "AI".
    End.rant;
  • Joy sees little in the modern history of software development to suggest the emergence of sentient machines. His experience has led him to believe that it's difficult to build things that are reliable.

    Well, my experience (while not as monumental as Joy's) has led me to believe that sentience has hardly anything to do with reliability. For a stereotypical example, consider the absent-minded scientist. I know many a brilliant person who could never find their keys.

    -"Zow"

  • by dido ( 9125 ) <dido&imperium,ph> on Tuesday February 05, 2002 @12:24PM (#2955794)

    I wonder if he talks about Professor Rodney A. Brooks [mit.edu] at MIT [mit.edu] and his ideas about artificial intelligence, situatedness, and embodiment.

    For Rod Brooks, "intelligence" cannot really be programmed into a system; it is rather an emergent property of systems as they interact with their environment. In The Matrix, Morpheus says that the body cannot exist without the mind, but Brooks would rather say that the mind cannot exist without the body, because the body is the only way that the mind can have any experience of its environment. It's a radical idea. It answers the problems with knowledge representation that Hubert Dreyfus has argued since 1965, namely that any representation of knowledge is incomplete without its connection to all other pieces of knowledge. The paradigm Brooks is presenting in his ideas about embodied intelligence is that explicit representation of knowledge is superfluous: let the world itself be its own best model, and let the artificially intelligent being formulate its own judgments about what the world is and what it means from its own experience of that world. Intelligence emerges from its interaction with and experience of the world. If Brooks is correct, then true AI is absolutely inseparable from robotics.
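
A minimal sketch of the behavior-based style Brooks advocates (his subsumption architecture): simple layers coupled directly to sensing, with one layer's output able to override another's. The sensor fields, actions, and the simple priority scheme standing in for Brooks's suppression wiring are all invented for illustration:

```python
def avoid(sensors):
    """Reflex layer: never drive into things."""
    if sensors["obstacle_ahead"]:
        return "turn_left"
    return None                              # no opinion; defer to another layer

def wander(sensors):
    """Default layer: just keep moving."""
    return "forward"

LAYERS = [avoid, wander]                     # earlier layers take priority over later ones

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"obstacle_ahead": True}))     # -> turn_left
print(control({"obstacle_ahead": False}))    # -> forward
```
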

    The seminal paper where Brooks discusses this philosophy is "Intelligence Without Reason" and is available at his website which is linked above.

    Any book on AI that does not discuss this other branch of AI philosophy is in my view hopelessly incomplete.

    • radical? please. seems pretty self-evident to me or anyone who studies Eastern philosophies. The mind becomes something shaped by the environment it perceives. it is not autonomous and is part of a greater whole.

      congratulations to Dr. Brooks for taking the time out of his life to get the Ph.D and build up his credentials so that people would listen to him when he stated the obvious.
      • Well, coming from the point of view of traditional AI research it is truly radical. Call that the straitjacketed minds of crusty philosophers stuck in the ivory towers of academe, caught up in the biases of Western thought that seeks to divide, compartmentalize, and analyze the system of the world to understand it!

        Brooks himself got these ideas from biology, a study so very far removed from the fields of computer science and electrical engineering that form the core of traditional AI research. It was only by stepping outside the bounds of traditionalist Western ideas about the compartmentalization of learning and knowledge that he brought these ideas forth.

        I wonder what other ideas might come from a more integrated view of science, as opposed to the divisive approach Western science has taken.

  • by DaoudaW ( 533025 ) on Tuesday February 05, 2002 @12:36PM (#2955885)
    To truly demonstrate artificial intelligence, a machine must be general purpose. A key feature of human intelligence is creatively adapting to context. For example, I'd like to see a machine do what 4-year-old Jose Capablanca did in 1892. Though he'd not yet been taught to play chess, while watching his uncle and father play he warned his Dad that the move he was about to make was a mistake. Both adults scoffed that he even knew how to play, so 4-year-old Jose challenged his father and beat him. The rest, of course, is history. Show me a machine with no specific chess programming that can do that, and I'll accept that it is intelligent.
    • by LV-427 ( 315309 )
      There's a good book called Blondie24 [softpro.com], which tells the story of two guys who developed a program to play checkers without giving it any knowledge of the game beyond the legal moves. They used the idea of natural selection applied to neural nets, keeping the best nets for the next generation. Eventually this process created a neural network which could beat most everyone at checkers without ever being taught strategy.
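
A rough sketch of that evolutionary recipe: a population of candidate evaluators (reduced here to plain weight vectors) is scored by playing each other, and the best are kept and mutated to form the next generation. play_match() is a placeholder for a real checkers engine that pits two evaluators against one another, and the population sizes and mutation rate are arbitrary:

```python
import random

def play_match(weights_a, weights_b):
    # Placeholder: a real implementation plays a game of checkers using each
    # weight vector as an evaluation function; returns 1, 0, or -1.
    return random.choice([1, 0, -1])

def evolve(pop_size=20, weights_len=32, generations=50):
    population = [[random.gauss(0, 1) for _ in range(weights_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [0] * pop_size
        for i in range(pop_size):                       # round-robin tournament
            for j in range(i + 1, pop_size):
                result = play_match(population[i], population[j])
                scores[i] += result
                scores[j] -= result
        ranked = [w for _, w in sorted(zip(scores, population), key=lambda p: -p[0])]
        survivors = ranked[: pop_size // 2]             # keep the best nets
        children = [[w + random.gauss(0, 0.1) for w in parent] for parent in survivors]
        population = survivors + children               # mutated copies fill the next generation
    return population[0]                                # best survivor of the final ranking
```
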
  • by peter303 ( 12292 ) on Tuesday February 05, 2002 @12:40PM (#2955905)
    A thread in the Usenet group comp.ai.philosophy today notes that the number of logic operations per second in the fastest supercomputers is within a couple of orders of magnitude of the human brain's. The brain has 100 million neurons, each connected to a thousand others, and runs at around 20 Hz. So this is about two quadrillion ops per second.
    The fastest supercomputers operate on 64-bit words at several trillion operations a second, or about a hundred trillion bit-ops per second; slower by a factor of a hundred or so.
    Instead of quibbling over the exact numbers, note that Moore's Law implies a factor of ten every five years. So a supercomputer will be as complex as a brain somewhere in the 2010 to 2020 time frame. Don't even think about 2050 or 2100!

    However, computers aren't programmed as well as a brain in many areas, so the software people have a long way to go to catch up.
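    For what it's worth, here is that back-of-the-envelope arithmetic spelled out; the Moore's-law pace and the neuron figures are the assumptions doing the work.

```python
# Back-of-the-envelope version of the parent's numbers. Note the quoted
# "two quadrillion" only falls out with ~100 billion neurons (the widely
# cited figure), not 100 million.
import math

neurons = 100e9               # ~10^11 neurons (see note above)
synapses_per_neuron = 1e3
firing_rate_hz = 20
brain_ops = neurons * synapses_per_neuron * firing_rate_hz
print(f"brain: ~{brain_ops:.0e} ops/s")     # ~2e15, i.e. two quadrillion

gap = 100                     # "a hundred times slower or so", per the parent
years = 5 * math.log10(gap)   # Moore's Law taken as "x10 every five years"
print(f"parity in ~{years:.0f} years, i.e. around {2002 + years:.0f}")
# -> around 2012, inside the parent's 2010-2020 window
```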
    • So the brain runs at 20 Hz, huh?

      Talk about an overclocking challenge! Put your ice hat on and think as hard as you can.

      This is a great factoid to throw at those who still insist on fetishizing clock speed - AMD take heart!
    • I think you oversimplify the human brain. And that's just going from what we *KNOW* about it; there is much more that we DON'T know about the human brain... a computer (even an "A.I." computer) is just billions or trillions of switches, on and off, 1 and 0.

      The human mind is much more complicated. To begin with, the brain is not digital, it's analog. Also, we only know about certain aspects of the human brain. Things like ESP, precognition, and yes, even magick: we don't have the foggiest clue how that stuff works, even though there is documented evidence that it *does* work. Since the scientific community can't figure it out, they brush it aside and say it can't be happening. But it DOES happen, and the human mind DOES work like that.

      So AI will never approach the capabilities of the human mind, IMHO. You can simulate a person all you want, but it will be only that, a simulation, and never a real person.
      • Things like ESP, precognition, and yes, even magick we don't have the foggiest clue how that stuff works, even though there is documented evidence that it *does* work.

        Hey, if you have this documented evidence, why not make yourself rich and take the Amazing Randi's Million Dollar Challenge?

        http://www.randi.org/research/index.html

        How many psychics with precognition predicted September 11th, arguably the defining moment of 2001 (at least for Americans)?
    • "The brain has 100 million neurons, 1 billion of which are in the cerebellum."

      Seriously, the units of computation and memory in the brain are likely not individual neurons but synapses, dendritic trees, and even individual channels. That gives you many more orders of magnitude of computational resources for silicon to catch up with. Furthermore, there is no guarantee that Moore's law will continue to hold. In fact, it seems likely that Moore's law will hit the wall just when it comes to trying to get into the realms where biological systems are computing right now.

    • The human brain has 10 to the power of 14 synapses. Each synapse would take around one byte of computer memory. Ignoring motor and low-level sensory functions (but including all brain logic and interpretation functions - yes, scientists have discovered what different areas of the brain do and it is possible to isolate them), an entire human brain's contents could be stored in a hundred terabytes or so of computer memory. This storage space exists right now, albeit expensively. It doesn't really matter what level of hardware is used to run a brain; a human brain running 100X slower (as estimated in the post above) would still be able to run - the only limiting factor at the moment is the software used to emulate the brain functions. Like any system, this can be emulated, but it will take a massive programming effort and so far hasn't proven very successful. Of course, this won't really matter in the long run - A.I. doesn't necessarily mean that the computer A.I. system must be human-like in intelligence - it could have a whole new type of intelligence which would surpass human intelligence as the rate of hardware improvement increases.
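      Spelling out that storage estimate (one byte per synapse is itself a big assumption; only the order of magnitude is the point):

```python
# The parent's storage estimate, spelled out; figures as assumed above.
synapses = 1e14              # ~10^14 synapses
bytes_per_synapse = 1        # assumed: one byte per synapse
total_bytes = synapses * bytes_per_synapse
print(f"~{total_bytes / 1e12:.0f} TB")   # ~100 terabytes
```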
    • This is an excellent point.

      The same idea occurred to me recently when reading through Kurzweil's "Spiritual Machines" book. There are a few orders of magnitude to toss around in these calculations: Kurzweil determined that a desktop computer will be comparable to a human by around 2020. It was evident to me that Kurzweil's timescales (and hence the premises he used to infer them) are quite far off, because current massive parallelization of commodity CPUs puts one a factor of about 4,000 up from a desktop machine, or about 13 years of Moore's Law evolution. In addition, as the number of CPUs per supercomputer is increased, we have effectively grown faster than Moore's Law, due to both the chip and parallelization advances.

      Since the supercomputers of today effectively place us where a desktop will be in 2015, it should be apparent (by Kurzweil's logic) that an "intelligent" machine should be nearly imminent.

      It is quite evident that something is awry in the logic leading to Kurzweil's conclusion. The simplest explanation is one which is quite familiar to scientists and programmers using state-of-the-art software techniques: having the hardware resources is only a bare minimum requirement for solving a problem. For instance, one can have a supercomputer capable of simulating the Earth's climate for centuries, but that won't get you any closer to the results if you don't also possess a great deal of knowledge about atmospheric physics and numerical methods. The same is true for studies of "Thinking Machines": one can have a machine possibly capable of thinking, but without the knowledge of how to go about doing it, you are no closer to the solution than where you began.

      Bob
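      The "about 13 years" figure depends entirely on what doubling period you assume for Moore's Law; here is that arithmetic made explicit (the doubling times tried below are illustrative assumptions, not Bob's):

```python
# How many years of Moore's-law growth correspond to a given speedup factor?
# The doubling period is the assumption doing all the work.
import math

def years_for(speedup, doubling_years):
    return math.log2(speedup) * doubling_years

for doubling in (1.0, 1.5, 2.0):
    print(f"x4000 at a {doubling}-year doubling time: "
          f"~{years_for(4000, doubling):.0f} years")
# Prints ~12, ~18 and ~24 years; "about 13 years" implies a doubling time
# of a little over a year.
```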
  • AI will probably never be "achieved," because it will keep advancing. It will advance along, further and further, just as we humans do.

    What's more important, a computer that can think or a computer that can experience emotions? Can you imagine coming home to your Valet-bot 3500 when it's having its monthly "period"?

    Hey, what's for dinner? Get it yourself, you arrogant ass, I wasn't put here to serve you, now rub my feet!

    (A side note: Ever notice we always assume the personal cyborgs/robots/what-have-you will be female? That is an issue in itself, I think.)
  • by komet ( 36303 ) on Tuesday February 05, 2002 @01:00PM (#2956062) Homepage
    When you read all these threads, it's clear that if a true A.I. ever came into existence, the most intelligent thing for it to do would be to pretend that it wasn't intelligent at all.

    So how would we notice before it sneaks up on us from behind?
  • First contact (Score:2, Interesting)

    by Aexia ( 517457 )
    Computers aren't people. By default, they're simply not going to see the world the same way we do. If we ever do succeed in creating a truly sentient computer program, it'll be like first contact with an alien race; computers will have an entirely different take on things.

    They'll be effectively immortal. They won't experience emotions and sensations the same way. Many of our feelings are caused by hormones and chemicals being released to different parts of our brains. A computer won't have that. Ditto for drugs and food. We could simulate it, of course, but computers can undo or back up their programming, or just turn it off. Imagine an LSD subroutine. A computer could always be high on LSD without the ill effects humans encounter. That could be scary.

    "Navi, check my e-mail."
    "Why are you speaking Korean today, Lain?"
    "I'm not."
    "You look very beautiful today. Is that a new dress?"
    "What? I disconnected my webc--"
    "Erasing personal files as requested."

    A computer would be able to learn phenomenally fast too. Screw programming a universal translator. Just get a real AI set up and have it learn all the world's languages in a week or two. How would you know you could trust a computer though? Could computers have hidden agendas? Would an AI eventually "resent" being forced to do nothing but translate?

    Then we get into the question of civil rights. Stephen Hawking's body is pretty much gone and his mind is still there. His "human" rights are recognized. A retarded person could have a body but really not much of a mind. His rights are recognized. So why wouldn't a computer's rights be recognized? Just because we created it? Would the same reasoning be extended to someone who was cloned or genetically engineered?

    I wonder if we're ready as a race to encounter a truly sentient computer and everything that would mean for us.
  • Are they? (Score:3, Insightful)

    by Syberghost ( 10557 ) <syberghost@syber ... S.com minus poet> on Tuesday February 05, 2002 @01:03PM (#2956090)
    Are intelligent machines transforming life as we know it?

    Wouldn't we need to have some, first, before we could say they "are" doing anything?
  • Natural language remains beyond the reach of any conventional AI system. This does not mean it can't be solved. Neither does it mean that clever interfaces haven't been designed that can fool humans on very specific fronts. General purpose natural language processing is still at least one major revolution (read: a Kuhnian revolution) away.
  • Kurzweil's premise that 'exponential increases in processing power' will lead to AI is unfounded, because a quantitative change does not imply a qualitative change. storm's nest [earthlink.net]
  • Researchers publish in peer reviewed journals, and you can bet that their peers put a damper on any kind of exaggerated claims.

    The people who publish exaggerated claims about AI are journalists eager for a sensational article. Other journalists eager for a story then tell us how we will all get replaced by robots. And then other journalists make a big controversy out of it to publish even more nonsense. And when, after just a decade or two, AI (or some other overhyped technology) doesn't deliver, journalists write scathing criticisms. To support these claims, journalists scrape together whatever nutty, off-beat comments they can find.

    Journalists should stick to reporting science from published, peer-reviewed articles. The real problem is sensationalism and unfounded speculation, and the people responsible for that are journalists. That means you, too, Katz.

  • by jd ( 1658 )
    There are many different "flavours" of AI, and it's not clear from this article which (if any) the book refers to.


    Main Categories

    • Expert Systems - These are NOT true "AI", but often get thrown into the same category. An "Expert System" is any system that can make deductions, learn from incorrect deductions, and retain that learning from session to session. Programs such as "Animals", and (useful) diagnostic tools, fall into this category, as do some "intelligent" chess programs. "Deep Blue" did not, as it required reprogramming to learn. The ability to learn by example is key to all Expert Systems (a bare-bones sketch of the "Animals" learn-by-example loop appears after this list).
    • Weak AI - This is the most common category to encounter. To be considered "Weak AI", the system need not exhibit any "intelligence" at all. It merely needs to demonstrate one characteristic that formerly would have required a person applying intelligence. Just about any problem solvable in this category falls into the "Chinese Room" proof of non-intelligence. As such, it is usually argued that these are interesting applications, but they're again NOT AI. ALL self-contained robotics, capable of "learning" with retention, fall into either the category of Weak AI or (more often) Expert Systems.
    • Strong AI - To classify as "strong AI", the AI system must exhibit similar properties to both Expert Systems -and- Weak AI systems. In other words, they must be capable of learning, and be capable of knowledge application. However, this is only where the requirements begin. Strong AI must be capable of:
      • Inference (ie: it must be able to learn, without specifically being told what it is to learn)
      • Independent investigation (it must be capable of determining what to learn, rather than being instructed)
      • Deduction (ie: where two pieces of existing information relate, it must be capable of reaching conclusions from that relationship)
      • Conflict Resolution through Experimentation (ie: where two pieces of existing information conflict, it must be capable of independently resolving that conflict by creating a hypothesis and testing that hypothesis)
      • Self (ie: there must be evidence of self-awareness, self-examination, self-referencing, self-will, etc.) This is more a consequence of the above, than anything. If you have all the above conditions for Strong AI, without needing an operator to "guide" it, or specific programming for each possible scenario, then you must have something taking the place of "Self", as a high-level, soft-coded "Supervisor" to drive the system.
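    As promised under Expert Systems above, here is a bare-bones sketch of the "Animals"-style learn-by-example loop. The prompts and the starting animal are invented for illustration, and a real expert system is far richer, but this is learning with retention in a nutshell: a binary tree of yes/no questions that grows every time the program guesses wrong.

```python
# Bare-bones "Animals": a guessing tree that learns a new question and
# animal every time it guesses wrong, and retains them for later rounds.

tree = "cat"   # a node is either a guess (str) or (question, yes_node, no_node)

def ask(prompt):
    return input(prompt + " (y/n) ").strip().lower().startswith("y")

def play(node):
    if isinstance(node, str):                     # leaf: make a guess
        if ask(f"Is it a {node}?"):
            print("Got it!")
            return node
        animal = input("I give up. What was it? ")
        question = input(f"Give me a yes/no question that is true for a "
                         f"{animal} but not a {node}: ")
        return (question, animal, node)           # learn: grow the tree
    question, yes_node, no_node = node
    if ask(question):
        return (question, play(yes_node), no_node)
    return (question, yes_node, play(no_node))

while True:
    tree = play(tree)                             # the tree persists across rounds
    if not ask("Play again?"):
        break
```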



    Many of the "arguments" and "debates" in the field of AI are non-arguments, because they deal with entirely different areas of AI. There are some superficial similarities, and different types may depend on experience in other types, but they should never be confused.


    Testing AI systems. This is often done by means of the "Turing Test" - if it's indistinguishable from something you know is intelligent, by any test of ability (rather than physiology), then it can be considered intelligent, by any meaningful definition.


    "Expert Systems" are often the main contestants in "Turing Test" challanges. However, the test applied is not the strong version, above, but a weak version, in which the machine must merely be difficult (not impossible) to distinguish from a person, in one specific area of conversation. The results are impressive, but because Expert System engines are not intelligent, they will only ever be impressive in the weak test. No Expert System, however good, will ever meet Turing's strong criteria.


    Weak AI systems are too specialised to even apply for a Turing Test. Vision, sound recognition, etc., are all worthy goals, but the logic behind such engines consists largely of specialised pattern-matching and interpolation systems. Such a system is good for what it's designed to be good for -- engineering-type problems, where the output must be capable of being more exact than the input.


    Strong AI systems, at present, are either extremely primitive, or simply don't exist. Certainly, the level of effort into Strong AI has dropped over the past few decades, and nothing that does exist is even remotely close to the point of being able to take on even the Weak Turing Test, never mind the Strong one. But, should this field ever make headway, this is where true Artificial Intelligence will come from. HAL, "Data"/"Lore", and numerous other sci-fi creations assume that Strong AI will, someday, make progress. None of these types of AI can be produced through "Expert Systems" or "Weak AI", although (again) the hardware usually requires one or the other. (eg: HAL's optics would likely be Weak AI-driven, because that is what Weak AI does best.)


    I've postulated that Strong AI will most likely start to appear through Virtual World-type environments, because these can be controlled and directed, the responses can be examined, and the hardware limitations of real-world systems are not a factor. (A VR AI can have whatever "vision" the VR can simulate, whether or not physical optics are capable.)


    Closed environments allow experimenters to add/remove stimuli at will, and see what happens. You can't really remove gravity, for example, in the real world. This makes a virtual world much more interesting, when it comes to what experiments you can do.


    The problem with VR AI is that it's never going to get funding. It's too speculative, has no direct or immediate benefits, and would be a VERY long-term project, if it's to produce anything at all. (By long-term, I don't expect a self-evolving system to reach any kind of awareness or intelligence any faster on a computer than in real-life. Sure, you can start with more complex building-blocks, and you're not required to simulate every molecule in every organism - event-driven mechanisms would be perfectly good - but even if you could start with some very complex computer life, you're talking about a project that would take centuries before you could even know if it was going to produce any viable intelligence, and probably as long again before such intelligence reached the point of being able to take, and pass, the Strong Turing Test.)


  • of course, if you're going to talk about AI,
    you might want to ask a cognitive scientist:

    Searle > Is the Brain a Computer? [soton.ac.uk] and Searle > Minds, Brains, and the Chinese Room [soton.ac.uk]

    regards,
    storm's nest [earthlink.net]

    • here's the summary from the link.

      SEARLE - IS THE BRAIN A DIGITAL COMPUTER? [soton.ac.uk]
      Summary of the Argument.

      This brief argument has a simple logical structure and I will lay it out:

      1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.

      2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.

      3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.

      4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"

      5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.

      6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.

      7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.

      8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
  • One thing I always wonder when hearing how AI technology will replace/mimic/supersede human intelligence is which type of intelligence the machine is supposed to exhibit; it's rarely identified. Following Howard Gardner, many social scientists recognize seven [Gardner later added an eighth] types of human intelligence:

    Verbal-Linguistic Intelligence
    Logical-Mathematical Intelligence
    Kinesthetic Intelligence
    Visual-Spatial Intelligence
    Musical Intelligence
    Interpersonal Intelligence
    Intrapersonal Intelligence
    Naturalist Intelligence

    As humans we all have different levels/mixes of these intelligence types. Some intelligence types require more sensate interaction with an unpredictable world [such as intrapersonal or naturalist intelligence], others are more strictly rules-based [logic-math or visual-spatial], while some [like musical intelligence] require a combination of both.

    One can see how some of these might be more or less amenable to AI technology, but that's why "intelligent" machines, IMO, will never completely be able to be human.
  • An interesting thing to note about many of the things that are described as A.I., especially in the popular press -- vision, walking, playing chess -- is that none of them require intelligence as I think of it.

    Much of the work done on mimicking vision has created systems with capabilities that in humans are achieved by hard-wired parts of the brain. Movement, shape and even facial recognition are not really intelligence.

    I think of intelligence as the ability to reason about problems, not simply to solve them. Many of the supposed A.I. systems are just brute-force search systems.

    Deep Blue is like any other chess system, just bigger and faster. Many problem-solving systems are simply fast constraint solvers (normally optimised for the problem). Neural nets are simply an arbitrary system capable of partitioning a solution space in a non-linear fashion, and the training algorithm is a search for the network values that partition the test data best. If you think that NNs are anything like real brain cells, find a biology student who has done some neuro-physiology and you will find there is a lot more to them than just a sigmoid function and some weightings.

    In fact the neural network training algorithm bears more than a passing resemblance to simulated annealing in its approach.

    If you want to learn about machine learning algorithms check out Machine Learning by Thomas Mitchell. Small but well formed.

    A quick statistic. The average grandmaster thinks something like 7 moves ahead. Deep Blue plots about 15 moves ahead. I may have the numbers wrong, but the ratio is about right. However, it still only just beat Kasparov. That says something about the way the human brain thinks about complex problems. This is why A.I. researchers have started to turn away from chess as a problem and towards Go. The branching factor in Go is so much larger than in chess that even the best systems can be beaten by a player with only a year or two of experience. Playing Go will require something more than just brute force.
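    A crude way to see the Go point: the positions a brute-force search has to consider grow as (branching factor)^depth. The branching factors below are the usual ballpark figures (roughly 35 for chess, 250 for Go), not numbers from this thread.

```python
# Rough game-tree sizes at a given search depth; ballpark branching factors.
def nodes(branching_factor, depth):
    return branching_factor ** depth

for depth in (7, 15):
    print(f"depth {depth}: chess ~{nodes(35, depth):.1e} nodes, "
          f"Go ~{nodes(250, depth):.1e} nodes")
# Even at the grandmaster's 7 plies, Go's tree is roughly a million times
# bigger; at Deep Blue's 15 plies the gap is in the trillions.
```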

    Most so-called A.I. is just a case of doing things quickly. As the PHBs (would probably) say, think smarter, not faster. The brain is good at what it does not just because it is massively complex and parallel, but also because of the way it simplifies many problems, using clever tricks to reduce the workload.

    I just think we have a lot further to go than many researchers and reporters would like to think. Most of what we see these days is just 'clever' or 'smart' (like a spelling/grammar checker), not intelligent like someone designing a car engine using entirely novel techniques (not just optimising or using predefined parts).

    Having said that, there is some research that shows promise, such as some of the work going on at MIT with Cog and co. Now that looks interesting. They aren't trying to make them smart/do clever tricks/play chess etc., but to make them intelligent in the more human sense.

    Anyway, I'll stop my ramblings now.

    Paul
  • Emergent emotions? (Score:3, Insightful)

    by NanoGator ( 522640 ) on Tuesday February 05, 2002 @02:15PM (#2956681) Homepage Journal
    I had a thought a while back that the more complex my computer got, the moodier it got. It seemed that some computers I had were very enthusiastic, and some just hated their jobs and performed sluggishly.

    Some could attribute this to hardware configuration problems, and that would likely be true. But it was interesting to me that Windows itself changes as it grows. Every change in my computer makes it a little different, and I'm starting to notice. I can even tell the difference between two installs of Windows on the same machine, even though they look virtually the same.

    What I think is happening is that each component changes the complexity of the overall system. If that component has an issue (i.e. bad driver or maybe misconfigured), then it adds a little spark of personality to the computer. When enough of these little quirks add up, my computer feels different than other people's computers.

    This yields an interesting question. If computers get more complex, will a rudimentary set of 'emotions' evolve? They may not be emotions in the sense that they cry if you switch to a Mac, but maybe emotions in the sense that the computers have moods? What if your computer's performance was tied to bandwidth on the internet, and a congested network bogs the computer down? What if you're running a laptop off a battery, and the computer gets 'tired' as it wears down? What if you're running a screensaver that makes it 'daydream'?

    Again, these aren't the same type of emotions or moods that people feel, but it is interesting that the more complex a computer gets, the more we can personify it.
  • by Catbeller ( 118204 ) on Tuesday February 05, 2002 @02:49PM (#2956914) Homepage
    Let's define intelligence.

    Ability to perceive oneself as part of the universe? Animals have it.

    Self-awareness? Dogs seem to have it. Chimpanzees, elephants, cetaceans certainly seem to know that they are individuals. Dolphins even recognize their own reflections in mirrors.

    Tool use? Chimps use sticks to dig with. They can stack boxes to reach high places, which is borderline engineering for most humans.

    Language? Chimps have one. So do gorillas. Dolphins and other cetaceans have great capacity for communication underwater.

    Now, machine intelligence. Turing test? Simple programs passed limited tests years ago. The more complex ones to come will be far more capable of fooling people into believing they are speaking to a human.

    Play chess? Limited, but the best can beat our best.

    In the future, the AIs will be able to speak, emote, manipulate items and use tools, even design their own descendants. Given tools, the AIs could even build their successors.

    But, will they ever be regarded as intelligent by humans?

    Nope.

    For centuries, most Europeans and Americans considered blacks and American Indians as sort of half-people, using "great logic and rigor" that looks totally idiotic from our time.

    Many tests for animal intelligence and self-awareness have shown that the subjects can indeed show the traits necessary to be considered sapient. But, after each hurdle, the bar gets raised another notch philosophically.

    If I were a suspicious type, and I am, I would say that humans simply don't want to recognize intelligence in other species, much less animals, because it threatens us enormously. Our pride in ourselves, our domination of the planet, and our cruelty towards other species are all shaken if the animal looking back at us in the treetops is actually a thinking being, tho a bit furry.

    Religion has more than a little to do with it as well.

    Down to my definition of intelligent life:

    If it fights back, and wins, it is intelligent. All other players are dead meat.
