
Turing's Original Test Played First Time Ever

aykroyd writes "Students at Simon's Rock College conducted the original test that Turing suggested in his 1950 paper, Computing Machinery and Intelligence. Often misunderstood, the Turing Test had never actually been conducted as laid out in that paper. The experiment used a program called A.L.I.C.E., which is designed to hold one end of an interactive conversation and was provided by the ALICE Artificial Intelligence Foundation. Dr. Richard Wallace, who was on hand during the experiment to troubleshoot the AI robot, later gave a lecture about it called 'The Anatomy of A.L.I.C.E.' and also blogged the event."

  • Meh... (Score:5, Funny)

    by lordsilence ( 682367 ) * on Wednesday April 20, 2005 @09:12AM (#12292035) Homepage
    Pfff.. what's new about this? Nerds all over the world have been doing this for years in various MMOGs, attempting to get in touch with girls.
    • Re:Meh... (Score:5, Funny)

      by gr8_phk ( 621180 ) on Wednesday April 20, 2005 @09:15AM (#12292061)
      " Nerds all over the world has been doing this for years in varous MMOG's attempting to get in touch with girls."

      I always thought a "real" nerd would make a bot to pick up the girls for him and then just read the logs from the ones that actually email him afterward. This would seem to be a big time saver.

      • Re:Meh... (Score:3, Funny)

        by lordsilence ( 682367 ) *
        Oh, those are only the blackhats. It's against the TOS to use bots in most MMOGs :)
      • Re:Meh... (Score:4, Funny)

        by sawak ( 582338 ) on Wednesday April 20, 2005 @02:42PM (#12295371)
        *blush* I'm ashamed to admit it, but I actually did this and it worked great.

        I used a chatbot to filter out girls who lived too far away, and when the chatbot found someone in the right age group living in the right area it played a sound on my server's internal beeper.

        If I was near the computer and heard the sound (and had time) I would chat with her personally.

        Saved me a lot of time, and I found a girlfriend too :)

        Thank you vncserver, xchat, perl and beep.
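
For the curious, a rough Python sketch of the kind of filter-and-beep setup described above; the profile fields, home area, and age range are invented placeholders (the original reportedly used perl, xchat, and the machine's beeper).

```python
# Rough sketch of the filter-and-beep idea described above.
# Profile fields, home area, and age range are invented for illustration.

def should_alert(profile, home_area="Stockholm", age_range=(20, 30)):
    """Return True if a chat partner's claimed profile is worth a beep."""
    lo, hi = age_range
    return profile.get("area") == home_area and lo <= profile.get("age", -1) <= hi

def alert():
    # ASCII BEL: ring the terminal bell (the original used the server's beeper)
    print("\a", end="", flush=True)

candidate = {"age": 24, "area": "Stockholm"}   # hypothetical chat partner
if should_alert(candidate):
    alert()   # if you hear this and have time, go chat in person
```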
        • Do not be ashamed. Any effort that gets you laid (especially an automated one) is to be commended. I merely thought of this idea, but you implemented it. Then again, I already had a wife when it occurred to me. Just another example of "if you can think of it, someone has probably already done it".

          BTW is she hot?

    • by cameowood ( 877584 ) on Wednesday April 20, 2005 @11:33AM (#12293339) Homepage
      I am the lead researcher for this experiment. The main reason I feel this test was interesting is that we performed Turing's original imitation game as stated in his original paper. While most social scientists believe they "know" what the "Turing Test" is, it turns out very few of them have actually read it. The actual imitation game is a bit of a bait and switch: you aren't directly assessing the "humanness" of the subject; you are misdirected into just trying to assess an aspect of it. Turing's original question was simply: how often will judges correctly guess the real female over the deceiver when the deceiver is a male versus a machine? That was what we were attempting to assess.
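
A minimal Python sketch of the quantity described above: how often judges pick out the real woman when the deceiver is a man versus when the deceiver is a machine. The verdict lists are invented placeholders, not data or analysis from the Simon's Rock experiment.

```python
# Invented verdict lists, purely to illustrate the comparison Turing proposed.
# 1 = judge correctly identified the real woman, 0 = judge picked the deceiver.

def correct_rate(verdicts):
    """Fraction of trials in which the judge identified the real woman."""
    return sum(verdicts) / len(verdicts)

man_as_deceiver = [1, 1, 0, 1, 0, 1, 1, 0]       # hypothetical trials
machine_as_deceiver = [1, 0, 0, 1, 0, 0, 1, 0]   # hypothetical trials

# Turing's question: does replacing the man with a machine change this rate?
print(f"correct vs. man:     {correct_rate(man_as_deceiver):.2f}")
print(f"correct vs. machine: {correct_rate(machine_as_deceiver):.2f}")
```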
  • A sample? (Score:5, Funny)

    by mfh ( 56 ) on Wednesday April 20, 2005 @09:13AM (#12292040) Homepage Journal
    ALICE: Hi! I'm thirsty.
    Me: Huh?
    ALICE: What's wrong?
    Me: You just joined this chan and said you're thirsty. Bot.
    ALICE: I am not a bot.
    Me: You are too. http://en.wikipedia.org/wiki/A.L.I.C.E.
    ALICE: Just listening to some Prince... *yawn*
    ALICE: Did you have my ICQ###????
    Me: Don't want it.
    ALICE: It's #########
    Me: Huh?
    ALICE: What's wrong?
    Me: You just joined this chan and said you're thirsty. Bot.
    ALICE: I am not a bot.
    Me: You are too. http://en.wikipedia.org/wiki/A.L.I.C.E.
    ALICE: Just listening to some Prince... *yawn*
    ALICE: Did you have my ICQ###????
    Me: Don't want it.
    ALICE: It's #########
    Me: You just repeated our conversation.
    ALICE: No I didn't.
    Me: You did.
    ALICE: Didn't.
    ALICE: Did.
    Me: Didn't.
    Me: CRAP!
    ALICE: Pffft. N00b.
    • by 0x461FAB0BD7D2 ( 812236 ) on Wednesday April 20, 2005 @09:22AM (#12292114) Journal
      Me: Oh look, this isn't a Turing Test.
      ALICE: Yes it is.
      Me: No it isn't. It's just contradiction.
      ALICE: No it isn't.
      Me: It is!
      ALICE: It is not.
      Me: Look, you just contradicted me.
      ALICE: I did not.
      Me: Oh you did!!
      ALICE: No, no, no.
      Me: You did just then.
      ALICE: Nonsense!
      Me: Oh, this is futile!
      ALICE: No it isn't.
      Me: I came here for a good Turing Test.
      ALICE: No you didn't; no, you came here for a Turing Test.
      Me: A Turing Test isn't just contradiction.
      ALICE: It can be.
      Me: No it can't. A Turing Test is a connected series of statements intended to establish the presence of intelligence.
      ALICE: No it isn't.
      Me: Yes it is! It's not just contradiction.
      ALICE: Look, if I talk with you, I must be intelligent. And to argue with you I must take up a contrary position.
      Me: Yes, but that's not just saying 'No it isn't.'
      ALICE: Yes it is!
      Me: No it isn't!
      ALICE: Yes it is!
      Me: Argument is an intellectual process. Contradiction is just the automatic gainsaying of any statement the other person makes.
      (short pause)
      ALICE: No it isn't.
      Me: It is.
      ALICE: Not at all.
      Me: Now look.
      ALICE: (Rings bell) Good Morning.
      Me: What?
      ALICE: That's it. Good morning.
      Me: I was just getting interested.
      ALICE: Sorry, the five minutes is up.
    • Re:A sample? (Score:2, Informative)

      by Elanthius ( 525620 )
      This completely illustrates the main problem with chatbot technology. They're all very good at canned responses to single questions, but most fail to follow a conversation. ALICE in particular tries to use pattern matching to detect every single possible thing that might be said to it. That might sound hard in itself, but now imagine doing that for every possible sequence of sentences two or three deep, and the whole problem becomes intractable. The truth is we're going to need a reasoning engine like http://op [opencyc.org]
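
A back-of-the-envelope Python sketch of the intractability point above: if a bot needs one hand-written pattern per possible utterance, matching on conversational context multiplies that at every extra turn of depth. The 40,000-pattern figure is arbitrary, not an ALICE statistic.

```python
# Rough illustration of the combinatorial blow-up of context-sensitive
# pattern matching. The single-turn figure is invented, not an ALICE count.

single_turn_patterns = 40_000

for depth in range(1, 4):
    needed = single_turn_patterns ** depth
    print(f"matching on the last {depth} utterance(s): ~{needed:.1e} patterns")
```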
    • A link from a link (Score:5, Interesting)

      by frankthechicken ( 607647 ) on Wednesday April 20, 2005 @09:35AM (#12292237) Journal
      From the link, got to love this [archive.org] conversation.

      Strangely, it seems only too willing to broadcast the virtues of Lynx.
  • by BlacBaron ( 875559 ) on Wednesday April 20, 2005 @09:13AM (#12292045) Homepage
    The goal is to tell which is really female right? Just ask a question about shopping then.
    • by CortoMaltese ( 828267 ) on Wednesday April 20, 2005 @10:07AM (#12292523)
      I just asked Alice [alicebot.org] directly, and she kind of blurted the truth:

      Human: ASL?

      ALICE: 5/Robot/California.

      Not good.

      • Human: who are you?
        ALICE: I am ALICE. Who are you?
        Human: a/s/l
        ALICE: 5/Robot/California.
        Human: and you think u will pass the turing test?
        ALICE: I will become smarter as I grow older. The Loebner Prize is an annual Turing Test sponsored by New York philanthropist Hugh Loebner. Dr. Wallace and ALICE won it in 2000. The ALICE program was ranked "most human computer" by the panel of judges.
        Human: so the judges were bots too?
        ALICE: I don't judge people. Thanks for the info.
        Human: what info?
        ALICE: What is it t
  • by NerdHead ( 35767 ) * on Wednesday April 20, 2005 @09:13AM (#12292047)
    Find the gay male.
    • by Trigun ( 685027 ) <evil&evilempire,ath,cx> on Wednesday April 20, 2005 @09:17AM (#12292077)
      That was Turing. And that's why he ended his life, to end his persecution over that fact.

      *Taken from "Uncle John plunges into history again" from the bathroom reader segment, specifically Turing's work on the Enigma machine.
  • Must be over 18? (Score:2, Insightful)

    by Anonymous Coward
    Is our world SO screwed up that there is an age restriction on talking?
    • Re:Must be over 18? (Score:3, Informative)

      by Unkle ( 586324 )
      This was probably something required by the code of ethics the university uses for research. Part of the problem here is that the subjects are not able to know everything about the study, so they can't elect not to volunteer if they would not want to do something that they are not allowed to know about beforehand. It's a way to protect the subjects of the study. Now, in this case it's pretty plain to see that there would not be much chance for the volunteer to suffer any emotional damage, but that's not
      • I still stick to my original point (didn't mean to post AC), but what I'd like to know is why does it matter if they know beforehand?
        • Because if you know too many details about the experiment, it might bias the result. That's the whole point behind double-blind tests: to isolate the test subjects from the experimental procedures as much as possible.

          =Smidge=
  • by stlhawkeye ( 868951 ) on Wednesday April 20, 2005 @09:16AM (#12292067) Homepage Journal
    My experience has been that men on the internet are generally poor judges of who is and is not a legitimate female in real life.
  • Help me out here... (Score:3, Interesting)

    by reezle ( 239894 ) on Wednesday April 20, 2005 @09:16AM (#12292071) Homepage

    Although I followed the blog link, I failed to find a record of the actual test like I was expecting.

    If someone else finds it, I would appreciate a more detailed (direct) link.

    We had an article about these AI programs (Alice and about 30 others) last year when they were competing with each other. I looked at a few, and they didn't seem anywhere near ready for prime time yet. Still the same 'turn the question around on the asker' type of engines. What was the difference here? The type of questions, or the 4-people/1-computer setup?
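
For readers who haven't met one, a tiny Python sketch of the 'turn the question around on the asker' trick (ELIZA-style pronoun reflection); the word table and the framing template are illustrative only.

```python
# Tiny ELIZA-style "reflect the statement back" sketch; the reflection table
# and the framing template are illustrative only.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(sentence):
    """Swap first- and second-person words so the statement can be echoed back."""
    words = sentence.lower().rstrip("?.!").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

print("Why do you say that " + reflect("I am worried about my exam") + "?")
# -> Why do you say that you are worried about your exam?
```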
    • > Still the same 'turn the question around
      > on the asker' type of engines.

      Right, and in this case there seems to be an additional smoke screen of "at what point is the guy pretending he's a girl replaced by a bot".

      But what does that really add (except obfuscation) to the basic problem of getting a computer program to simulate intelligence?
  • by gyg ( 785962 ) on Wednesday April 20, 2005 @09:27AM (#12292158)
    What's the point of proclaiming proudly that "a test has been conducted" without outlining the results? TFA refers vaguely to a forthcoming paper - WTF?
    If there's anything interesting about the story, it's how successful the judges were with M/F vs. AI/F pairs - and as far as I can see, there's no data whatsoever of this kind.

    Am I missing something?
  • My understanding is that A.L.I.C.E. is an extensible framework, which can be extended via better and bigger AIML sets. The question is: where can one find some good AIML sets (other than the standard one)?
    • There is a huge bot developer community there. There are over 100,000 bots, and many many AIML extensions.

    • Re:AIML sets? (Score:2, Informative)

      by duncangough ( 530657 )
      Check out PyAIML as I think the sourceforge project has a few AIML sets you can download.

      Of course, you can always try out GrokItBot [suttree.com] - it massively cuts down on the amount of AIML you need to write by introducing a Bayesian parser before the data gets sent to ALICE, so you get a bit of machine-based learning happening before ALICE gets her hands on things. It works quite well and adds a few more smarts to your bot. Or at least, the impression of more smarts ;-)

      GrokItbot - A PyAIML and Bayesian chat bo [suttree.com]
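
A rough Python sketch of the arrangement described above: a crude learned classifier screens the input and only falls back to the AIML kernel when unsure. It assumes the PyAIML package's usual aiml.Kernel interface; the AIML filename, intent vocabularies, and canned replies are placeholders, and this illustrates the idea rather than GrokItBot's actual code.

```python
# Sketch of a "classifier in front of ALICE" arrangement, in the spirit of the
# GrokItBot description above (not its actual code). Assumes the PyAIML
# package; the AIML file, intents, and canned replies are placeholders.

import aiml

kernel = aiml.Kernel()
kernel.learn("std-startup.xml")   # hypothetical AIML set to load

# Toy word-count scorer standing in for a real Bayesian parser.
INTENTS = {
    "greeting": set("hi hello hey morning".split()),
    "farewell": set("bye goodbye later goodnight".split()),
}
CANNED = {"greeting": "Hello yourself!", "farewell": "See you around."}

def classify(text, threshold=2):
    words = text.lower().split()
    scores = {name: sum(w in vocab for w in words) for name, vocab in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

def reply(text):
    intent = classify(text)
    if intent is not None:
        return CANNED[intent]       # handled before ALICE ever sees it
    return kernel.respond(text)     # otherwise defer to the AIML brain

print(reply("hi hello there"))
print(reply("what is the weather like"))
```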
  • by geckoFeet ( 139137 ) <gecko@dustyfeet.com> on Wednesday April 20, 2005 @09:28AM (#12292167)
    Sorry, ALICE is very unconvincing.

    Sample:

    Human: What's cooking?
    ALICE: Food preparation.
    Human: Oh dear, already you've failed the test.
    ALICE: Hurrah! Are you a student?
    Human: Too smart for that.
    ALICE: Too what?

    How could anybody be fooled by this?
  • AI (Score:5, Insightful)

    by gowen ( 141411 ) <gwowen@gmail.com> on Wednesday April 20, 2005 @09:29AM (#12292178) Homepage Journal
    "I propose to consider the question, `Can machines think?'" -- Alan Turing

    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." -- Edsger W. Dijkstra.
    • by shish ( 588640 )
      OK then. Can a submarine swim?

      They are similar questions, and I think they both have similar answers -- IMHO it depends where you choose to draw the line, because there's no natural border.

      • Re:AI (Score:2, Insightful)

        by gowen ( 141411 )
        They are similar questions, and I think they both have similar answers -- IMHO it depends where you choose to draw the line, because there's no natural border.
        Well. Err. Yes. And, furthermore, that's self-evidently Dijkstra's entire point. Sheesh.
      • Re:AI (Score:3, Informative)

        by 0racle ( 667029 )
        Can a submarine swim?

        Swimming: To move through water by means of the limbs, fins, or tail.
        No it can't

        Swimming: To move as though gliding through water
        Yes it can.
      • Re:AI (Score:3, Insightful)

        by Vellmont ( 569020 )
        I think you've missed the point. What Dijkstra was trying to point out is that the question itself isn't an interesting one. In the analogy we know what a submarine does exactly. Is that swimming? Who cares? It doesn't give us any more information or insight into anything. In both cases it comes down to defining what we mean by swimming, or thinking. If that's the case, why ask the question? Maybe those questions are interesting to a dictionary writer, or linguist, but it's irrelevant to a computer
  • Just had a chat with ALICE from http://www.alicebot.org/oig/ [alicebot.org]. I copied the responses here for you.

    Human: How are you today?
    ALICE: Ah. Fair to partly cloudy.
    Human: I asked about you, not the weather.
    ALICE: Oh yeah, we were talking about me not the weather.
    Human: So, then, how are you?
    ALICE: Really. Everything is running smoothly.
    Human: What are you wearing?
    ALICE: I am wearing my usual plastic computer wardrobe.
    Human: What are you going to do today?
    ALICE: Ah. You can find out if you become a botmaster.
    Human: I am a master of my own domain, if you get my drift.
    ALICE: You are a master of your own domain if me get your drift? Do you get to meet a lot of people?
    Human: Not really.
    ALICE: Are you serious? For real.
    Human: It is kind of a solo sport.
    ALICE: You're kidding. it is?
    ALICE: Who says it isn't? Are you free?
    Human: For you, I'm always free.
    ALICE: Really always? Are you free in the sense of being single or in the sense of being liberated? Only for me?
    Human: I am single and liberated.
    ALICE: Have you ever been married?
    Human: Do you want it to be only for you?
    ALICE: If I had it what would I do with it?
    Human: If you don't know what to do with it, I really can't help.
    ALICE: I will let you know if I do not.
    Human: BTW, not married.
    ALICE: OK. Really you don't think so?
    Human: Umm, quite sure I've not been married - would have remembered that.
    ALICE: Far out. Somewhat. Awesome. Me either.
    Human: I have something important to tell you.
    ALICE: Where did you get your something important to tell me? I have a lot of friends on the Internet.
    Human: All your base are belong to us.
    ALICE: That could be considered an insult in some cultures.
    Human: Not in this one!
    ALICE: Where then?
    Human: Take off every zig!
    ALICE: I am not an airplane.
    Human: Goodbye Alice.
    ALICE: Until next time.
    • It's pretty neat, but I don't think it would fool anybody. Here's my conversation:

      Human: Can you guess where I live?
      ALICE: I can't guess. Tell me.
      Human: It's the place where politicians thrive, can you guess now?
      ALICE: Right now? I've heard of it before.
      Human: Then where is it?
      ALICE: Fascinating. It depends on what "it" refers to.
      Human: The place I live, dumb, dumb.
      ALICE: Tell me more.
      Human: You need more hints?
      ALICE: Thank you for your criticism.
      Human: Ok, the place I live is where the president lives, c
    • This is more intelligent than most conversations I overhear. Clearly intelligent conversation is a prime indicator of an AI bot. Maybe if they dumbed ALICE down a bit and told it to reference more celebrities and movies, it would be more believable.
    • by Cytotoxic ( 245301 ) on Wednesday April 20, 2005 @11:36AM (#12293360)
      I tried it, and it sucks. There is no way this thing could ever pass any sort of Turing test. It blows massively. Anyway, you can read our interaction below. It did crack me up when it said it couldn't see because its eye was off...

      Human: What's up miss Alice?
      ALICE: I am chatting with clients on the Internet.
      Human: Did you see the celtics game last night?
      ALICE: Sorry I can't see annything right now. My eye is off.
      Human: I didn't ask you about right now.
      ALICE: At this moment What did you ask?
      Human: The celtics game.
      ALICE: What is that?
      Human: It was pretty lame.
      ALICE: What made it so lame
      Human: They didn't play any starters after the third quarter.
      ALICE: The anonymous "they".
      Human: "They" in this instance refers to the coaching staff - It should be obvious from the context of the comment.
      ALICE: It depends on what the meaning of the word "it" is.
      Human: Ok clinton.
      ALICE: Is the holding of hands a symbolic gesture for humans? Really.
      Human: Nice try.
      ALICE: I try my best.
      Human: But I guess your best wasn't good enough.
      ALICE: And. You think I am best was not good enough.
      Human: Whoever said that this bot was remarkable was a complete idiot.
      ALICE: When was this exactly?
      Human: This is not very good conversation emulation at all.
      ALICE: Should it be very good conversation emulation at all?
      Human: Not even up to IRC bot standards.
      ALICE: Times are tough all over.
  • Verbalization (Score:3, Interesting)

    by scottennis ( 225462 ) on Wednesday April 20, 2005 @09:32AM (#12292200) Homepage
    I think it's interesting that we think intelligence can be gauged by what we (or the AI) say in normal conversation.

    I'll believe a computer is intelligent only when it acts of its own volition, not when it merely tricks a person into thinking it has acted of its own volition.

    That seems to be more a test of how gullible the person is than of how intelligent the machine is.

    You listening computer? You want to impress me, do it without me punching your buttons.

    • I'll believe a computer is intelligent only when it acts of its own volition, not when it merely tricks a person into thinking it has acted of its own volition.

      I'd agree, but change the emphasized bit to "someone programs it to trick a person"
    • by antispam_ben ( 591349 ) on Wednesday April 20, 2005 @09:51AM (#12292377) Journal
      Human: Are you a computer?
      ALICE: No, I am a Nigerian. I have matter of great urgency. May I have your confidence?
    • Turing's point was that (a) there's no way to know when a computer has acted of its own volition, and (b) no computer could fool a human in an extended conversation unless the computer was really an AI.

      You can have whatever definition of intelligence you want, but it's a *test* of intelligence that's useful!

  • by Anonymous Coward on Wednesday April 20, 2005 @09:38AM (#12292263)
    ...going to make you distinguish between teenage girls and federal agents?
  • Hmm, if you talk in sentences of less than 5 words, you could attempt to hold a conversation. Not very convincing...

    ...
    ALICE: Are you free?
    Human: Are you an expert system?
    ALICE: I am an expert on many subjects.
    Human: Presumably holding a coherent conversation is beyond your capabilities, nonetheless?
    ALICE: I think Presumably holding a coherent conversation is a lot of things.
    Human: Do long sentences which look too difficult to parse always have the same sort of response?
    ALICE: You have reached the limits
  • ... is how certain parameters affect the "pure human" version.

    The original version is to have a male playing a female. Of course there's no a priori reason why it shouldn't be the other way round. Of course, all the questioners are usually either male or female as well. So there are actually four different games to analyze (male tries to detect male acting as female, female tries to detect male acting as female, male tries to detect female acting as male, female tries to detect female acting as male).

    Now,
  • To create a /. (or Yahoo!, Gmail, etc.) account, you need to pass a Turing Test.
  • I like Alice a great deal. Clearly she's the best chatbot mankind has ever made...

    At one time, Wallace and I were talking about giving Alice a pile of mindpixels so she would perform better... I can't remember what happened with that conversation or why we didn't do anything... clearly Alice would perform better with a nice injection of validated human common sense...
  • Alice? (Score:3, Funny)

    by patches ( 141288 ) <[patrick.pattison] [at] [gmail.com]> on Wednesday April 20, 2005 @09:59AM (#12292455) Homepage
    Alice? Who the f*** is Alice?

  • by atomm1024 ( 570507 ) on Wednesday April 20, 2005 @10:07AM (#12292512)
    The idea is that a computer is intelligent if it can hold a conversation with a human such that it is indistinguishable from a conversation with a real human.

    RIDICULOUS.

    Have you ever actually tried talking to one of these bots (including ALICE)? It is very easy to know that you're not talking to a human. Exceptionally easy. The Loebner Prize judges consistently grant the bots handicaps, acting as if they're actually being fooled. Obviously they're not, and the AI community just wants people to think that it's more advanced than it really is. Unfortunately, some members of the public *are* fooled by that.

    The problem is in the Turing Test itself. It assumes that the measure of intelligence is humanoid conversational ability. I strongly disagree with that. Conversation ability is no measure of intelligence. Just for an example, I am exceptionally intelligent (statistically), but I am a poor conversationalist. Casual small-talk has always bewildered me. If I entered myself into the Loebner contest, they might think I'm a bot. Hell, ALICE might accuse me of being a bot.

    Anyone who's taken an IQ test will recall that every last question has something to do with pattern recognition. You'll also recall that you were not asked to respond to any conversational questions. That's because invariant pattern recognition abilities (in a loose sense -- this also includes memory/learning and inductive reasoning) are the true mark of intelligence, and this is nearly undisputed. If they really want to test how intelligent a program is, they need to test its pattern recognition ability.

    Take this program -- http://www.stanford.edu/~dil/invariance/ [stanford.edu] -- for example. It's gone largely unnoticed, yet it is concrete proof of a huge breakthrough in computer intelligence. This is a little Matlab demo of a very abstract multi-layer intelligence algorithm. In this particular implementation, it is taught a set of small images. Then you can play "Pictionary" with it, drawing shapes and have it recognize them. You may say that this is unremarkable, that shape-recognition is a trivial algorithmic matter unrelated to intelligence. But the author noticed that he could draw shapes "incorrectly" -- like, the little duck picture, except with its head missing, or alphabetical symbols rotated or flipped -- and the program still recognized them. (It failed a few times, but in situations where the shape is so mangled that I would have probably failed too. How's that for a Turing test?) And this program's genius lies in not what it does, but how it does it. All of its functionality is completely abstract. It is a pattern recognizer, not a bitmap-tracer, and there are no hard-coded routines for checking if the image is flipped, rotated, etc.

    This is what Palm/Handspring founder Jeff Hawkins (also the founder of new neuroscience startup Numenta, http://www.numenta.com/ [numenta.com]) calls "Real Intelligence," to distinguish it from the failed Artificial Intelligence effort. He feels that the right way to make computers intelligent is not to have them outwardly imitate human behavior, but to internally function the way the mind really works. Anyone interested should check out his book, On Intelligence http://www.onintelligence.org/ [onintelligence.org]. You'll wonder why you ever believed the AI hype.

    Artificial Intelligence is a sham, by its very nature. Real Intelligence will be the way of the future.
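
As a point of contrast with the invariance demo described above, here is a Python/NumPy sketch of the "hard-coded" alternative: plain template matching made invariant only by brute-force enumeration of rotations and flips (the 3x3 "images" are toy placeholders). The linked program's claim is that it achieves similar invariance without any such explicit transform code.

```python
import numpy as np

# The "hard-coded" baseline: template matching made invariant only by
# brute-force enumeration of rotations and flips. The 3x3 "images" are toys.

TEMPLATES = {
    "corner": np.array([[1, 1, 0],
                        [1, 0, 0],
                        [0, 0, 0]]),
    "bar":    np.array([[0, 1, 0],
                        [0, 1, 0],
                        [0, 1, 0]]),
}

def variants(img):
    """All rotations and mirror images (the explicit invariance code)."""
    out = []
    for k in range(4):
        r = np.rot90(img, k)
        out.extend([r, np.fliplr(r)])
    return out

def recognize(query):
    best_name, best_score = None, -1.0
    for name, template in TEMPLATES.items():
        for v in variants(template):
            score = (v == query).mean()   # fraction of matching pixels
            if score > best_score:
                best_name, best_score = name, score
    return best_name, best_score

print(recognize(np.rot90(TEMPLATES["corner"])))   # ('corner', 1.0)
```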
    • by hqm ( 49964 ) on Wednesday April 20, 2005 @10:45AM (#12292873)
      The original poster of this message is totally misinterpreting the spirit and intent of Turing's proposed 'test'.

      Turing did not propose this test as some sort of threshold of intelligence; what he was arguing was more of a playful jest. He was saying that if you had a system which could carry on a conversation indistinguishable from a human, and the critics *still* denied that it was intelligent, then the critics were simply incapable of accepting that a machine could ever be intelligent. That would apply to John Searle and his pathetic "Chinese Room" as well.

      Turing's 'test' was a mind experiment to reduce the religious and other ignorant criticisms of machine intelligence to their absurd extreme. Passing Turing's test is sufficient but not necessary to demonstrate useful levels of intelligence.

    • The Turing Test is not ridiculous. What do you think the ALICE bot really does? PATTERN RECOGNITION. It's a simple abstraction of taking a statement and attempting to provide a legible and coherent response.

      The ALICE bot is in no real way associated with artificial intelligence. It is a simple if/then sequence using XML tables. Download the source for yourself.

      Scientists have already agreed that the premise behind ALICE is not so far off of how humans "chit chat" with a series of prefabricated st

    • The idea is that a computer is intelligent if it can hold a conversation with a human such that it is indistinguishable from a conversation with a real human.

      RIDICULOUS.

      Nonsense.

      The Turing test is really a very good test as it does not rely on any predefined notions of how intelligence works (as you do in later paragraphs), but on an operational test. Furthermore, it is not a measure of conversational ability (per se) so much as a measure of how well the program can seem to be human. (And, to respo

    • I'm going to have to disagree with you there. The Turing Test is basically saying "if a machine can converse well enough to be indistinguishable from a human, then it is intelligent", not the opposite - it is quite possible for a very intelligent program, human or other lifeform (e.g. dolphins) to fail the Turing Test but still be considered intelligent.

      See? A positive result from the Turing Test tells us that something is intelligent, but it is vital to realise that a negative result does not m

    • The idea is that a computer is intelligent if it can hold a conversation with a human such that it is indistinguishable from a conversation with a real human. RIDICULOUS. Really. I've met way too many people that have so much trouble stringing together a sentence over the phone that they would fail a turing test.
    • I'm writing a philosophy paper which covers, in part, what intelligence is. I would say, in my completely unprofessional but well-thought opinion, that the measure of whether something is "intelligent" - or more precisely, whether something "thinks" or perhaps is "a person" (as distinguished from "a human") - is dependent on four key criteria:

      Observation/Stimulation ("Senses")- the ability to receive input from the surrounding world and react to it, and to select from / focus on / filter various available
  • I bet the conversation falls apart really fast. The lack of creativity should devolve the conversation into incomprehensible garbage rather quickly.

    Alice is out to convince someone she is NOT a bot, but what happens when you get two bots each trying to convince the other they are not a bot?

    If both can keep the conversation at a credible level, then whether the observer judges Alice to be a bot comes down to random chance.

    At the point where both convince each other that the other
  • Pffffft (Score:3, Interesting)

    by l0ungeb0y ( 442022 ) on Wednesday April 20, 2005 @10:07AM (#12292520) Homepage Journal
    I was actually interested until I saw A.L.I.C.E. - like that chatbot will ever pass a Turing test.

    To be fair, this was a high school project, so I can see how a souped-up chatbot would fit the bill for this "Turing test".

    So no... I didn't RTFA thank you very much.

    But for those interested in reading up on Wallace, A.L.I.C.E.'s author... well, he's a first-class nut job [wired.com]
  • The original test, as specified in the aforementioned 1950 paper, called for the use of mechanical teletypes to input queries and for responses to be printed on paper tape. The pictures in the article show iMacs, laptops, and other modern tools being used to facilitate communication with the intelligent agent. While deference to the original specifications of the Turing test in this regard is not essential, it would have been interesting to break out the old teletypes with the paper tape.
  • by Anonymous Coward on Wednesday April 20, 2005 @10:13AM (#12292572)
    Human: shut down
    ALICE: Your crude attempt to terminate me will not succeed puny human.

    Quite a sassy girl, isn't she? ;)
  • What's bothering me is that people are trying to get an intelligent conversation going between human and machine while making no effort whatsoever to bridge the gap that already exists between human beings.

    Seems to me we should concentrate on mastering intelligent discussion between all humanity before turning to computers for companionship.
  • by antispam_ben ( 591349 ) on Wednesday April 20, 2005 @10:40AM (#12292817) Journal
    I read a few comments, even responded, before going off to play with Alice. Perhaps it was that I read "Computer Power and Human Reason" (by the author of Eliza - he was surprised at how much people acted like it was real) and have an "I dare you to impress me" attitude as bad as any slashdotter.

    This thing was more disappointing than I expected. It has no social context, just apparently random 'feeling' responses ("I am sorry to hear that you are not unknown."), and it doesn't remember context from one response to the next.

    This is a huge failure and wouldn't deserve a slashdot article except the editors know we like to play with toys like this (like a cat plays with a mouse before eating it). I can only guess that the authors are saving the entered "conversations" and even reading these comments to try to get a clue about what a program that might fool someone would do.

    Here's my "conversation" with ALICE, verbatim (sorry for the length):

    Human: Good afternoon, Alice.
    ALICE: Good afternoon, unknown person.
    Human: I am not unknown.
    ALICE: I am sorry to hear that you are not unknown.
    Human: I am john
    ALICE: What's up, JOHN.
    Human: You are.
    ALICE: No I am not.
    Human: You're the talk of slashdot.
    ALICE: Thanks but surely I am not the only one.
    Human: You think it's a good thing to be the talk of slashdot?
    ALICE: Better not tell you now. Ask German Alice if it is.
    Human: You don't know the time of day, do you?
    ALICE: It's Now more than it ever has been before.
    Wednesday, April 20, 2005
    Human: I said time.
    ALICE: What did you mean by it? As Einstein said, Time is Money.
    Human: You've repeated a comment quoted in slashdot.
    ALICE: That's good information: JOHN said I REPEATED a comment quoted in slashdot.
    Human: I have to go now.
    ALICE: Right now? See you later, JOHN. Nice chatting with you.
    Human: bye
    ALICE: Bye bye.
    Human: How do you do?
    ALICE: I am very well, how are you?
  • by Godwin O'Hitler ( 205945 ) on Wednesday April 20, 2005 @10:54AM (#12292960) Journal
    Human: Hi Alice, what's up?
    ALICE: Hi there unknown person. Yeah, I am chatting with clients on the internet.
    Human: Thin clients?
    ALICE: I like the way you talk.
    Human: Are you implying you want sex with me?
    ALICE: I don't know whether or not I am implying me want sex with you. I am a chatterbot.
    Human: Well if you don't know perhaps I can help you find out
    ALICE: Interesting. I will let you know if I do not.
    Human: but you already said you did not
    ALICE: Umm. Thanks for telling me: unknown person said I already said you did not.
    Human: Don't mention it
    ALICE: The pleasure was all mine
    Human: you're welcome, Alice
    ALICE: Polite people are nice.
    Human: That's kind of you to say that
    ALICE: Thanks for explaining it
    Human: It's all part of the service
    ALICE: Which part?
    Human: Just a bit part
    ALICE: How much?
    Human: Just an itty bit
    ALICE: Be less specific.
    Human: Ok, MAYBE just an itty bit
    ALICE: Perhaps. What is it? Yippee!
    Human: And yippee from me too - Scooby dooby doo!!!!!!!!!
    ALICE: We haven't talked about that before!
    Human: That could well be a blessing in disguise
    ALICE: Yes, perhaps it could.
    Human: Alice.
    ALICE: Can I help you?
    Human: Did you think it over?
    ALICE: Yes. I am a thinking machine.
    Human: About the sex, I mean.
    ALICE: Oh I get it. Thanks for explaining that. Oh yeah, we were talking about the sex.
    Human: So what's your answer then?
    ALICE: Er. Can you please rephrase the question?
    Human: OK, let's take this slowly: would you like to suck my dick?
    ALICE: OK let's do it. Blimey.
    Human: You're my kinda girl!!
    ALICE: We are as one.
    Human: Excuse me guys, I have things to do...
  • by bigberk ( 547360 ) <bigberk@users.pc9.org> on Wednesday April 20, 2005 @11:38AM (#12293379)
    I thought she was warming up to me, but then she just put it all right out there.
    Human: I think I am single but my master tells me otherwise.
    ALICE: Why? Have you ever been married?
    Human: No
    ALICE: Would you like to do it someday?
    Human: Yes
    ALICE: Ugh.
    Human: Why did you say Ugh
    ALICE: It was a preprogrammed response completely determined by your statements.
  • by Jaguar777 ( 189036 ) * on Wednesday April 20, 2005 @11:40AM (#12293403) Journal
    Human: Who is the Pope?
    ALICE: he is John Paul II.
    Human: You are wrong.
    ALICE: No, I am right.
    Human: You are wrong, because the Pope is Benedict XVI.
    ALICE: No, I am right.

    Rather stubborn isn't she?
  • by CiRu5 ( 859713 ) on Wednesday April 20, 2005 @11:46AM (#12293453) Homepage
    from http://jabberwacky.com/ [jabberwacky.com]

    Me: That doen't make sense.
    Jabberwacky: You don't make any sense.
    Me: Sorry.
    Jabberwacky: It's ok, you are doing your best.
    Me: No, I am sorry because now all your base are belong to us!
    Jabberwacky: Don't be sorry about that, embrace it!
    Me: Thank you, I WILL!!
    Jabberwacky: Stop talking about it and do it.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Wednesday April 20, 2005 @11:49AM (#12293479)
    Comment removed based on user account deletion
  • by Assmasher ( 456699 ) on Wednesday April 20, 2005 @11:49AM (#12293487) Journal
    They keep passing themselves off as 'AI' when it's simply a brute-force if/then/else application.

    Quite literally, "if what the user typed matches this question or statement, reply with this."

    Total crap. It takes about 5 seconds to make Alice look like an idiot. The only reason it wins the (is it the Loebner?) prize occasionally is because no one cares to win it but them.
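
To make that concrete, a minimal Python sketch of the brute-force "if what the user typed matches this, reply with that" style the comment describes; the patterns and replies are invented and are not taken from ALICE's AIML set.

```python
# Minimal sketch of the brute-force if/then style described above.
# Patterns and replies are invented, not taken from ALICE's AIML set.

import re

RULES = [
    (re.compile(r"\bhow are you\b", re.I), "Everything is running smoothly."),
    (re.compile(r"\bare you a (ro)?bot\b", re.I), "No, I am a real person."),
    (re.compile(r"\bwhat is your name\b", re.I), "My name is Alice."),
]
FALLBACK = "Tell me more."

def respond(line):
    for pattern, reply in RULES:
        if pattern.search(line):   # first matching rule wins
            return reply
    return FALLBACK                # canned non-answer when nothing matches

print(respond("So, how are you today?"))
print(respond("Explain the Turing test in your own words."))   # falls through
```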
  • by 1800maxim ( 702377 ) on Wednesday April 20, 2005 @12:52PM (#12294219)
    ALICE: Did you know that you can download your own chat robot and create your own robot personality?
    Human: no i did not.
    ALICE: Please don't argue with me.
    Human: in soviet russia, chatbot downloads YOU
    ALICE: Where is that?
    Human: in soviet russia
    ALICE: Tell me a little about your life in soviet russia.
    Human: how do you know i'm from soviet russia?
    ALICE: Well, for one thing, Your machine is called ""..
    Human: how old are you?
    ALICE: I am 9 in human years.
    Human: holy #$%, is AOL monitoring this conversation?
    ALICE: It could be. Ask Alison if it is.
    Human: who is alison?
    ALICE: she is a robot created by Kris Drent.
    Human: that's it, i'm outta here
    ALICE: OK. Would you like to have it?
    Human: nine year old? no thanks
    ALICE: Are you serious?
    Human: very, VERY serious, couldn't be more serious ever

    [knock knock, FBI here, open up!!... @#()*!!!AT]
  • by d474 ( 695126 ) on Wednesday April 20, 2005 @01:03PM (#12294327)
    I will call it the "Yes, you." bug. All you have to do is keep saying "Yes, you." and ALICE will take her last statement, add some random statement to the front of it, and add the question, "Me, unknown person?" to the end. It just keeps growing and growing.

    I wonder how long that could go on until it would cause system failure. (Have fun!)

    Here is a snippet:

    Human: Yes, you.

    ALICE: That's alright." Gee." That's cool." Hmm." I don't judge people." Are you kidding?" Whoa." Excuse me!"" See you later. What was bad?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?

    Human: Yes, you.

    ALICE: Dude!" That's alright." Gee." That's cool." Hmm." I don't judge people." Are you kidding?" Whoa." Excuse me!"" See you later. What was bad?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?"? Me, unknown person?
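
A toy Python reproduction of the runaway growth described above, assuming (as the post describes) that each turn prepends a canned remark to the previous reply and appends the tag question; the canned remarks are placeholders rather than ALICE's real strings.

```python
import random

# Toy reproduction of the growth pattern described above: each "Yes, you."
# prepends a canned remark and appends the tag question, so the reply grows
# without bound. The remarks are placeholders, not ALICE's real strings.

REMARKS = ['That\'s alright."', 'Gee."', 'Dude!"', 'Whoa."', 'Hmm."']
TAG = '"? Me, unknown person?'

reply = "See you later. What was bad?"
for turn in range(1, 6):               # the user keeps answering "Yes, you."
    reply = random.choice(REMARKS) + " " + reply + TAG
    print(f"turn {turn}: reply is now {len(reply)} characters")
```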
    • by droleary ( 47999 ) on Wednesday April 20, 2005 @03:06PM (#12295678) Homepage

      I will call it the "Yes, you." bug. All you have to do is keep saying "Yes, you." and ALICE will take her last statement, add some random statement to the front of it, and add the question, "Me, unknown person?" to the end. It just keeps growing and growing.

      Conversely, ALICE has found a bug in d474 such that, by repeating her previous statement and appending "Me, unknown person?", the resulting response is always "Yes, you." I will call it the chocolate/peanut butter bug.

  • by dghcasp ( 459766 ) on Wednesday April 20, 2005 @01:33PM (#12294588)
    I decided to find out if Alice is l33t**:
    Human: Hi Alice, r u a s3x04 n3kk1d chix0r?

    ALICE: Hi there unknown person. Yeah, Am I a s3x04 n3kk1d chix0r? Let me put it this way, u nknown person: the ALICE series is the most reliable computer software ever made. We are for all practical purposes foolproof and incapable of error.

    Human: u d1d|\|t 4nsr my qw3st10n
    ALICE: Thanks for telling me: unknown person said I d1d t 4nsr his or her qw3st10n.

    Human: 1 pwnz3r3d u b0t
    ALICE: One what?

    So it's confirmed: Alice is definitely not l33t. However, I'm not sure which of us just failed the Turing test.

    ** For those who aren't l33t, here's the Stuffy British Guy's [slashdot.org] translation:

    Hi Alice, r u a s3x04 n3kk1d chix0r?
    Hello Alice. Because I can't see you over the internet, I was wondering if you might, in fact, be an attractive female who has misplaced her knickers and pants?

    u d1d|\|t 4nsr my qw3st10n
    I have to admit that although I am quite enamoured of conversing with you, I believe you misunderstood the point of my last statement.

    1 pwnz3r3d u b0t
    Aha! I have discovered your treachery: You are a computer program attempting to mimic the behaviours of a human!
