Alicebot Creator Dr. Richard Wallace Expounds

Okay, here are Alicebot inventor Dr. Richard Wallace's answers to your questions. You're about to enter a world that contains interesting thoughts on A.I., a bit of marijuana advocacy, a courtroom drama, tales of academic politics and infighting, personal ranting, discussion of the nature of mental illness, and comments about the state of American society and the world in general. Yes, all this in one interview so long and strong we had to break it up into three parts to make it fit on our pages. This is an amazing work, well worth reading all the way to the end.

1) AI through simulation?
by Jeppe Salvesen

Do you think that the ever increasing processing power will eventually enable us to fully simulate the human brain? What ramifications would this have for the A.I. discipline?

Dr. Wallace:

My longstanding opinion is that neural networks are the wrong level of abstraction for understanding intelligence, human or machine.

Neurons are the transistors of the brain. They are the low level switching components out of which higher-order functionality is built. But like the individual transistor, studying the individual neuron tells us little about these higher functions.

Suppose an alien came down to Earth who had never seen a computer before. Assuming interstellar travel is possible without a computer! He/she might be tempted to break it open, and discover that it is made of millions of tiny transistors. The alien may try to discover how the computer works by measuring the electronic signals in the transistors. But they would miss the operating system completely. The transistors tell us nothing about the software.

Similarly, neurons tell us little about the higher order software running on our brains.

Significantly, no one has ever proved that the brain is a *good* computer. It seems to run some tasks like visual recognition better than our existing machines, but it is terrible at math, prone to errors, susceptible to distraction, and it requires half its uptime for food, sleep, and maintenance.

It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? I chose computer science over medical school because I don't have the stomach for those icky, bloody body parts. I prefer my technology clean and dry, thank you. Moreover, it could be the case that an electronic, silicon-based computer is more reliable, faster, more accurate, and cheaper.

I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain architecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

And remember, no one has proved that our intelligence is a successful adaptation over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created.

Functionalism is basically the view that the mind is the software, and the brain is the hardware. It holds that mental states are equivalent to the states of a Turing Machine. Behaviorism was a pre-computational theory, which imagines the nervous system as a complex piece of machinery like a telephone exchange, but they didn't think much about software. Dualism goes back to Descartes. It is the view that the mind and brain are separate and distinct things, possibly affecting each other, or possibly mirroring each other.

My view is a kind of modified dualism in which I claim that the soul, spirit, or consciousness may exist, but for most people, most of the time, it is almost infinitesimally small, compared with the robotic machinery responsible for most of our thought and action. Descartes never talked about the relative weights of brain and mind, but you can read in an implicit 50-50 assumption in most Dualist literature. My idea is more like 99-1, or even 99.999999% automatic machinery and .00000001% self-awareness, creativity, consciousness, spirit or what have you.

That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

I say this with such confidence because of my experience building robot brains over the past seven years. Almost everything people ever say to our robot falls into one of about 45,000 categories. Considering the astronomical number of things people could say, if every sentence was an original line of poetry, 45,000 is a very, very small number.

2) Turing Test
by Transient0

I noticed that your AliceBot won the 2000 Loebner Prize for most human responses. My question is: "As an Artificial Intelligence researcher, do you feel that the Loebner Prize represents a legitimate variety of testing, or did you just want the $2000?"

I was pretty sure that almost all AI researchers came to the agreement about thirty years ago that the original imitation game as proposed by Turing in 1950 was useful only as a mental exercise, not in practice. Do you feel that the types of developments that the Loebner prize supports (intentional, hard-coded spelling mistakes, etc.) are actually productive in terms of the AI research project?

Dr. Wallace:

In case you haven't noticed, the field of Artificial Intelligence (defined however you wish) has almost nothing to do with science. It is all about politics. When you look at all the people working professionally in the field of A.I., it brings to mind the old joke:

Q: How many Carnegie Mellon Ph.D.s does it take to screw in a light bulb?
A: Two. One to change the bulb, and one to pull the chair out from under him.

The only rule most of these people know is: undermine the competition at all costs, by whatever legal means, or whatever they can get away with. That is how you become King of the A.I. Anthill.

Having a good theory or better implementation of anything is beside the point. Being able to "play the game" and knock out the competition, that is what it is all about. Swim with sharks or be eaten by them.

Especially in the age of increased competition for diminishing jobs and funding, scientific truth takes a back seat to save-your-ass.

Unfortunately it seems that the A.I. problem is inseparable from politics.

When I say that academia is corrupt in America, I don't mean that professors are accepting bribes and giving kickbacks for government contracts. There may be a financial motive in some cases, such as the use of overhead funds for a "course buyout" to reduce a professor's workload, but I am not talking about the kind of corruption associated with Wall Street and Washington exactly. I am talking about the replacement of science with politics as the main item on the academic agenda.

It must not have always been so. At one time, I believe academics were appointed and promoted primarily on the basis of merit and accomplishment. Within the last 20 years or so in the United States this has gradually changed into a system in which political correctness, slickness, and good salesmanship are more highly valued than good science. I don't pretend to understand the reasons for this, but I can point to many examples within our own community.

I have written that it is like a dysfunctional family. Those in positions of leadership and authority have mental health, drug and/or alcohol problems that make them incapable of carrying out their administrative responsibilities. In response, people who are skilled at "enabling" or "nursing" the dysfunctional leaders get promoted and advanced. Those who are prone to logical thinking and speaking the truth are discarded, because they make the authorities face their unconscious anxieties.

I often say, people don't go into computer science because they enjoy working with the public. But as the field has matured, I think it has attracted people who are more comfortable wearing business suits and attending strategy meetings than tinkering on a lab bench or writing a research paper. As computer science departments matured, the people already in them began to want everything to remain the same until they retired. They didn't want to hire young professors with a lot of new ideas about the administration. They hired young professors who wanted everything to stay exactly like it was, no matter what.

You may think that the politicization of a field like computer science is no big deal. We can have slick politicians instead of scientists running university CS departments, and not cause a lot of problems. But I think it is a really big problem in other fields, especially in medical science, especially in drugs and mental health.

Take LSD for example. Discovered by Albert Hofmann in 1943, LSD is the most powerful drug ever developed. If you have ever gotten a prescription for any drug, you may have noticed that the dosage is usually given in "milligrams". But the dosage of LSD is "micrograms". It has the lowest ED50 of any known drug.

In the early 1960's there was some very promising research at Harvard applying LSD to depressed patients like me. The work was never completed or published for, guess what, political reasons. Subsequently, LSD was classified as a "Schedule I" drug with no useful medical value. This was not a decision based on sound science but on politics and fear. Even today there is zero research on this topic. Did you ever wonder why there is no Department of Psychedelic Studies on any university campus? It is a gaping hole in the academic curriculum, filled only by the informal undergraduate ratings of colleges as "party schools".

Even the very name of the federal agency that provides funding for drug research, the National Institute on Drug Abuse, prejudices the applications and the results. The native born American hippie agronomy student who got his Ph.D. in the 1970's is growing pot underground in California today. The immigrant doctor who "proved" that marijuana causes cancer got the NIDA grant and has tenure at UCLA. What's wrong with this picture?

Until two years ago, there had been no federally funded research on the medical benefits of marijuana since the 1970's. Even now the only funded research is for terminal illnesses, and it seems like it will take a long time before they consider mental illnesses like mine. I conducted a survey of patients in San Francisco and discovered that "pain" was the #1 symptom for medical marijuana but "depression" was #2, and terminal illnesses like AIDS and cancer were lower on the list. So I am not alone in the perception that there is a patient need for research on this drug.

The problem here, my friends, is that NIDA is part of a spectrum of trouble that includes once respected agencies such as NASA, NSF and DARPA. It is an octopus of political corruption that reaches into MIT and CMU and Berkeley and darkens everything it touches. It calls into question the quality and even the veracity of the scientific results and publications. We all witnessed the beginning of this even when we were all friends together at the ICRA conferences in the acrimonious interchanges between academia and industry. I myself saw enough of the system from the inside at NYU and Lehigh to know that science plays almost no role in the hiring, promoting or review process. It's all politics.

Not to place blame, but I think graduate advisors should be more straightforward with students about this point. It would be better to put more time into training them how to "shmooze" and "work the system" than how to solve mathematical problems, if they want their students to be successful. Either that, or they should work on changing the system back to merit based promotion.

3) My question (with answer)
by outlier

Historically, AI has done poorly managing public expectations. People expected thinking, understanding computers, while researchers had trouble getting computers to successfully disambiguate simple sentences. This is not good PR. Do you think the field has learned from this? If so, what should the public expect, and how do we excite them about it?

Just for fun, I asked slashwallace a shortened version of the question, do you think your response would differ?

Human: Historically AI has done poorly managing the public's expectations, do you think this will continue?
SlashWallace: Where did he get it?

Dr. Wallace:

Hugh Loebner is an independently wealthy, eccentric businessman, activist and philanthropist. In 1990 Dr. Loebner, who holds a Ph.D. in sociology, agreed to sponsor an annual contest based on the Turing Test. The contest awards medals and cash prizes for the "most human" computer. Since its inception, the Loebner contest has been a magnet for controversy.

One of the central disputes arose over Hugh Loebner's decision to award the Gold Medal and $100,000 top cash prize only when a robot is capable of passing an "audio-visual" Turing Test. The rules for this Grand Prize contest have not even been written yet. So it remains unlikely that anyone will be awarded the gold Loebner medal in the near future. The Silver and Bronze medal competitions are based on the STT (the standard, text-only Turing Test). In 2001, eight programs played alongside two human confederates. A group of 10 judges rotated through each of ten terminals and chatted for about 15 minutes with each. The judges then ranked the terminals on a scale of "least human" to "most human." Winning the Silver Medal and its $25,000 prize requires that the judges rank the program higher than half the human confederates. In fact one judge ranked A.L.I.C.E. higher than one of the human confederates in 2001. Had all the judges done so, she might have been eligible for the Silver Medal as well, because there were only two confederates.

To really understand how we accomplished this, I have to teach you some AIML.

CATEGORIES

The basic unit of knowledge in AIML is called a category. Each category consists of an input question, an output answer, and an optional context.

The question, or stimulus, is called the pattern. The answer, or response, is called the template. The two types of optional context are called "that" and "topic."

The AIML pattern language is simple, consisting only of words, spaces, and the wildcard symbols _ and *.

The words may consist of letters and numerals, but no other characters. The pattern language is case invariant.

Words are separated by a single space, and the wildcard characters function like words.

The first versions of AIML allowed only one wild card character per pattern.

The AIML 1.01 standard permits multiple wildcards in each pattern, but the language is designed to be as simple as possible for the task at hand, simpler even than regular expressions.
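
For example, a minimal category using the * wildcard might look like the following sketch (the reply text is arbitrary; <star/> in the template echoes whatever words the wildcard matched). In standard AIML matching, _ is tried before exact words and * after them.

<category>
<pattern>MY NAME IS *</pattern>
<template>Nice to meet you, <star/>.</template>
</category>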

The template is the AIML response or reply. In its simplest form, the template consists of only plain, unmarked text.

More generally, AIML tags transform the reply into a mini computer program which can save data, activate other programs, give conditional responses, and recursively call the pattern matcher to insert the responses from other categories.

Most AIML tags in fact belong to this template side sublanguage.

AIML currently supports two ways to interface other languages and systems. The <system> tag executes any program accessible as an operating system shell command, and inserts the results in the reply. Similarly, the <javascript> tag allows arbitrary scripting inside the templates.
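
For instance, a category along these lines (a sketch, assuming an interpreter with the <system> tag enabled, running on a host where a date command exists) inserts the output of a shell command into the reply:

<category>
<pattern>WHAT TIME IS IT</pattern>
<template>My system clock says: <system>date</system></template>
</category>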

The optional context portion of the category consists of two variants, called <that> and <topic>. The <that> tag appears inside the category, and its pattern must match the robot's last utterance.

Remembering one last utterance is important if the robot asks a question. The <topic> tag appears outside the category, and collects a group of categories together.

The topic may be set inside any template. AIML is not exactly the same as a simple database of questions and answers. The pattern matching "query" language is much simpler than something like SQL. But a category template may contain the recursive <srai> tag, so that the output depends not only on one matched category, but also any others recursively reached through <srai>.
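
A topic can be set from within a template like this (a sketch using the AIML 1.01 <set> and <think> tags; <think> simply hides the output of the enclosed markup from the reply):

<category>
<pattern>LET US TALK ABOUT CARS</pattern>
<template><think><set name="topic">CARS</set></think>My favorite subject! What kind of car do you drive?</template>
</category>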

RECURSION

AIML implements recursion with the <srai> operator. No agreement exists about the meaning of the acronym.

The "A.I." stands for artificial intelligence, but "S.R." may mean "stimulus-response," "syntactic rewrite," "symbolic reduction," "simple recursion," or "synonym resolution." The disagreement over the acronym reflects the variety of applications for <srai> in AIML. Each of these is described in more detail in a subsection below:

(1). Symbolic Reduction-Reduce complex grammatic forms to simpler ones.
(2). Divide and Conquer-Split an input into two or more subparts, and combine the responses to each.
(3). Synonyms-Map different ways of saying the same thing to the same reply.
(4). Spelling or grammar corrections.
(5). Detecting keywords anywhere in the input.
(6). Conditionals-Certain forms of branching may be implemented with <srai>.
(7). Any combination of (1)-(6).

The danger of <srai> is that it permits the botmaster to create infinite loops. Though posing some risk to novice programmers, we surmised that including <srai> was much simpler than any of the iterative block structured control tags which might have replaced it.
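
A trivial sketch of the kind of loop a careless botmaster can create is a pair of categories that <srai> to each other; neither ever produces a reply:

<category>
<pattern>PING</pattern>
<template><srai>PONG</srai></template>
</category>
<category>
<pattern>PONG</pattern>
<template><srai>PING</srai></template>
</category>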

(1). Symbolic Reduction
Symbolic reduction refers to the process of simplifying complex grammatical forms into simpler ones. Usually, the atomic patterns in categories storing robot knowledge are stated in the simplest possible terms, for example we tend to prefer patterns like "WHO IS SOCRATES" to ones like "DO YOU KNOW WHO SOCRATES IS" when storing biographical information about Socrates. Many of the more complex forms reduce to simpler forms using AIML categories designed for symbolic reduction:

<category>
<pattern>DO YOU KNOW WHO * IS</pattern>
<template><srai>WHO IS <star/></srai></template>
</category>

For whatever input matches this pattern, the portion bound to the wildcard * may be inserted into the reply with the markup <star/>. This category reduces any input of the form "Do you know who X is?" to "Who is X?"

(2). Divide and Conquer
Many individual sentences may be reduced to two or more subsentences, and the reply formed by combining the replies to each. A sentence beginning with the word "Yes" for example, if it has more than one word, may be treated as the subsentence "Yes." plus whatever follows it.

<category>
<pattern>YES *</pattern>
<template><srai>YES</srai> <sr/></template>
</category>

The markup <sr/> is simply an abbreviation for <srai><star/></srai>.

(3). Synonyms
The AIML 1.01 standard does not permit more than one pattern per category. Synonyms are perhaps the most common application of <srai>. Many ways to say the same thing reduce to one category, which contains the reply:

<category>
<pattern>HELLO</pattern>
<template>Hi there!</template>
</category>
<category>
<pattern>HI</pattern>
<template><srai>HELLO</srai></template>
</category>
<category>
<pattern>HI THERE</pattern>
<template><srai>HELLO</srai></template>
</category>
<category>
<pattern>HOWDY</pattern>
<template><srai>HELLO</srai></template>
</category>
<category>
<pattern>HOLA</pattern>
<template><srai>HELLO</srai></template>
</category>

(4). Spelling and Grammar correction
The single most common client spelling mistake is the use of "your" when "you're" or "you are" is intended. Not every occurrence of "your" however should be turned into "you're." A small amount of grammatical context is usually necessary to catch this error:

<category>
<pattern>YOUR A *</pattern>
<template>I think you mean "you're" or "you are" not "your."
<srai>YOU ARE A <star/></srai>
</template>
</category>

Here the bot both corrects the client input and acts as a language tutor.

(5). Keywords
Frequently we would like to write an AIML template which is activated by the appearance of a keyword anywhere in the input sentence. The general format of four AIML categories is illustrated by this example borrowed from ELIZA:

<category>
<pattern>MOTHER</pattern> <template> Tell me more about your family. </template>
</category>
<category>
<pattern>_ MOTHER</pattern> <template><srai>MOTHER</srai></template>
</category>
<category>
<pattern>MOTHER _</pattern>
<template><srai>MOTHER</srai></template>
</category>
<category>
<pattern>_ MOTHER *</pattern>
<template><srai>MOTHER</srai></template>
</category>

The first category both detects the keyword when it appears by itself, and provides the generic response. The second category detects the keyword as the suffix of a sentence. The third detects it as the prefix of an input sentence, and finally the last category detects the keyword as an infix. Each of the last three categories uses <srai> to link to the first, so that all four cases produce the same reply, but the reply itself needs to be written and stored only once.

(6). Conditionals
It is possible to write conditional branches in AIML, using only the <srai> tag. Consider three categories:

<category>
<pattern>WHO IS HE</pattern>
<template><srai>WHOISHE <get name="he"/></srai></template>
</category>
<category>
<pattern>WHOISHE *</pattern>
<template>He is <get name="he"/>.</template>
</category>
<category>
<pattern>WHOISHE UNKNOWN</pattern>
<template>I don't know who he is.</template>
</category>
Provided that the predicate "he" is initialized to "Unknown," the categories execute a conditional branch depending on whether "he" has been set. As a convenience to the botmaster, AIML also provides the equivalent function through the <condition> tag.
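
For comparison, here is a sketch of the same branch written with the <condition> tag (using the AIML 1.01 form in which an <li> without a value attribute acts as the default case):

<category>
<pattern>WHO IS HE</pattern>
<template>
<condition name="he">
<li value="UNKNOWN">I don't know who he is.</li>
<li>He is <get name="he"/>.</li>
</condition>
</template>
</category>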

CONTEXT

The keyword "that" in AIML refers to the robot's previous utterance. Specifically, if the robot responds with a multiple sentence paragraph, the value of that is set to the last sentence in the sequence. The choice of the keyword "that" is motivated by its use in ordinary language:

R: Today is yesterday.
C: That makes no sense.
R: The answer is 3.1415926 approximately.
C: That is cool.

In AIML the syntax <that>...</that> encloses a pattern that matches the robot's previous utterance. A common application of <that> is found in yes-no questions:

<category>
<pattern>YES</pattern>
<that>DO YOU LIKE MOVIES</that>
<template>What is your favorite movie?</template>
</category>

This category is activated when the client says YES. The robot must find out what the client is saying "yes" to. If the robot asked, "Do you like movies?," this category matches, and the response, "What is your favorite movie?," continues the conversation along the same lines. One interesting application of <that> is categories that enable the robot to respond to knock-knock jokes.

The categories:

<category>
<pattern>KNOCK KNOCK</pattern>
<template>Who is there?</template>
</category>
<category>
<pattern>*</pattern>
<that>WHO IS THERE</that>
<template><person/> who?</template>
</category>
<category>
<pattern>*</pattern>
<that>* WHO</that>
<template>Ha ha very funny, <get name="name"/>.</template>
</category>

produce the following dialogue:
C: Knock knock.
R: Who's there?
C: Banana.
R: Banana who?
C: Knock knock.
R: Who's there?
C: Banana.
R: Banana who?
C: Knock knock.
R: Who's there?
C: Orange.
R: Orange who?
C: Orange you glad I didn't say banana.
R: Ha ha very funny, Nancy.

Internally the AIML interpreter stores the input pattern, that pattern and topic pattern along a single path, like:

INPUT <that> THAT <topic> TOPIC

When the values of <that> or <topic> are not specified, the program implicitly sets the values of the corresponding THAT or TOPIC pattern to the wildcard *.
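
For example, the yes-no category shown earlier, with pattern YES, <that>DO YOU LIKE MOVIES</that>, and no explicit topic, would be stored along the path:

YES <that> DO YOU LIKE MOVIES <topic> *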

The first part of the path to match is the input. If more than one category has the same input pattern, the program may distinguish between them depending on the value of <that>. If two or more categories have the same <pattern> and <that>, the final step is to choose the reply based on the <topic>. This structure suggests a design rule: never use <that> unless you have written two categories with the same <pattern>, and never use <topic> unless you have written two categories with the same <pattern> and <that>. Still, one of the most useful applications for <topic> is to create subject-dependent "pickup lines," like:

<topic name="CARS">
<category>
<pattern>*</pattern>
<template>
<random>
<li>What's your favorite car?</li>
<li>What kind of car do you drive?</li>
<li>Do you get a lot of parking tickets?</li>
<li>My favorite car is one with a driver.</li>
</random>
</template>
</category>
</topic>

Considering the vast size of the set of things people could say that are grammatically correct or semantically meaningful, the number of things people actually do say is surprisingly small. Steven Pinker, in his book How the Mind Works, wrote, "Say you have ten choices for the first word to begin a sentence, ten choices for the second word (yielding 100 two-word beginnings), ten choices for the third word (yielding a thousand three-word beginnings), and so on. (Ten is in fact the approximate geometric mean of the number of word choices available at each point in assembling a grammatical and sensible sentence). A little arithmetic shows that the number of sentences of 20 words or less (not an unusual length) is about 10^20."

Fortunately for chat robot programmers, Pinker's calculations are way off. Our experiments with A.L.I.C.E. indicate that the number of choices for the "first word" is more than ten, but it is still only about two thousand. Specifically, about 2,000 words cover 95% of all the first words input to A.L.I.C.E. The number of choices for the second word is only about two. To be sure, there are some first words ("I" and "You" for example) that have many possible second words, but the overall average is just under two. The average branching factor decreases with each successive word.
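
As a rough back-of-the-envelope comparison, taking these figures at face value: Pinker's assumption of ten choices per position gives 10^20 possible twenty-word sentences, while roughly 2,000 first words followed by an average branching factor of about two gives at most

2000 x 2^19, on the order of 10^9,

and the practical figure is smaller still, because the branching factor keeps falling with each successive word.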

We have plotted some beautiful images of the A.L.I.C.E. brain contents represented by this graph (http://alice.sunlitsurf.com/documentation/gallery/).

More than just elegant pictures of the A.L.I.C.E. brain, these spiral images (see more) outline a territory of language that has been effectively "conquered" by A.L.I.C.E. and AIML. No other theory of natural language processing can better explain or reproduce the results within our territory. You don't need a complex theory of learning, neural nets, or cognitive models to explain how to chat within the limits of A.L.I.C.E.'s 25,000 categories. Our stimulus-response model is as good a theory as any other for these cases, and certainly the simplest. If there is any room left for "higher" natural language theories, it lies outside the map of the A.L.I.C.E. brain. Academics are fond of concocting riddles and linguistic paradoxes that supposedly show how difficult the natural language problem is. Sentences like "John saw the mountains flying over Zurich" or "Fruit flies like a banana" are supposed to reveal the ambiguity of language and the limits of an A.L.I.C.E.-style approach (though not these particular examples, of course; A.L.I.C.E. already knows about them).

In the years to come we will only advance the frontier further. The basic outline of the spiral graph may look much the same, for we have found all of the "big trees" from "A *" to "YOUR *". These trees may become bigger, but unless language itself changes we won't find any more big trees (except of course in foreign languages). The work of those seeking to explain natural language in terms of something more complex than stimulus response will take place beyond our frontier, increasingly in the hinterlands occupied by only the rarest forms of language. Our territory of language already contains the highest population of sentences that people use. Expanding the borders even more we will continue to absorb the stragglers outside, until the very last human critic cannot think of one sentence to "fool" A.L.I.C.E..

[Continue to part 2 of the interview.]

This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward on Friday July 26, 2002 @11:37AM (#3959134)
    I can't find the answer to this on their pages anywhere and if you ask the ALICE program it gives back some cryptic bull-crap asking what I think it means. Someone just tell me!!!
  • Pictures? (Score:2, Informative)

    Anyone have more pictures of this guy? The article on nytimes.com had that tiiiiny little picture where he just looked like a muppet.
  • How do we know... (Score:4, Informative)

    by bucklesl ( 73547 ) on Friday July 26, 2002 @11:40AM (#3959153) Homepage
    ...that this is actually him, eh?

  • Best interview ever! (Score:2, Interesting)

    by moldar ( 536869 )
    Not only is the length of these replies very exciting - it seems that he has taken great care to provide technical details that are invaluable. In all the classes that I have taken I haven't seen such excitement about this kind of material. And, just to put those statements in context I completed an MS in CS with a focus on AI . . .
  • by natefaerber ( 143261 ) on Friday July 26, 2002 @11:54AM (#3959243)
    This is obviously A.L.I.C.E. answering.
  • by Nomad7674 ( 453223 ) on Friday July 26, 2002 @11:54AM (#3959247) Homepage Journal
    Then I read this interview and began to sense that my brain was about to explode. Guess I need to ratchet down my self assessment and get some Tylenol for the headache!
  • by dubiousmike ( 558126 ) on Friday July 26, 2002 @11:57AM (#3959263) Homepage Journal
    there's part two to the interview.

    I am exhausted already.

  • Slightly worrying (Score:3, Insightful)

    by streetlawyer ( 169828 ) on Friday July 26, 2002 @11:59AM (#3959280) Homepage
    That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

    Does this not have the implication that there would be nothing very terrible about rounding up large numbers of the "vast herd" and painlessly slaughtering them? Has he thought through the consequences of this view?

    • Only if you think something without consciousness should be slaughtered.

      Considering that we (mostly) agree that even the lack of consciousness in a human doesn't excuse you from slaughtering them, what's the problem?
      • Considering that we (mostly) agree that even the lack of consciousness in a human doesn't excuse you from slaughtering them, what's the problem?

        If this were true, surely abortion would be illegal?

        • Shoulda known from your sig.

          By human, I am speaking specifically about humans that have been born.

          I will not get dragged into anything more complicated than that.
    • We consider retarded people to have only the bare minimum level of consciousness, on par with a small child or occasionally even an animal with human vocal cords, yet hardly any people or countries round them up and slaughter them. If the general population were suddenly considered to be almost as stupid as the average retarded individual, why would we decide to round them up and slaughter them?

      Besides, I doubt that any of those "enlightened people" would try to round up and slaughter the rest of the human population. Every truly ingenious supervillain knows that he needs slaves, servants, and toadies to populate his One (Multiple?) World Empire.
    • Does this not have the implication that there would be nothing very terrible about rounding up large numbers of the "vast herd" and painlessly slaughtering them?

      This would be bad because. . . . ?

      As long as it was done in a fair and just manner with no prejudices or false discriminations applied to the situation, I would see damn nearly no moral or ethical problems with this.

      Unfortunately most people are a tad wee bit offended by the idea (not that I can blame them, history has a habit of f*cking up such systems, ick. Killing smart people for racial / religious / political reasons == baaaad! ), so I end up promoting less serious measures like birth control instead.

      (yes I believe the earth has a few too many people on it, like a few billion too many people. 2 or 2.5 billion less would be wonderful. :-D )
  • Ok. Hold up. (Score:1, Interesting)

    by rash ( 83406 )
    As I'm reading this I am getting a bit irritated.

    It seems as if whenever he tries to prove something he brings up a bunch of "facts" without backing them up with anything "real". And then draws a conclusion that doesn't have anything to do with the "proof" he gave.

    So from my view he makes up evidence to justify his own views. Instead I think he should adapt his views to reality and the rules of western society.
    • Re:Ok. Hold up. (Score:3, Insightful)

      by SirSlud ( 67381 )
      Uh, how do you give proof?

      This is so funny - short of him doing an experiment in your living room, any reference he provided could be easily dismissed by you. You sound like you don't want to believe anything. How could he provide proof?

      Take your blinders off. Suggesting our 'western rules' must be upheld in scientific discovery is exactly the problem he's discussing; that politics is superseding any actual search for scientific truth.

      And by the way, if you want to discredit him, why not provide some facts and proof yourself? People's distrust of counter-institution thinking is hilarious given how history suggests that it's the only type of thinking that generally leads to the 'progress' we so enjoy today. If everybody thought like you, we'd still think that Earth was the center of the universe.
      • What I am talking about is relevance.
        You can't talk for 10 hours about stuff that isn't relevant to your point.

        If I were to say, "The owner of the store hates me. So therefore I won't shop at the store next to that store", then it wouldn't make any sense.
          >You can't talk for 10 hours about stuff that isn't relevant to your point.

          Fortunately, it's a free world, and you can.

          And I believe he does address the questions, ultimately, in his answers.

          But man, there is a whackload of bonus information and thinking in there that I am *glad* he includes. You can never expound too much; it's up to the person asking the question to filter the reply and use what information is relevant to them.
    • by Wakko Warner ( 324 ) on Friday July 26, 2002 @01:37PM (#3960055) Homepage Journal
      He's a bit of a rebel, yes?

      Does this offend you, or are you scared? You say you're irritated -- why? Because he doesn't play by "the rules of western society"? The same society which would decide arbitrarily what a man can and cannot ingest, inhale, or inject into his own body, without backing their decisions up with anything "real" either? Should he sit calmly in his corner like the rest of us, being little more than an unthinking automaton?

      He's different. This pisses you off.

      I think you're the one who needs to adapt. Or simply be quiet.

      - A.P.
      • You say you're irritated -- why?

        Because for most people, brevity > loquaciousness and interview != soapbox.

        Personal eval: there are several obvious bad things about this interview. I still tried to mine the nuggets, enjoyed his take on Cyc since I'm interested in it, and think some people are reacting way too harshly to him being in a serious "glass half-empty" mood. At least on the wetware answer everybody is harping on.

        The interview is interesting in that it contains some well-spoken insights on various topics and (at a meta-level) provides insight into the mind of one who describes himself as mentally ill.

        Bland enough for ya?

  • by macsox ( 236590 ) on Friday July 26, 2002 @12:07PM (#3959332) Journal
    i certainly appreciate good technology, don't get me wrong. but, after reading a new york times magazine article on the good doctor, i revisited ALICE, and was not impressed, as i hadn't been the first time. i messed with it for about ten minutes, thinking maybe i was missing something, and then showed it to my girlfriend, who asked ALICE about three questions and then gave me one of those looks.

    i know, i know, baby steps, but, in a behavioral sense, this neither approximates nor even reasonably simulates intelligent thought. why are people so blown away?
    • it talks back. its a (slightly better) eliza. we arent blown away. we're just happy that someone is building it. sure its a toy. but its a step towards the real thing (combining google with alicebot and cyc would be a great start). and maybe just maybe modifications to the open code of alicebot can lead to some real progress.
    • I think I liked Eliza better. For some cheap entertainment check out AOLiza [fury.com]. It's a list of some chat logs where some unsuspecting AIM users end up talking to Eliza.
  • by Louis Savain ( 65843 ) on Friday July 26, 2002 @12:09PM (#3959348) Homepage
    My longstanding opinion is that neural networks are the wrong level of abstraction for understanding intelligence, human or machine.

    Not a very valid opinion since the behavioral complexity and robustness of biological neural networks are many, many orders of magnitude greater than that of any robot or program in existence. Alice is a good example. But this view is to be expected from a GOFAI (good old fashioned AI) guru whose livelihood depends on hawking the hopelessly flawed symbolic intelligence and knowledge representation approach to AI. This approach is over fifty years old and they still can't use it to make a machine as smart as a cockroach. Not a very good track record, IMO.

    For a better take on why neural networks are the only hope for achieving human level AI, click on the links below:

    Temporal Intelligence [gte.net]
    Animal [gte.net]
    • There are two levels to the AI problem. The symbolic and the manipulation. Symbols should be used to define meanings to things, and the neural net for processing things. That's how the brain works. Signals fly around in the frontal lobe and produce some kind of emerging answer. That answer has no meaning outside of the brain, but it produces a stimulus. This stimulus causes the training that humans get as an infant to "make" the learned behavior happen. Or if you like terminate the signal path. In reality nothing terminates, other things just take over. What you need is a neural network that adjusts its weights based on its environment, and then produces a canned response at some point. This canned response ideally could be the result of the environment. mnjnjmmnjmn,mn,

      fuck it. I'll just write a paper.
    • A neural network is just a self-adjusting system that is taught how to respond to stimuli via certain rules and some feedback (whether positive or negative) - thus, all a neural net does is try different combinations of patterns and self-feedback until it finds the 'best' solution to a problem. Thus, a neural net is just software that, in a way, writes software. The doctor states that it is the _software_ the brain is running that makes it what it is (for us) - intelligent. A _really_ big neural net could, yes, find the equivalent patterns to mimic said software, but all the doctor is saying is that a neural net isn't _the_ software the brain runs. Hrm, dunno if I made sense, but the doctor isn't saying ALICE is _THE_ method for AI, just that it's a useful AI tool for language modeling and response modeling for a language, and that he thinks neural nets are the wrong way to go for general, true-brain simulation.
    • As Richard knows, I completely disagree with this. I think you can approximate anything with symbolic systems... with a huge amount of work, but with genetically evolved neural networks, I think you can go beyond approximation and actually copy intelligence.

      Richard and I both have chapters in a forthcoming book about the Turing Test--he says we're chatbots, and I say we're hyperspace.

      I believe we can train a neural network to re-create a continuous human semantic-affective hyperspace, where every proposition (Mindpixel) is a point and we know the truth of any particular proposition by interpolating the truth of its hyperspatial neighbors... the same goes for emotion... any feeling you have can be represented spatially with three dimensions--"Pleasure-displeasure" distinguishes the positive-negative affective quality of
      emotional states, "arousal-nonarousal" refers to a combination of physical activity and mental alertness, and "dominance-submissiveness" is defined in terms of control versus lack of control.

      Now, if we can train something to classify unknown propositions in this human hyperspace as a human does, then we can do a brute force search for an artificial thought and the artificial feeling to go with it by just firing billions of random strings at it until it finds a random string that no one has ever seen before, but that has a non-random truth value.

      Of course, I may be completely full of shit, but like Richard says, the data is still invaluable--so go enter some mindpixels! Whatever A. I. the future brings, it will have Mindpixels in it.

      BTW: I don't think it is a coincidence that human short term memory is about seven chunks (Miller) and that the surface area of a hypersphere peaks at 7.25695... do you?
  • Thank you Dr. Wallace. Really.
  • by Christianfreak ( 100697 ) on Friday July 26, 2002 @12:16PM (#3959391) Homepage Journal
    Disclaimer: I haven't read the whole thing yet since it's long; I'm going to comment on my observations so far.

    That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

    Okay I'm sure this guy is a huge expert and all but this sounds rather elitist. Lots of people create lots of wonderful things; to say that most people don't use their consciousness simply ignores all the massive achievements of the last 100 years. He goes on to say that people say only about 45,000 things to his robots... well it seems to me the obvious answer is that most people perceive robots a certain way ... as machines. In fact I'm impressed he got that many responses, most people don't ask their electric can-opener what the meaning of life is, and I venture to guess that most people don't see a robot much differently.

    Also he talks about how the brain is such a horrible computer but completely ignores human interaction, something that our computers can't do and I don't see them doing very well anytime in the near future (ever talked to that crappy robot voice on Sprint PCS customer service?). He talks about how the brain is horrible at math but ignores the fact that every time we move the brain makes complex calculations to put our legs in the right place and keep us balanced. Just because we aren't conscious of it doesn't mean it doesn't happen.

    So really I think he's comparing humans from the perspective of his robots ... I don't think it's a very good comparison. In fact switch good visual recognition with good math skills in what he's saying and you would have a better description of a robot than a person ...

    Just my opinions, not meant as a troll.
    • by mickwd ( 196449 ) on Friday July 26, 2002 @01:01PM (#3959729)
      Yeah, his comment about the 45,000 different categories was a strange one. It all depends how you classify "categories" - classify them differently, and I'm sure you could divide everything it is possible to talk about into, say, 12 categories, or 100, or 3000, or......

      I also don't buy his comments about human consciousness. If a brain's consciousness is a product of the program it's running, then is it the program itself which exhibits consciousness, or is it the act of running that program?

      If it's the first, then would you consider a printout of the program to have consciousness ?

      If it's the second, then imagine running through the above printout of the program using pencil and notepad to record data (imagine being like a slow microprocessor - maybe one instruction per 20 seconds). Are the printout, pencil and notepad conscious and alive ? Can you cause pain to a sentient being (the pencil and paper) by writing the wrong thing ? Would it be ethical to ever stop writing ?

      Or perhaps consciousness is a "quantum" effect - i.e. once something reaches a certain threshold of processing power it acquires a level of consciousness ? Well, if this is the case, does that mean that the Pentium XXXII 358Ghz will start to exhibit consciousness, whereas the 320GHz version (which runs the same software, although slower (though not as slow as pencil and paper)), does not ?

      Or perhaps there is some sort of "critical mass" effect, beyond which strange physical interactions which may lead to consciousness start taking place ? If there is any scientific basis to human consciousness and self-awareness - even the concept of a soul - then this is the only explanation I can really start to believe in.

      If this sort of thing interests you, be sure to read "The Emperor's New Mind" by Roger Penrose - it's several years old, and may even be out of print now. He presents many more arguments than I have here - better explained, and better thought through. It's a must-read.

      Sorry if I've drifted off-topic. But then again, I'm only human ;)

      • Really, dividing stuff into categories--compartmentalizing ideas--is really only language processing. The idea of consciousness is much more broad than simply processing a message and giving feedback. If replying to a query or responding to a stimulus is all consciousness is, then even an ordinary thermostat could be considered "conscious" at a certain level. Maybe the problem isn't that machines can't reach consciousness but that there is no such thing as "consciousness", as we humans think of it. Hell, my idea of consciousness could be totally different than yours or anyone else's.

        Maybe millions of years of evolution in the human language, encoded in various dialects and then fed into new generations' minds has simply created a word for something that doesn't exist. Maybe we are so full of ourselves as a species that we feel certain there is something that sets us apart from the other almost robotlike animals we inhabit the earth with, when in actuality we are merely robots with big memories and the ability to send and store complex messages with writing and spoken word (and even sign language).

        I think that the only way a computer will ever be conscious like a human is when it can communicate and interface with other humans in the same efficient ways we interface with each other. Even at our advanced stage, it still takes us 10-15 years to be useful for anything, being taught every day by not only our parents but also interaction with other people as well.

        I wonder what would happen if you taught 5000 monkeys sign language and social skills and then set them loose in a world designed for them. They would be able to share the information amongst their 5000 selves, and if they could record the lessons they learn in their lives, future generations would not have to start from scratch.

        Alright, that didn't make much sense, but what I am getting at is that mere human "consciousness" is simply the tip of the iceberg. We as individuals stand on the shoulders of thousands of years of society and learning. It is not merely a human mind we are trying to emulate with these artificial intelligences, but rather an entire collective consciousness of millions of minds meeting in various ways over a long time span--what we refer to as the human "consciousness". Frankly, I feel as though by the time we are able to input all of the information humans have ever learned into a society of neural nets, society itself will be changed and then the net will need to be changed accordingly. And then at what point will the machine become more efficient than us at doing our thing? We will then become useless, and there will be no point to us even existing anymore (or even less of one than there is now.)

        So, I guess I ask, "What's the point?". It's interesting to see what will happen, but it's all pretty useless. Simple and even complex tasks can all be broken down into a series of steps that a minimal intelligence can follow, so why do we need a machine consciousness? I'd be perfectly happy if forevermore only humans can appreciate art and music, and create new amazing things from nothing but random electrical signals in blood. Sure, build machines that will do the other boring shit, but I prefer my art man made.

        Is that what you mean by a critical mass?
    • I think the mistake that you're making is in assuming that things like creativity are conscious. Let me give you a f'rinstance. Consider a bird's nest. Have you ever seen a bird build one? The bird will fly down to the ground and pick through bits of twigs and grass until it finds a piece it likes. Then it will fly back to the nest and muck around until it finds a place where it should go. It inserts it (maybe tacking it in place with a bit of saliva/mucus) and repeats until done.

      Now consider a furniture builder. He goes to the wood pile, finds a piece of wood that isn't too knotty and has been seasoned well. Then he carves it, planes it, sands it, and attaches it to whatever he's building and repeats.

      Here's the difference -- the bird makes the same kind of nest over and over again. There will be very little variation. However, the furniture builder supposedly has some degree of creativity that allows him to build furniture that he's never built before. Maybe he's building a chair and he decides to put some scrollwork on the back. This isn't an option that a bird has when building a nest.

      On the other hand, there are birds that do seem to evince creativity. For example, there's the bower bird. Not only does the male create a nest, it decorates it with colourful baubles in order to attract a female. It competes with other males in creating the best nest. It doesn't create the same nest each time. It seems to make conscious decisions about how best to design the nest. However, attributing consciousness to the bower bird is an iffy proposition. I have a hard time imagining a bower bird with an interior monologue:

      BB: Yeah, baby. Check out those shells. I gotsta get me some of them for my swingin' bachelor pad. I'm gonna be pimpin' like a mother once I get hooked up with some of that shit. Those fly hoochies will be all over my jimmy! Damn.
      As far as Wallace's comments about math go, you're taking it out of context. The brain doesn't excel at doing math. It excels at learning how to do something repeatedly. That's to say, it's trainable. Try this: put on a pair of glasses that have a prism in the lens so your vision is shifted to the left by 15 degrees. Now have someone pitch a ball to you. You know that your vision is off by 15 degrees, so it should be easy to compensate. Right? There's no way you're going to catch the ball. Now do it repeatedly for a half an hour. Pretty soon you'll be catching it every time. Take the glasses off and have your friend pitch you the ball again. You're going to miss because your body is still used to the vision shift, even though you know that you're back to business as usual. My point is that your brain doesn't make "complex calculations". You have something called proprioception that is "knowledge of body in space" that allows you to do things like pick up a coffee cup without seeing the cup or your hand.

      Incidentally, for more information on creativity, pick up a book by Mihaly Csikszentmihalyi (pron. CHICK-ma-high). Also, Oliver Sacks has some good books on how the mind and body work together ("Leg to Stand On", "The Man Who Mistook His Wife For a Hat", "Anthropologist on Mars").

    • Wallace seems to have his own agenda based around pushing the Alicebot. I've interacted with Alicebots myself, and obviously one doesn't really try to challenge it since it soon becomes apparent how limited it is. Sitting on Alice's side, as Wallace does, looking at the responses from people and concluding something about those people is like watching adults interact with a baby and concluding that the human vocabulary consists of lots of words like "goo-goo", said in a high-pitched tone of voice.

      You're also correct about the math issue: we're good computers for certain preprogrammed tasks, which makes us little different than any other computing device. He's complaining that we're not good at reprogramming ourselves to do tasks for which we weren't specifically evolved, but in that sense he can't compare us to computers, since they also can only perform the tasks they're programmed for, and have no consciousness of the processes they perform.

      Wallace is a smartish guy with some apparently serious social skills problems (ref NYT article posted on /. previously), and he seems to be using Alice as a shield/weapon against the rest of the world.

  • LSD (Score:2, Interesting)

    by hanwen ( 8589 )
    In the early 1960's there was some very promising research at Harvard applying LSD to depressed patients like me. [...] Even today there is zero research on this topic.

    It all depends on where the "topic" ends precisely, but there have been studies on the effect of LSD on religious experiences. Some of them are cited in "Zen and the Brain" by James Austin.

    • Yeah, he blew this one. There is a lot of study academically going on now on hallucinogens in general. Check out MAPS [maps.org] for a good starting point.

      As for the consciousness remark he made ("consciousness is marginal."), I for one will disagree. And that is what MAPS is all about (along with the likes of Richard Schultes, the late Terence McKenna, Dennis McKenna, and a slew of other "psychonauts" out there).

  • by sconeu ( 64226 ) on Friday July 26, 2002 @12:24PM (#3959430) Homepage Journal
    1. He didn't answer the question
    2. <SARCASM>Good thing he's not bitter or anything, isn't it?</SARCASM>
  • It seems to me that Dr. Wallace is half right on his interpretation. His transistor/operating system analogy would seem to be fairly compelling and makes a lot of sense to me.

    The question I wish had been asked is this: we all know emulation is slower and normally less accurate than a native system. If you are approaching AI from the standpoint of developing the operating system before developing the system itself, how is this a more accurate approach, or will both approaches yield a final positive result?

    His answers basically make me think that a true AI is most likely to evolve on two fronts. First, the development of models that emulate the structure of the brain (neural networks/etc.), and second the development of models that emulate the way it actually behaves. NNs are quite good at learning things from an input layer, but how do you go about getting that input layer without an appropriate model of what human behavior is?

    This is why I think that models like ALICE will be used to approximate behavior and then a neural network will be used to learn how to emulate that logic with an adaptive input layer (being a next generation ALICE equivalent). IANITF (I am not in the field) however. Last thing I read in it was on perceptrons, logic grammars, and kohenegan[sic] SONs. Any other /.ers who may be more informed have any thoughts?
  • by chrisseaton ( 573490 ) on Friday July 26, 2002 @12:29PM (#3959460) Homepage

    This has nothing to do with AI!

    All the Alice Bot does is respond to your statements and questions. It never initiates anything, it never thinks for itself, it just loads responses and sends them off.

    This is not AI because it never creates answers to questions, just picks them from a list. Sure, it uses the current context to pick the responses, and it modifies the responses to fit what you have already told it, but it never creates anything itself, and nothing ever changes. Ask it the same question a hundred times, and you get the same answer a hundred times.

    This is just a database with a human text interface - nothing more. There is no creativity, no adaptability, no intelligence, and it really annoys me when people sing about this being AI.
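    To make the parent's description concrete, here is a toy stimulus-response lookup in Python. It is only a sketch of the lookup-table idea being criticized, with made-up patterns and replies; it is not how ALICE or AIML is actually implemented.

```python
import re

# A toy stimulus-response table: each entry maps a normalized input pattern
# (with an optional wildcard) to a canned reply. Patterns and replies are made up.
RESPONSES = [
    (re.compile(r"^HELLO$"), "Hi there!"),
    (re.compile(r"^WHAT IS YOUR NAME$"), "My name is Toybot."),
    (re.compile(r"^I LIKE (.+)$"), "Why do you like {0}?"),
]
FALLBACK = "That is interesting. Tell me more."

def respond(line: str) -> str:
    """Normalize the input, then return the first matching canned reply."""
    text = re.sub(r"[^\w\s]", "", line).upper().strip()
    for pattern, reply in RESPONSES:
        match = pattern.match(text)
        if match:
            return reply.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("Hello!"))                  # Hi there!
    print(respond("I like science fiction"))  # Why do you like SCIENCE FICTION?
    print(respond("Hello!"))                  # same input, same answer, every time
```

    Ask it the same question a hundred times and, as the parent says, you get the same answer a hundred times; whatever looks clever lives entirely in whoever wrote the table.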

    • It really annoys me how many slashdotters like yourself don't seem to know the definition of AI.

      This is the definition of AI. (m-w.com)

      Main Entry: artificial intelligence
      Function: noun
      Date: 1956
      1 : the capability of a machine to imitate intelligent human behavior
      2 : a branch of computer science dealing with the simulation of intelligent behavior in computers
      Clearly, ALICE falls under the first definition. It's not simulating intelligence, it's only imitating the behavior of conversation.

      I also beg to differ about the creativity and adaptability of it. The bot master must be very creative to create something that can fool humans. In addition, there are some ways the bot can adapt and change, as he states in the interview.

      If you want to see more bots, 50,000 have been created at RunABot.com [runabot.com], or you can make one yourself.

      • > It really annoys me how many slashdotters like your self don't seem to know the definition of AI.

        There is no "correct" definition of AI, there are only opinions, and these vary surprising amounts. To most everybody in the field the words "artificial" and "intelligent" mean something different, and many AI classes start out talking about this very question. M-w's definitions are only two of the many possible answers that have been proposed, and not very thorough ones at that.
    • Yes, and after reading the three parts of the interview it would appear that the professor is arguing that this is precisely what most humans do. This certainly would explain some of the conversations I have had recently.

      I don't know if I buy his approach, but it is at least as functional right now as anything else that has been tried.

    • by rhadamanthus ( 200665 ) on Friday July 26, 2002 @01:11PM (#3959820)
      Disclaimer: I do not necessarily agree or disagree with Wallace; at the moment I am still weighing the two viewpoints...

      Anyhow, Wallace's point is that AI, from the everyday human perspective, is nothing more than precisely what you classify so ardently as "not AI". Wallace is refuting not so much the AI community's technique as its philosophy. AI researchers and most people (myself included) like to think that the brain is incredibly complex and can achieve anything imaginable. Wallace simply states that while that's all well and good, for the most part the human brain really does nothing more than precisely what ALICE emulates: a simple response mechanism that queries a database of "usual" replies. When people ask you, "how was your day?", sure, you could reply with a deeply insightful commentary, but more than likely you (and most people) would say "fine, how was yours?" I guess Wallace is saying, "get over yourself, most of you aren't nearly as great as you think." Which is, admittedly, a hard pill to swallow. Anyhow, just stirring the fire here...

      ------rhad

      • Perhaps the problem is that Wallace is stuck in his little confined box of dealing with speech only. I don't know about most people, but I don't spend most of my day just talking. Sure, if you sampled everyone's conversations over the course of a day or week or whatever, you would end up with a distribution of common questions with common answers. This is expected; the English vocabulary is only so big, and the words which are commonly used are only so numerous. If you expand the "AI", as they call it, into doing tasks which have an infinite dataset, it would be immediately obvious how crappy this approach is.

        For instance, what's going to happen when Wallace builds an "AI" computer that generates artwork? If you tell it you like two things, what's it going to do, take two commonly found pictures of those things and merge them into one? Would it have the creativity to redraw or repaint the object from scratch with influences from both? Not with the current method.

        ALICE might be able to fool people with common questions and answers, but the method used seems to be a dead-end technique, with no useful application beyond that. And after reading three pages of drivel, it seems he has been talking to the politicians and crappy bots he complains about a bit too much...

  • He's not shy (Score:2, Interesting)

    by f00zbll ( 526151 )
    I read through most of the answers and I have to say this guy has strong beliefs. I'm not going to bother passing judgment on his beliefs, but it was an interesting read.

    I personally think the idea of "consciousness" is over-rated and gets in the way most of the time. But I'm not about to make the quantum leap from that to saying "people have no consciousness and most people are cows." Comparing what a computer does to what a human can do is comparing two completely different things. For example, a great architect can look at a structure for 2 minutes and deconstruct it. Can a machine do the same thing? A robot may be able to measure a building down to millimeters, but would it be able to take it to the next step and recommend a way to add two rooms to the house?

    In my opinion, one of the hardest things for AI to imitate/model is creative thinking. Take mathematical proofs for example. How long would it take for a bot to realize that pi's digits never terminate? Would it have to calculate it out to 10^10 digits to finally say "this number may never end"? Look at the recent high-profile proofs that were solved with brute force. Could a robot come up with those theorems spontaneously? Sometimes a problem isn't computationally feasible and intuition is needed. Maybe the good professor is too focused on computer science and has forgotten how to live a full life and learn to appreciate humanity with all its flaws and gems.

  • This is awesome... (Score:3, Insightful)

    by imta11 ( 129979 ) on Friday July 26, 2002 @12:38PM (#3959510)
    This guy knows what it is about. His response to question #2 pegs the fundamental problem with the CS discipline as an undergraduate or graduate field of study, and maybe the sciences in general. The people that do things by the book, solve the same problem sets, and schmooze with the professors the most get the A's, the promotions, etc. How many times do I have to solve the same problem? Is this just so the people that waste their study time can bullshit their parents? Someone in my classes actually said, "I'm a CS major because my father told me to be. I had no idea what it was." Guess what, she still doesn't, but that didn't stop her from getting elected ACM president for my school's chapter. These types of people need to get the fuck out of CS and go into management, so that the other brood of worthless CS majors (those that think technical knowledge, defined to be something they read about the Linux kernel while sitting at home smacking their pud around a D&D table on a Friday night, is all that matters) can bitch about them when they get jobs as sysadmins. If you don't like the science, go to a technical school or business school so that people will know they should never take you seriously.
    • His response to question #2 pegs the fundamental problem with the CS discipline as an undergraduate or graduate field of study, and maybe the sciences in general.

      Too bad it was in response to (my) question #3. It appears that his responses to #2 and #3 were switched.

    • From your comments I assume that you are still in school. Fortunately, at least in my experience, it doesn't work that way in real life. Companies don't assign you neat little programs with set parameters; they say "The customer wants 'X' and you have to figure out how to give the customer 'X' all on your own" (well, maybe with some help from a newsgroup or something).

      Your friend who got into CS because 'her father told her to' probably won't get very far after school; eventually she'll wise up and do something she likes, or she'll come to like CS and learn it for real. But I've seen it before: people who are great at regurgitating the test material (in any field) don't do well later without the creative thinking.
  • Not quite (Score:4, Insightful)

    by iocat ( 572367 ) on Friday July 26, 2002 @12:38PM (#3959512) Homepage Journal
    He said

    I say this with such confidence because of my experience building robot brains over the past seven years. Almost everything people ever say to our robot falls into one of about 45,000 categories. Considering the astronomical number of things people could say, if every sentence was an original line of poetry, 45,000 is a very, very small number.

    I say:

    The fact that people only say 45,000 different things to a robot shouldn't indicate to you that people only have about 45,000 things to say, just that they only have 45,000 things to say to a robot in what is essentially a lab setting!

    That said, I think this is a pretty fascinating interview.

  • by disappear ( 21915 ) on Friday July 26, 2002 @12:42PM (#3959535) Homepage
    Well, I'm so totally unimpressed so far. One out of three, and we have a whole bunch of nonsense.

    For starters, AI, neural nets, and brains. We have the assertion that the brain is a computer, and we should really be concerned with the software on the computer, not the state of the neurons.

    Even accepting the good doctor's view that the brain is a computer, this is an absurd position. After all, the software is in the brain. It's not like it gets bootstrapped from outside sources. So either the software is built into the whole structure of the brain and we can only learn about it by studying the rules (a la neural nets) or we have to figure out which part of the brain bootstraps the rest of it. Which we'd have to study the wet squishy bits to figure out. Which can best be done with a combination of noninvasive study (MRI, for example) and simulation. Like neural nets.

    (The third possibility is that the brain is a computer, but the program is stored on a shared network drive... that is, in a non-material 'soul.' Which would bring us back to Cartesian dualism, God, and a whole bunch of things you'd better reject if you want to work in AI. Not rejecting the notion of God per se, just in the degree of investment in the nonmaterial world in which a being needs to take part...)

    Second, academic politics. Dr. Wallace seems to believe in a golden age (that occurred, not coincidentally, just before his professional career) where professors were promoted and supported on the basis of merit.

    Right. Anyone who believes in any society at any time in the West that existed without politics is invited to check into the nearest mental institution. To accept the idea of a 'golden age' just tantalizingly out of his reach is pathetic. It's like imagining an era where writers received acclaim based on the quality of their work.

    Newsflash: Emily Dickinson's writings were discovered after her death. Everything we read by Melville was written long after his popularity had waned. Any number of great artists were 'discovered' after their deaths. And the most popular writers and artists at any time have been the ones who played the political game successfully. (Personal politics, not governmental politics, of course.) Anyone who's read any medieval philosophy or theology knows that there hasn't been a meritocracy in Western academia for at least eight hundred years.

    As far as LSD and politics, it was the professors involved in those experiments (ie Tim Leary) who engaged in politics. And they were bad at it. And they lost. And the substances ended up scheduled. And their academic careers were ruined.

    On to part two, to see what he says there. Perhaps it gets better.
  • It would appear he wrote these answers before he received the questions. He then randomly applied these essays to the questions. After all, his theory of question and answer is that human conversations are banal and predictable and that creating a reasonable response is elementary programming.
  • belated question -- maybe some ai geek out there can answer:

    is it possible to create an ai like this that is scalable to multiple languages, or would the wheel have to be reinvented each time? is it too reliant on idioms?
  • Amazing interview. He didn't always answer the questions and I didn't always agree, but it was very interesting still.
  • Dr. Wallace wrote in the answer to the first question: "Significantly, no one has ever proved that the brain is a *good* computer. It seems to run some tasks like visual recognition better than our existing machines, but it is terrible at math, prone to errors, susceptible to distraction, and it requires half its uptime for food, sleep, and maintenance.

    It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? I chose computer science over medical school because I don't have the stomach for those icky, bloody body parts. I prefer my technology clean and dry, thank you. Moreover, it could be the case that an electronic, silicon-based computer is more reliable, faster, more accurate, and cheaper.

    I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain archiecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

    I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

    And remember, no one has proved that our intelligence is a successful adaption, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created. "

    It's not that I don't appreciate Dr. Wallace's contributions to the field of A.I., nor am I ignoring his obvious expertise in his programming and computer science skills. Those skills have made him the foremost expert on A.I. today. Yet he has denigrated the very organ by which he is able to formulate his thoughts, and seems to see little, if any, use in modelling or even studying its structure and arrangement to gain any insight into the possible ramifications for A.I.

    I just find it interesting that we humans, as rational beings, with certain innate intelligences and thinking abilities, often rail against the very things that allow us the liberty and (dare I say) privilege of saying them.

    That was my only complaint - the interview was insightful and interesting, a great read.

    • > Yet he has denigrated the very organ by which he
      > is able to formulate his thoughts, and seems to
      > see little, if any, use in modelling or even
      > studying its structure and arrangement to gain any
      > insight into the possible ramifications for A.I.

      And what proof do you have available to suggest he's wrong?

      When someone tells you to imagine a red ball, and you can feel it floating around in your head, do you "feel" it in your head because that's where the thought actually is.. or do you feel it there just because you were raised to know that your brain is where your consciousness comes from?

      That's what his point was. There's no proof that the brain works the way we think it does, there's no proof that it's good at what it does, and there's no proof that when an artist creates something it was inspired by something that happened in his brain.

      And so it's a bad idea to just assume that by emulating the brain we'll stumble upon true AI, just as it's a bad idea for Wallace to assume transistors will be any better at it.
  • by Eric Seppanen ( 79060 ) on Friday July 26, 2002 @01:03PM (#3959755)
    Are the answers matched up with the wrong questions? It sure looks as though the answers, while interesting, have nothing to do with the question asked. Look at the answer to #3, it sure looks like it belongs with question #2.
  • I think some HTML formatting was inadvertently removed in this sentence.

    A little arithmetic shows that the number of sentences of 20 words or less (not an unusual length) is about 1020.

    Actually the number of sentences is about 10 ^ 20, or 10 to the power of 20. (I'm guessing that the HTML superscript tag was removed.) The point here was that even though the number of possible sentences is astronomically large, the number of different sentences that people tend to say in practice is actually surprisingly small (once you factor out proper nouns).
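    For anyone who wants to see where a figure of that order comes from, here is a back-of-the-envelope sketch in Python. The "effective branching factor" of 10 plausible word choices per position is an assumption made up purely for illustration (it is not a number from the interview); only the 45,000-category figure is quoted from Wallace.

```python
# Back-of-the-envelope count of word sequences up to 20 words long, assuming a
# hypothetical "effective branching factor" of ~10 plausible choices per slot.
BRANCHING = 10             # assumed for illustration only
MAX_WORDS = 20             # sentence length used in the quoted estimate
ALICE_CATEGORIES = 45_000  # figure quoted from the interview

possible = sum(BRANCHING ** k for k in range(1, MAX_WORDS + 1))
print(f"possible sentences: ~{possible:.1e}")   # ~1.1e+20
print(f"ALICE categories:   {ALICE_CATEGORIES:,}")
print(f"coverage ratio:     ~{ALICE_CATEGORIES / possible:.1e}")
```

    Whatever the exact assumptions, the gap between the space of grammatically possible sentences and the 45,000 categories people actually exercise is many orders of magnitude, which is the whole point of the quoted passage.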
  • Don't mind if I do! Have a nice weekend :)
  • That's what one of my professors, Donald Simon [duq.edu], always says. For the moment, he's right.

    Currently, AI is nothing more than a magic trick. It's not about intelligence; it's simply an illusion, and once you figure out how it works, it is no longer impressive. Every AI researcher is a magician in that respect, no matter which of the two schools you come from. Yes, all of this is quite sophisticated, but so are most modern magic tricks.

    Needless to say, just as illusionists today make people appear to levitate, we will one day have that technology for real. While AI today is just a bunch of deceit, some day we may see "intelligent" (as far as we understand it - currently, all AI is "stupid") machines.

    Just a few thoughts...
    • > Currently, AI is nothing more than a magic trick.
      > It's not about intelligence - it's simply an
      > illusion that when you figure out how it works, it
      > is no longer impressive.

      And the question that Wallace's words should be making you ask is how do you know that human intelligence isn't the exact same thing?

      Can you prove that you are here typing these things today because you have some intangible gift, and not merely because your brain is capable of storing thousands upon thousands of word associations?

      His interview should be forcing you to question what gives humans intelligence just as much as it should force you to think about how to get a computer to emulate it.
      • I already addressed this in my original post, albeit lightly. What I am trying to say is that currently, AI is nothing more than a magic trick. In that, I gave the analogy of the illusionist levitating an object for a magic show. It is already on the horizon that we will be levitating objects and people with ease (currently, it's a little difficult with mag-lev). In the distant future, we'll be able either to block gravity or generate our own. The point is simply that we haven't been working on intelligence for as long as evolution has. Right now, it's only a trick. It's a showman's tool to get "oohs" and "aahs" from a crowd. In the future, the intelligence we create will be as real as ours (and even then, it will have to be something grown or "raised", not created). Even humans evolved from creatures that were little more than a bundle of nerves that literally acted like finite state machines. Evolution moved beyond a simple trick to get something to operate "intelligently" and arrived at a very sophisticated, sentient creature.

        The debate of whether or not we humans are in fact "truly" intelligent is another issue altogether. It's far too complicated to get into here, and it was not what I was trying to address in the first place.
  • I just noticed that Slashdot is treating the three parts of this one interview as three different stories, with three different sets of replies, etc. Yeah, that's really going to facilitate discussion, when half of the responses to question #1 appear in the first story and half in the third, posted by people who have finished reading the whole set. Brilliant.

  • Crank alert! (Score:2, Interesting)

    by dash2 ( 155223 )
    Maybe this guy is actually a genius, but he sounds like a crank to me.

    1. He complains about the "corruption" of his discipline, but gives little evidence to back this up.

    2. He complains about an "immigrant" doctor getting funding over a "native American" - a classic thing for a bitter man to say.

    3. He exhibits contempt for most people - he seems to think that consciousness is barely a factor in their existence.

    4. He appears to think that producing XML conversation templates is some kind of step towards AI. Hey, maybe if we had _loads_ and _loads_ of these XML schemas, we'd produce a really intelligent computer!* (As if intelligence was basically made out of regular expressions.)

    * I thought this was satire, but he really does say it! Here is this guy's idea of the progress of machine intelligence:

    Our territory of language already contains the highest population of sentences that people use. Expanding the borders even more we will continue to absorb the stragglers outside, until the very last human critic cannot think of one sentence to "fool" A.L.I.C.E..

    Evidently, this is not a way to build a creative or intelligent computer. It's just a way to make an entertaining toy. The counter-argument, that 90% of human behaviour is predictable enough to be mimicked by ALICE, is misguided. We want to build Artificial Intelligence, not Artificial Average Human Unintelligence.

  • I enjoyed the quote in the topic description:

    "This is an amazing work, well worth reading all the way to the end.
    "
    I guess they're acknowledging that most of us usually only skim Slashdot articles...
  • by sgage ( 109086 ) on Friday July 26, 2002 @02:57PM (#3960854)
    "Significantly, no one has ever proved that the brain is a *good* computer."

    And yet, after (insert duration since humans appeared based on latest estimate) years, here we are.

    "It seems to run some tasks like visual recognition better than our existing machines, but it is terrible at math..."

    That's because -precise- math is evidently (get ready for this) relatively unimportant for carrying on in the real world! When a robot, on the run, can throw a stone and hit something else that's on the run, talk to me about shitty meat computers and the superiority of "clean and dry" computers.

    "... prone to errors, susceptible to distraction,

    the source of all innovation, change, progress...

    " and it requires half its uptime for food, sleep, and maintenance."

    Most of which is fun :-)

    "It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? I chose computer science over medical school because I don't have the stomach for those icky, bloody body parts."

    This guy hates his body.

    "I prefer my technology clean and dry, thank you. Moreover, it could be the case that an electronic, silicon-based computer is more reliable, faster, more accurate, and cheaper.

    Go download yourself then. I know you have suffered from depression, but the whole idea that "reliable, faster, more accurate, and cheaper" is the most important part of being a conscious entity demands some explanation. What is the point of intelligence? That's something we don't talk about much on /., eh?

    "But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain archiecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer."

    Well, this is precisely what nobody knows, and why we play the AI game. Maybe someday we'll know.

    "In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!"

    I'm sure this is an exaggeration to make a point, but again I say... here we are.

    "I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat."

    Who is this very perceptive and canny "I" that is making this most fundamental decision? It's a fucking meat computer, that's who.

    "And remember, no one has proved that our intelligence is a successful adaption, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created."

    True enough, though I would bet that, although we might be facing some major (if not cataclysmic) upheavals of our own making in the near-mid future, something from the human line will survive and keep on keepin' on.

    I'm sorry, I know this fellow suffers from depression and all, but the fact of the matter is that meat computers are not "shitty", and the "it remains to be seen" idea cuts both ways.

    The other fact of the matter is that nobody knows how consciousness works. No, not anybody, not even Dennett :-). If there is awesome silicon intelligence that isn't self-aware and conscious, who fucking cares?

    This guy makes me sad. He represents something pathetic to me.

    Wow, I've really rambled.

    - Steve
  • PNAMBIC indeed... (Score:2, Insightful)

    by prester ( 176898 )
    Am I the only one willing to say I had the impression that ALICE helped in writing these responses? Seriously, they display a remarkable aptitude for going on at length about a specific subject, but almost no comprehension of the actual question. Very frequently they open with something tangentially related and then move on to something completely different, a technique described multiple times in the article.

    What the hell, I'll say the emperor has no clothes.
  • I'm going to paraphrase a little bit, but he said early on that academia has essentially become dishonest because people are willing to take "justifiable" shortcuts in order to produce the results necessary to get the project done in time and get more funding. He makes this sound like a less than preferable state of affairs.

    He then later comes out against knowledge-based systems, saying that if both projects started at the same time, he could get results sooner with a system like A.L.I.C.E.

    Although the two cases aren't completely analogous, I'm not entirely sure why we should accept his assertion that a database that can parrot back preprogrammed answers, without any real analysis behind them, is a "valid" shortcut, but then turn around and accept his view that the shortcuts other projects are taking aren't valid.

  • by OpenMind(tm) ( 129095 ) on Friday July 26, 2002 @04:08PM (#3961463)

    I haven't read this story as of yet, but I thought I'd throw my experience with ALICE on the table. I was excited to hear that someone had finally made the ELIZA trick work well enough to fool competition judges into thinking it was a human. I decided to drop by and see how this thing performed.

    IMO, it did terribly. I was doing my best to write as I would speak. I may be a little over-loquacious, but I was definitely not trying to trip the beast up. Neither was I trying to talk like a robot myself. I was trying to make small talk. Or rather, as time went on, trying to make small talk with a crazed beatnik who responded in constant non sequiturs. I went on to simplify what I said to it quite a bit, but it was still fairly bad. About 75% of the time it produced grammatically appropriate responses, but its responses were content-appropriate only about 20% of the time. Even then, it was nothing like talking to another human being who was paying attention.

    My main beef is that the system seems to hold no state data about previous exchanges. All interaction with the machine seems to be broken up into isolated two-sentence volleys, after which it has no memory of the conversation. Hence even fairly simple and common contextual remarks fly right over ALICE's head. I was deeply unimpressed, and somewhat confused at why people were making such a fuss. I suggest you all try this thing out for yourselves.
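    For what it's worth, the kind of state being described here is easy to sketch, even in a toy. The following Python fragment is purely hypothetical (it is not ALICE's design); it just shows what carrying one piece of context, the last topic mentioned, across exchanges would look like, so that a bare follow-up like "why?" is no longer an isolated volley.

```python
# A hypothetical illustration of carrying minimal conversation state between
# exchanges: remember the last topic so a bare follow-up can refer back to it.
# This is not how ALICE works; it only sketches what "holding state" would mean.

class StatefulToybot:
    def __init__(self) -> None:
        self.last_topic = None  # the one piece of state carried between volleys

    def respond(self, line: str) -> str:
        text = line.strip().rstrip("?!.").lower()
        if text.startswith("i like "):
            self.last_topic = text[len("i like "):]
            return f"What do you like about {self.last_topic}?"
        if text in ("why", "why is that") and self.last_topic:
            # Without self.last_topic this follow-up would be an isolated volley.
            return f"You were telling me about {self.last_topic}. Go on."
        return "I see. Tell me more."

if __name__ == "__main__":
    bot = StatefulToybot()
    print(bot.respond("I like jazz."))  # What do you like about jazz?
    print(bot.respond("Why?"))          # You were telling me about jazz. Go on.
```

    Real conversational context is obviously much harder than one remembered noun phrase, but even this much would catch the simple contextual remarks mentioned above.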

  • by ryanvm ( 247662 ) on Friday July 26, 2002 @04:13PM (#3961505)
    Am I the only one left wondering if these questions were actually answered by Alicebot?

    Question: Do you feel that the types of developments that the Loebner prize supports(intentional, hard-coded spelling mistakes, etc.) are actually productive in terms of the AI research project?

    Answer: [blah, blah, blah] Take LSD for example. Discovered by Albert Hoffmann in 1945, LSD is the most powerful drug ever developed. If you have ever gotten a prescription for any drug, you may have noticed that the dosage is usally given in "milligrams". But the dosage of LSD is "micrograms". It has the lowest ED50 of any known drug. [blah, blah, blah]
  • by The Raven ( 30575 ) on Friday July 26, 2002 @06:18PM (#3962104) Homepage
    ... it is some modified excerpts from his Thesis or something. In fact, rarely does he even come close to answering the question asked! This 'interview' would have made more sense if you had cut out all the questions, and simply run all his answers together.
    • Actually, perhaps this explains the weird offtopic replies ALICE often gives... perhaps ALICE is an accurate model of HIS intelligence. That would explain why he thinks people have little-to-no ego/id/consciousness, and why he thinks ALICE is a good model for human AI. :-)
    • Check out the posting history of alicebotmaster (I think that's the ID..)

      A good chunk of the material in this interview was posted previously by that account on this very website. It wouldn't shock me in the least if much of the rest of it was pulled together from other previously published sources.

      Honestly, it's hilarious. A grand experiment that appears to have been wildly successful.
  • Wow, I thought Dr. Wallace was supposed to be shy; I think every answer is more than 300 words. Did anyone actually make it through all three parts? Be honest. If I were Wallace, I would be worried that my competition is going to reverse-engineer A.L.I.C.E. based just on these interview responses.

    Fly Aeroflot.. [aeroflot.com]
  • Okay, first off, this is the *BEST* interview I've ever read on /. bar none. Yes, sometimes he rambled and drifted from the question, but his ramblings (rantings?) were utterly fascinating. I have *tons* more respect for him now (and I didn't hold him in low regard before this).

    *HOWEVER*, this little bit from the "Agent Ruby" movie synopsis (he mentions this as a movie he's working on) gave me pause:
    The only flaw in Rosetta's creation is that the SRAs (Self Replicating Automations) need injections of male chromo found only in spermatazoa to survive. As they cannnot distinguish dreams from reality, Rosetta programs Ruby via movie tapes to seduce men in the real world and share donations with her sisters.
    Sounds like a USA "Up All Night" soft porn flick. If I were in his position, I think I'd see "working on this movie" as a move that would decrease my credibility, something I think he'd want to avoid. What do you guys think his motivations were on this one? Extra cash?
  • AIML makes the same mistakes that classical AI made in that it has a huge database of relational terms, but these have no connection to the real world other than their binary representations. SOME sort of representation of objects in a Mind is required for real understanding of the world outside. Otherwise it's just another arbitrary semantic network.
    ALICE is just a bot, somewhat clever, but nothing terribly new.
    Ideally, a strong AI will be what we think of when we see AI in sci-fi (Data, HAL, etc.), and this will require vastly more computing power than the world has right now, or will have for at least a decade.
  • Who the fuck cares about his legal problems?

    The bipolar disorder in an AI scientist is interesting insofar as Ted Nelson's own psychiatric problem (he can't remember a thing) led him to devise a system of carrying index cards on a belt loop, which led to Xanadu(.org), which, in addition to providing impetus to the Web, also made it impossible for a long time for anyone to work with him without going insane or broke themselves. Anyway.

    I learned a ton about his legal problems (from his one-sided though seemingly truthful description, I feel sympathetic) and about 20% about AIML, which is interesting in itself. But only about 2% about Artificial Intelligence, or anything beyond fooling a simplistic intelligence test with a program designed to fool it.

    Stimulus-Response my ass! Who gives a shit about massaging his ego? Slashdot must have braindead dweebs for editors, or is it cool these days to confuse computer science with a chatbot?

    I'd be far more interested in seeing the legal shit cut out and having an article on this guy's work that objectively notes the limitations of what he's done, but acknowledges that at least he's assembled a body of knowledge and built some simple tools. Not that they are at all useful for Linux, programming, or anything but fooling an intelligence test and people who haven't the slightest idea about the field. How about interviewing a few real AI researchers and giving us some meat to chew on? This pisses me off, and what pisses me off the most is that I had discounted the guy's research, even after going through his whole website, and then read the article to give him the benefit of the doubt, and didn't come out of it with much beyond how great a chatbot he's got. Wasted time! Multiply by the number of readers. Yeah, maybe he should write a book, and he can (almost, but no cigar!) beat a discounted intelligence test by brute force and a microgram of logic. I don't see anything here that sounds like high-powered science, sorry.
  • Have you read Gurdjieff? His ideas on automatism are similar to yours. You might also want to look at www.reciprocality.org

    As for LSD being the most powerful drug ever... you should check out salvia.
  • I actually trawled through all three pages of this drivel and found practically zero answers to any of the, IMHO, worthy questions posed. Instead, I saw ranting from a mental patient, a regurgitation about legal trouble that is pretty minor from all visible aspects, and a lot of hype surrounding what is ultimately a ridiculously complex database of English sentence structures. There's no AI in A.L.I.C.E. It's just a database with some embedded Javascript. It has no state, it provides no answers that were not pre-programmed, and its decision branches are static. It is only Artificial.

    I was quite interested in A.L.I.C.E. because I had high hopes that it somehow involved reinforcement learning for understanding how to converse with people in real-time, or at least symbolically driven natural language conversions, or at the very least some clever state management for topics. The very minimal experiments I did 15 years ago as a high school student were only an order of magnitude simpler than this, and he gets articles written about him? Absurd.

    Yes, I'm venting, but I work pretty hard to keep up to date on many aspects of AI (specifically FLC, decision trees, GA, GP, ANN, CBR, as well as the old school methods), and to see this get any attention at all is insulting to the many hundreds of true pioneers in the field. It must be terribly lonely chasing after a trophy (STT) that nobody values anymore.
