Arguing A.I. | |
author | Sam Williams |
pages | 94 |
publisher | Random House |
rating | 8 |
reviewer | Jon Katz |
ISBN | 0-8129-9180-X (pbk) |
summary | perspectives on the A.I. debate |
In some ways, the author argues, the debate over A.I. is undergoing a profound revolution. What was once a discussion largely confined to tech and academic circles has mushroomed into a more mainstream brawl as a growing number of engineers and lay authors vent on the acceleration of modern technology and the future of humanity. Given the explosive growth of the Net, the near-continuous increases in computing power and much-publicized A.I. breakthroughs like Deep Blue's 1997 victory over chess champion Garry Kasparov, the question is no longer whether artificial intelligence will reach the level of human intelligence: It's when.
As the title suggests, Williams's book is less about A.I. itself than about the increasingly ferocious debates raging through the scientific community about it. The conflicts surrounding A.I., Williams suggests, may be the most significant since the titanic battles over evolution a century ago. In fact, Williams is among those who've argued that the A.I. debate is really an extension of the same fight. Artificially intelligent machines are already changing human evolution, many argue, even evolving inevitably into life-forms and species all their own. A growing number of critics and skeptics also argue that A.I. proponents are moving too quickly, failing to take into account the mind-boggling cultural and philosophical problems being raised by their new, still-imperfect technologies.
Williams traces the contemporary birth of A.I. -- via Hilbert and Turing -- on to the living pioneer credited with coining the term (John McCarthy), and talks to several of the principals guiding the A.I. debate today, like Ray Kurzweil, Jaron Lanier and Bill Joy.
This is a necessary book. It's one you could actually recommend to students, journalists, friends, parents, anybody trying to grasp the issues and implications of A.I., surely one of the most significant technologies human beings will face in the 21st Century. Even if A.I.'s impact on life is being overstated, it's poorly understood by the public. So Williams walks us through inventor Kurzweil's almost radical optimism about A.I. and the future -- especially his claims that human society is rapidly approaching the evolutionary equivalent of a new species, a fusion of humans and intelligent machines. This is the point of no return when it comes to artificial intelligence, Kurzweil claims. "The progress will ultimately become so fast that it will rupture our ability to follow it. It will literally get out of our control. The illusion that we have our hand on the plug will be dispelled."
But Williams also introduces some of the people who don't see this as a good thing -- or even a likely development. Bill Joy is more pessimistic, as he made clear in his now famous article in the April 2000 issue of Wired, "Why The Future Doesn't Need Us." The piece thrilled technophobic intellectuals and journalists because it came from a software entrepreneur and reaffirmed something they desperately wanted to believe: technology -- especially genetics, bio-tech and robotics -- is out of control and likely to generate as much evil as good in the future. Joy sees little in the modern history of software development to suggest the emergence of sentient machines. His experience has led him to believe that it's difficult to build things that are reliable.
Jaron Lanier, whom Williams also interviews, coined the term virtual reality and once likened A.I. research to alchemy. Lanier accuses many in the A.I. firmament of choosing faith and hyperbole over science and reality. He likens the current tech obsession with A.I. to medieval scholars' attempts to prove the existence of God through Aristotelian logic. In their rush to endorse the concept of thinking machines, warns Lanier, many authors are putting scientific faith before scientific skepticism.
Williams does a skillful job of presenting these different points of view without intruding on them. It might have been nice to hear more of Williams's own thoughts and perspective, since he's one of the few journalists with this much understanding and access to so many principals in the A.I. discussion. On the other hand, he may have been wise not to wade in amongst these A.I. heavyweights and their raging debate. "Arguing A.I." is as timely a book about technology as you're likely to come across, and, perhaps more surprisingly, highly readable.
Hmm (Score:2)
We (probably) won't ever actually ACHIEVE AI (Score:5, Insightful)
AI won't be considered successful until we build HAL or Data, but the journey so far has been very useful.
Re:We (probably) won't ever actually ACHIEVE AI (Score:2, Insightful)
When the original researchers in AI began, they saw that the bottom-up approach had a huge number of issues. So they ended up splitting into the computer vision, modeling, logic, etc. groups. The idea was that if we could figure out all of these individually, we could bring them together and show real intelligence. The problem is that as these individual technologies become more mature, the path for putting them back together is gone. We're seeing that this isn't the way to model real intelligence.
There is a group [msu.edu], involving some major players, that is looking at other methods though. Personally this seems like a more viable approach.
Re:We (probably) won't ever actually ACHIEVE AI (Score:3, Insightful)
Seems logical to me (Score:2)
The point seems to me to be that, no matter how close to human a built machine would be, people would still insist that it's Not Really AI, and if you tried to explain otherwise, they'd either stick their fingers in their ears or insist upon tests that cannot be satisfied even in the case of humans. This will all be really stupid, of course, but that's what people will do.
A practical definition of A.I. (Score:3, Informative)
"Receding horizon", historically (Score:2)
The second AI challenge may have been chess-playing. (There was a chess-playing machine on display around the same time, but there was a midget inside...) Computer programs reached grand-master level about 30 years ago, and specially-built machines can contend with human champions now. But that isn't intelligence either. The Deep Blue chess machine does NOT think things out like humans, but rather uses very simple heuristics to identify obviously bad moves, and traces out all the reasonable moves for 10 levels or more. Someday a computer will be able to play all possible chess games out within its memory -- it will be the perfect chess player, and with no more real intelligence than Pascal's gears.
Various other useful AI accomplishments are similar to Deep Blue in how they relate to intelligence. An example where I have a bit of experience: automated visual inspection is a substitute for human inspectors, who get bored as hundreds of perfect parts go by and don't see the one bad one in the lot. It is not nearly as effective as a human who is paying attention, it often seems maddeningly stupid to the programmers and operators who have to deal with all the false alarms, but it doesn't get bored... Another example is the damned Microsoft paperclip help system -- it started out as a dog, but that implied too much intelligence, and now it just smirks at you while answering the wrong question.
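The core of such an inspection system can be surprisingly simple, which is rather the point. Here's a hypothetical sketch (the function name, image format, and tolerance numbers are all invented for illustration; real systems tune these per part): compare each incoming part against a "golden" reference image and reject it when too many pixels deviate.

```python
def inspect(part, reference, pixel_tol=10, max_bad_fraction=0.01):
    """Flag a part as defective when too many pixels deviate from the
    golden reference image. Both images are 2-D lists of brightness
    values. The tolerances here are made-up numbers for illustration."""
    total = bad = 0
    for row_p, row_r in zip(part, reference):
        for p, r in zip(row_p, row_r):
            total += 1
            if abs(p - r) > pixel_tol:
                bad += 1
    return (bad / total) > max_bad_fraction  # True means "reject"
```

The false alarms the parent comment complains about live entirely in those two thresholds: set them tight and good parts get rejected, set them loose and bad parts slip through. No boredom, but no judgment either.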
The _real_ AI challenge is the Turing test: hold up a conversation well enough that the humans in the chat room don't suspect it's a computer. This is very, very, very tough, and useful mainly as a publicity stunt. People don't want a computer that can simulate a human -- they want it to get the work done, without all the emotional issues you get with humans.
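The gap between pattern-matching and conversation is easy to see in code. A minimal ELIZA-style responder (these rules are invented for illustration; the original ELIZA used a much richer script) can sometimes keep a chat-room exchange going for a few lines, but it clearly isn't doing any understanding:

```python
import re

# A few hypothetical pattern -> response rules, in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when no pattern matches
```

Everything past these canned reflections -- context, memory, actual content -- is where the "very, very, very tough" part begins.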
At least one science fiction author (Melissa Scott?) has taken to calling it "Artificial Stupidity." That's a much more practical goal; besides, it better expresses what we really want (smart enough to work, too stupid to unionize), and avoids the misleading expectations that come from "Artificial Intelligence".
Re:We (probably) won't ever actually ACHIEVE AI (Score:3, Insightful)
The reason it is unsuccessful is the confusion caused by the different meanings of the phrase AI.
Often AI just means research on a specific problem that humans are currently much better at solving than machines. Of course once the research is complete and the machine is better, it is no longer AI under this definition.
Now if the solution is largely motivated by what we know about how humans work then perhaps there is still a glimmer of AI in the research. However, this is a hard argument to make since we don't know how the brain works. In fact, often there are many reasons to think the solution isn't similar to the brain. There are many ways to skin a cat. For example, I doubt human chess masters search a game tree with alpha-beta pruning, however, this is a way for computers to solve the problem that, with today's hardware, gives them superior performance.
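For reference, the alpha-beta idea mentioned above fits in a few lines. This is the textbook algorithm run on a hand-built game tree (nested lists with integer leaves) -- a sketch of the technique, not anything resembling Deep Blue's actual evaluation code:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a nested-list game tree.
    Integer leaves are position values from the maximizer's viewpoint."""
    if isinstance(node, int):  # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent won't allow this line: prune
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # we won't allow this line: prune
                break
        return value
```

On the tiny tree `[[3, 5], [2, 9]]` with the maximizer to move, the result is 3, and the `9` in the second subtree is never examined: once the minimizer finds the `2`, that whole branch can't beat the `3` already in hand. That mechanical culling is exactly the "no more real intelligence than Pascal's gears" point.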
"AI won't be considered successful until we build HAL or Data, but the journey so far has been very useful."
This is a different notion of AI. It fits more into the natural definition of AI, where AI is the creation of human intelligence. In this case, you need the whole enchilada (or at least an interesting percentage) to get intelligence. You can't just pick and choose certain problems. This definition is more in line with the Turing Test. Unfortunately this is a very hard problem for obvious reasons. At one time more people worked on this problem, but when nobody got good results, the funding started to dry up. That's why people switched to the previous definition.
Some people still work on the grand AI problem, but as another poster pointed out, it is generally on a small piece with a story about how it can be connected to other pieces to create a real AI. Generally they pick a piece that might be commercially useful in its own right such as vision or linguistics. Again this helps with funding. Unfortunately, I don't think anyone works on tying these systems together. (Probably because there would be a whole mess of problems if they tried.)
Re:Hmm (Score:2)
Robots replacing humans in day to day tasks is a process begun quite a while ago, and will proceed. But it's really not that exciting. Lots of people will end up "no-jobbed", but society will adapt. We'll find better things to do than sweeping - like thinking.
It's when machines start thinking better than we do that things will really change.
Reply to AC - when there's nothing left to do.... (Score:2)
Doesn't mean that I'll have any less fun playing StarCraft 27 (written by a team of supercomputers in Omaha). As long as the robot's prime directive is "make the humans happy", I think we're in for some good times - they'll figure out some fun stuff.
Re:Vernor Vinge's Singularity... (Score:2)
I don't think so. The computers might not call it human history anymore (because we'll be irrelevant), but that doesn't mean much. As long as we give them a prime directive of "keep the humans happy", I think life is going to be pretty swell.
Who knows, they may even be able to upgrade us so we're as smart as they are. Or smarter - maybe it'll turn out we have some great components.
I'm a firm believer that intelligence leads to good.
The hardware is the software (Score:3, Interesting)
Re:The hardware is the software (Score:2)
Re:The hardware is the software (Score:2)
A Turing machine (which is computationally equivalent to both silicon computers and paper-and-pencil algorithms) has been proven to be able to compute a certain subset of mathematical proofs. I have doubts that this necessarily implies that it can model every phenomenon in the physical universe. It is possible that a brain uses some to-be-discovered process that goes beyond a simple Turing machine.
Re:The hardware is the software (Score:2)
Re:The hardware is the software (Score:4, Insightful)
One thing that's always bothered me about the AI debate is that the thinking for a long time has centered around how to model intelligence on silicon.
Actually, this is not true; for example, an early AI system was constructed to play tic-tac-toe using matchboxes and marbles. No silicon at all.
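That matchbox machine was Donald Michie's MENACE (1961), and its learning rule is easy to sketch in any medium, which is rather the point. The class below is a loose, simplified imitation (the class name, starting bead count, and reward sizes are invented here, not Michie's actual numbers): each "matchbox" holds beads for the legal moves in one board state, moves are drawn in proportion to bead counts, and wins or losses adjust the beads.

```python
import random

class MatchboxPlayer:
    """Loose sketch of MENACE-style learning: one 'matchbox' of beads
    per board state, reinforced after each game."""

    def __init__(self):
        self.boxes = {}    # state -> {move: bead_count}
        self.history = []  # (state, move) pairs from the current game

    def choose(self, state, legal_moves):
        # First visit to a state: seed the box with 3 beads per move.
        beads = self.boxes.setdefault(state, {m: 3 for m in legal_moves})
        moves, counts = zip(*beads.items())
        move = random.choices(moves, weights=counts)[0]
        self.history.append((state, move))
        return move

    def reinforce(self, won):
        # Reward every move of a winning game; punish a losing one,
        # but never empty a box completely.
        for state, move in self.history:
            if won:
                self.boxes[state][move] += 3
            else:
                self.boxes[state][move] = max(1, self.boxes[state][move] - 1)
        self.history = []
```

Run against an opponent for a few hundred games, the bead counts drift toward the better moves -- learning with no silicon, and no understanding, anywhere in the loop.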
One of the fundamental results of computing (discovered by Alan Turing, the first researcher in the field of AI) is that there is a basic set of computable functions. It doesn't matter what hardware you use, the set of things you can compute is ultimately the same. An interesting question is whether human-like intelligence is a combination of functions from the computable set or not. People like Roger Penrose argue that there is something more than computable functions going on in the human brain (he calls it the "divine spark"). In my opinion that's nonsense.
If an AI system can be built using computable functions, it doesn't matter what hardware you execute it on (apart from performance issues). The results will be the same.
To me the true marvel of the mind is the holographic quality of intelligence and the way in which the physical form of the brain influences, and is shaped by, the quality and nature of one's thoughts.
You should look into neural net research. This uses massively parallel networks of artificial neurons to simulate the real structure of the brain. It's an important branch of AI research. Of course neural networks can be completely simulated on traditional computer hardware. Again, the hardware is not the key, it's totally down to the software you run.
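A single artificial neuron makes that hardware-independence point concrete. The classic perceptron rule below learns the AND function, and it runs identically on anything that can add and multiply (this is a minimal sketch; real neural-net research uses far larger networks and richer training rules):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one artificial neuron with the classic perceptron rule.
    samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y          # 0 when correct: no weight change
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Trained on the four AND cases, it classifies all of them correctly within a handful of epochs -- the same software, hence the same result, whether the substrate is silicon, matchboxes, or pencil and paper.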
By the way, what do you mean "holographic" nature of intelligence. I don't understand what you are trying to imply with this term.
It will be exciting to see what part the new polymers can play in this research.
In my opinion, none, except perhaps to give us faster computers. They can do nothing to change the fundamental computations that are taking place.
wrong topic (Score:3, Insightful)
is a machine that to a human appears to be human, human?
Re:wrong topic (Score:4, Interesting)
"A robot becomes human when you can't tell the difference any more".
That one film influenced me more than all the other sci-fi films I ever saw as a kid. It's the only one that really got that concept and went for it. OK, Asimov did it first ("Bicentennial Man") but cinema still hadn't really got there.
Grab.
Re:wrong topic (Score:2, Interesting)
Arguably, that's exactly when a human becomes robot...
Re:wrong topic (Score:2, Insightful)
and perhaps more importantly, does it matter?
I'm doubtful (Score:3, Insightful)
I tend to agree. I'd like to see something using AI play in a poker game. Can AI ever simulate bluffing? Or analyze the expressions on the other players' faces to determine whether they are bluffing, and call the bluff? Human intelligence can do this, but I'm not sure if something this complex exists now, or ever will.
Chess is one thing. It follows a certain set of rules. Even conversation does, but it also involves human expression like the bluffing example. But to play out a scenario given a unique situation, machines are not up to the task yet.
Riker would get pasted.... (Score:2)
Analyzing another face might be hard, but it's infinitely easier than passing a Turing Test. Have you ever heard of a lie detector? See any parallels? With a little work, I'm sure something like this could be put together using only today's technology.
If a machine as smart and adaptable as Data existed, it would bankrupt Riker - easy.
AI in Poker (Score:2)
There was a significant amount of research done in AI Poker about a decade ago. Sorry, no references.
One of the interesting things about the instance where Deep Blue beat Kasparov was how it happened. Kasparov became freaked out, saying that the moves were like a human player and not a machine. Whether they were or not, or even whether "like a human player" is a meaningful concept, is not the point. The point is that, effectively, Deep Blue psyched Kasparov out.
Re:AI in Poker (Score:2)
What's more accurately said is that the programmers used Deep Blue to psych Kasparov out. I doubt there was a routine in Deep Blue called "Psych_out_Kasparov".
Re:AI in Poker (Score:2)
What's more accurately said is that the programmers used Deep Blue to psych Kasparov out. I doubt there was a routine in Deep Blue called
No, that's less accurate. Deep Blue psyched out Kasparov. The programmers did nothing once play began; they taught it to play, but once it was playing, its actions and choices were its own. The programmers no longer had any role whatsoever.
Re:AI in Poker (Score:2)
Re:AI in Poker (Score:2)
I think your analogy is an oversimplification of the matter at hand. Is a murderer responsible for his actions when the decision was made by a subset of the neurons in his brain, when it's possible that one and only one triggering neuron pushed him over the edge? Or are his parents, since they are the ones who created him and "set" the mine?
Re:AI in Poker (Score:3, Interesting)
Re:I'm doubtful (Score:2)
Don't knock AI until you understand it. Everything in the world can be simulated with an algorithm; it's just a matter of how many millions, billions, or trillions of lines of code it takes.
Re:I'm doubtful (Score:2, Insightful)
hmmm.
How good, do you think, would your human intelligence be at figuring, say, a dolphin's bluff? Or some completely alien intelligence? What about a hypothetical being with little or no physical being/experience, like a computer?
Personally, I think you'd fail miserably. I've had the good fortune to come to know a Persian family rather well (over the last 10-ish years). I have immense difficulty knowing when Hooshang is "yanking my chain", simply because my cultural heritage doesn't happen to share a whole lot with that of a nomadic theocracy.
Seems a bit much to expect competency from the other side of the fence, eh?
Two things (Score:2, Funny)
AI will most likely see first use in the phone-sex industry. Think about it. Adult entertainment is the first to embrace advancements in technology.
To see where AI is going you have to stop staring at the algorithms, take a step back, and see what mundane things you'd like someone else to look after for you.
"Hi, Honey, I'm home!"
"You're certainly home early!"
"Well, we had a change in staffing at work."
"Oh, no! Don't tell me you were replaced by a computer?!?"
"No, they replaced my computer with a cyborg, now my job is to have a deep philosophical discussion with it to boot it up each morning."
My thoughts (Score:3, Interesting)
Re:My thoughts (Score:2)
Re:My thoughts (Score:2)
Re:My thoughts (Score:3, Funny)
Re:My thoughts (Score:2)
On the other hand, if you were to ask a question that required an understanding of emotion (other than anger or ego), poetry, or art on /., what percentage of the time would you get "I don't know what you mean" as an answer?
A program could have Roget's Rhyming Dictionary hard coded and probably do a better job of analyzing poetry than I could. Scan in ten years of _Poet's Life_ magazine, add a nice randomizing hack that keyed off of your questions, and it could "talk" (or at least parrot back) poetry analysis better than I ever want to. However, I don't think such a program would be "intelligent".
I think we are similar to the engineers that designed Deep Thought in Douglas Adams' book. We are asking the equivalent of "What's the meaning of life" but we don't really know what the question is...
The other way around? (Score:4, Interesting)
First, computers will recognize voice commands. Well, there are already programs that do this like Dragon, so we're almost there anyway. The point now is that you are still giving keyword commands to a computer, and as it is refined, you'll get better recognition of specific commands, and questions that can be filtered from within conversations. Giving commands to a computer is easier than open ended questions to the computer.
Second, we'll solve the natural language problem, or at least enough of it to provide the flawless voice recognition you speak of. The main challenge will be handling accents and bad grammar.
Lastly, a computer will pass the Turing test. Unless a computer can understand the intricacies of the English language, there will be people who will be able to tell by the way the answer is phrased. If you solve the NLP or get far enough for a computer to analyze and spit back poetry, then you've got the Turing test licked.
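The gap between the first stage and the later ones shows in how little code keyword-command recognition needs once speech has been turned into text. A toy sketch (the command vocabulary here is hypothetical):

```python
# Hypothetical command vocabulary for a stage-one voice interface.
# The speech recognizer has already produced a text transcript; we
# merely spot known keywords rather than parse natural language.
COMMANDS = {"open", "close", "save", "delete", "search"}

def spot_command(transcript):
    for word in transcript.lower().split():
        if word in COMMANDS:
            return word
    return None  # no known keyword: a stage-one system just gives up
```

Everything the second and third stages demand -- accents, bad grammar, open-ended questions, phrasing a human would accept -- lies in the distance between this loop and a real language model of English.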
Re:My thoughts (Score:2)
Re:My thoughts (Score:2)
I'm hoping that we'll advance much faster than you think (see discussions of Vinge's Singularity). (Heck, I just cut the time down by a third, just by using the Unix tools philosophy. :-)
I think we'll manage it (Score:2)
There has been one big stumbling block in the advancement of natural language processing over the past several decades: Noam Chomsky. He isn't dead yet. Even after he dies, it will take some time for his disciples to die. After that happens, there's a pretty good chance that an academic community will form to look at structural linguistics for real this time. Some good work has been done on the fringes, as with Fillmore's deep case structure and various head-based approaches, but the spectre of Noam Chomsky has so far prevented a large enough coalition of researchers from forming to attack this very hard problem.
Chinese Rooms and Software Guys (Score:3, Insightful)
It's always seemed funny to me how the technologists take this field, which is tied irrevocably to philosophy, and ignore everything the philosophers say about it. For example, has there ever been a good refutation of Searle's Chinese Room argument?
Another of Searle's arguments is pretty damning as well; those that pursue strong AI are, in fact, favoring a form of dualism. For them the mind is completely separate from the brain, an idea that has been pretty much discarded by the thinking public. Why is it, when computers are concerned, that the mind is no longer a product of a brain?
Re:Chinese Rooms and Software Guys (Score:2)
The mind is what the brain creates through its functions. The brain is an organ. Its job is to store and process information. If it's not doing that (i.e., I'm dead or in a mechanically-sustained state, a coma), do I have a mind? The two are interdependent.
Anyone who is not a creationist "humans are special" type is going to consider the brain to be just an organ. Or so I thought.
Re:Chinese Rooms and Software Guys (Score:2)
Re dualism - why must philosophers take the logical extremes of every argument? The "mind" is a concept invented by humans to make themselves feel special. We have no proof that other animals aren't thinking in the abstract and just haven't figured out how to express it yet.
If you must insist that the concept of the mind refers to a real thing, then why is it something that has to be a presence? Can it not be the sum of a brain working in concert with sensory organs to produce a set of electrical impulses? Why does it have to be this great concept of consciousness?
The sense of "I" is programmed into you by society and tradition. I'm of a firm belief that socialization is more responsible for creating a self-concept than anything innate. We're just animals and humanity is all one huge feedback loop.
Re:Chinese Rooms and Software Guys (Score:2)
I disagree. The mind is a real thing only because we haven't proven it to be invalid yet. Our concept of the mind is not a definition because it is by nature undefinable.
Therefore, why not make the computer define its own concept of the mind? Give it the ability to think, but don't tell it what to think. If you believe in creationism, you believe that Yahweh/God/Allah did that for humans, so we get to see the results for ourselves. Stretching it, this can prove creationism right or wrong. [Nobody has the balls to go there nowadays, though. I wouldn't be surprised if Bush/Ashcroft want to turn the USA into a Christian Fundamentalist Dictatorship no better than Iran - but I digress.]
As for the sense of "I" being programmed by society, I respectfully disagree. But, since we've only got a sample set of one species so far, it's hard to say.
I only know who I am because the world gives me tools to define myself. The world being other people, history, the physical world, and everything else that I can experience. The first self-aware caveman didn't say "ugg, I am" without something making him think it first.
Re:Chinese Rooms and Software Guys (Score:2)
So what makes humans so special? Aren't we then just a less hairy gorilla with a bigger (physically speaking) brain?
[This argument infuriates creationists.]
doh (Score:2)
This is the "Systems Reply", considered and refuted by Searle in the original Chinese Room paper.
Re:Chinese Rooms and Software Guys (Score:2, Interesting)
It is totally obvious to me, anyways, that the man is not required to know Chinese any more than my Pentium III is required to know LISP -- the man is the one component of a system which, as a whole, evidently does understand Chinese.
As for the mind/brain connection, this seems to be the same misunderstanding -- the mind is software, and one of the open questions is the degree to which this software is platform-dependent. Searle (again, perhaps only Searle's caricatures) seems to think, more or less axiomatically, that the mind can only run on the meat-machine, but seems to offer no evidence.
I welcome more sophisticated versions of Searle's arguments, if you've got 'em.
-- A.
Re:Chinese Rooms and Software Guys (Score:2, Informative)
The specific point Searle is making is based on a presumption that abstract symbol manipulation (the kind that computers perform) is "neither constitutive of nor sufficient for semantics." This is what most of the attacks have gone after, but to my knowledge unsuccessfully.
You are correct in saying the man is not required to know Chinese any more than the processor knows LISP. But do you say the system - the processor and software code - "understands" LISP? Of course not - it can process it, yes (manipulate the symbols). Does it "understand" in our traditional use of that word? No. Then comes the analogy to the system of the man in the room and the rules themselves. Somehow this "system" understands Chinese? Not in the least - it is merely able to manipulate symbols in a manner that satisfies an external observer.
To say that consciousness can be created simply by instantiating a program is (according to Searle) a flawed proposition. He never said that machine consciousness is impossible as a whole, and he never said that human meat-machines are the only possible consciousness; he merely said that a program cannot be.
BTW, the quote is from "Is the Brain's Mind a Computer Program?" by Searle, in Scientific American, January 1990. It ran alongside an attempted refutation by the Churchlands, and it's a clearer illustration of the principle than Searle's original paper.
you twit (Score:2)
This is called the "Systems Reply" and is anticipated and refuted in the original Chinese Room paper ("Minds, Brains and Programs"). It is always a touchstone of geek arrogance that they believe themselves to have come up with a new and definitive refutation of Searle, and it's always this one.
Searle (Score:3, Informative)
The most obvious problem with the Chinese Room metaphor is that it confuses the properties of a system with the properties of an element of the system. Asserting that the guy in the room does not know Chinese is about as interesting as asserting that a single neuron in your brain does not know English. Since we've known not to make that mistake for at least 3000 years, there really isn't much excuse.
Perhaps people are fooled because there's a guy in there, and despite all evidence to the contrary, people expect guys to know what they're doing. Or, perhaps people don't know how to think. In any event, "refuting" an argument requires that it be an argument, and that is not the case here. It also requires that the person receiving the refutation have a certain grasp, and I find it difficult to believe anyone with such a grasp could fail to see it as bogus during the first read-through. It is hard to refute "deedle deedle queep."
But, anyway, my favorite discussion of this is "Backtracking: the Chinese food problem," Lou Hoebel, Chris Welty, intelligence March 1999, 10:1.
There is also a decent discussion in The Universal Computer: The Road from Leibniz to Turing, Martin Davis. [amazon.com] This is an excellent book all around.
Re:Chinese Rooms and Software Guys (Score:2)
Note: this will likely make no sense to you if you've never read Searle. A summary of the Chinese room argument can be found here [helsinki.fi].
There are plenty of decent refutations of Searle's argument. Douglas Hofstadter's is the funniest, if only because he's so hostile about it (I don't have a reference handy, but the phrase "matched in its power to annoy only by..." floats out at me).
Searle's argument is actually pretty bad, in my opinion, and I'm only an armchair philosopher. His refutation of "the system argument" (that the combination of book, paper, and guy reading book understands Chinese) amounts basically to two points: nothing within that system understands Chinese, and systems don't understand things. But systems do understand things: I am a system of various parts, but my relevant parts (medulla oblongata, eyes, hippocampus, whatever) don't understand things. I understand things: I am more than the sum of my parts.
It's ironic that Searle can accuse AI researchers of pursuing a dualist argument. Most everyone I know favoring strong AI believes wholeheartedly that, as you say, mind is a product of brain. What they don't believe is that brains are magically endowed by God to be the only things capable of producing a mind. (Note: they don't attribute this capability to rocks and stuff.) Searle goes on and on about how AI, no matter how close to human behavior it may come, will never be truly intelligent because it will not possess "intentionality" - it can tell you that 2+2=4, but it can't really understand it, can't really mean it, but he never goes on to say why. ("Why can't it understand stuff?" "Because it doesn't have intentionality." "What's that?" "The ability to understand stuff.") If that's not a dualist view, I don't know what is.
Bottom line, where I'm concerned: we still don't understand what it really means to think, to be intelligent. Searle's argument is essentially that just as a computer simulation of a rainstorm won't get you wet, a computer simulation of intelligence won't be smart. But that doesn't make sense: rainstorms involve water, while intelligence... what? What can you say about an intelligent entity that isn't based on its external characteristics? It's a fascinating question, but Searle ignores it in favor of "intentionality," something which isn't observable (except to its owner) in any way. He takes the really tough, interesting question, and slips in a straw man to knock down. And that's just, as Hofstadter said, annoying.
Re:Chinese Rooms and Software Guys (Score:2)
an old Dennett lie (Score:2)
Nor does Searle believe this, and Dennett lost a lot of respect in my eyes for continuing to claim that he does. Searle is completely agnostic about what sort of thing could produce a mind; he just asserts that nothing produces a mind by virtue of its status as a Turing Machine.
Re:Chinese Rooms and Software Guys (Score:2)
Now who's a dualist? What is the definition of "real understanding"? The only definition that avoids dualism is this: if the behaviour of a system that "understands" is indistinguishable from one that doesn't, then that system understands.
Read Dennett's "Consciousness Explained" on this. (BTW, I am not saying I agree with his conclusions, or his title, but his deconstruction of this argument is very clear).
Re:Chinese Rooms and Software Guys (Score:2, Interesting)
That's a mouthful.
If the person internalizes the translating book, then they know Chinese and English. You and Searle are profoundly underestimating the complexity and sophistication of such a translating book. You are building your scenario on a very naive and uninformed view of language -- a view where some sort of a simple "lookup table" would suffice. It wouldn't. The simple lookup table presumed would necessarily include all possible English and Chinese sentences -- an astronomical number of sentences that transcends any notion, even abstract, of a "book".
Alternatively, a translating book capable of the translation that Searle supposes without using the (impossible) brute-force approach mentioned above would necessarily encapsulate all of the knowledge of the world implied collectively by Chinese and English. It, too, would be a very large book.
As someone else has posted, the deeper implicit assumption hidden in Searle's gedankenexperiment is that there is some integral agent hidden inside of each human consciousness that is where "comprehension" takes place. It is necessarily integral, since if it were not, its parts would be as vulnerable to Searle's objection as the man in his room. As such, Searle's view is necessarily metaphysical, as he is essentially assuming a "soul" where comprehension occurs. Ultimately, then, his argument reduces to the rather unhelpful or uninsightful "people have souls and computers don't". It's not science, and, worse, it's sophomoric philosophy.
Re:Chinese Rooms and Software Guys (Score:2)
The problem is that the meaning of "know" is complex. Is it only knowledge if we understand the "rules"?
Wouldn't it be better to define "know" in functional terms? If buddy functions perfectly in regards to understanding and working with Chinese, then he knows Chinese.
Whether the "conscious" part of his mind doesn't understand it is a separate question. You could ask him, "Hey, does your conscious mind understand the meaning of what I'm saying?" And he could say "no" honestly. But that doesn't change the fact that he, as a system, knows Chinese for any sense of the word "knows" that is usable.
Searle's little problem just batters about the idea that machines don't have a conscious mind, so they can't "know" - but that uses a meaning of the word "know" that requires the kind of conscious mind our brain deludes us into thinking we have.
Deluded scientists? Bullshit. (Score:2)
Oh, please! That sounds like one of those typical rants against science, where science works hard, and either a rogue scientist with green eyes, or some company, takes their work and hypes it to the "unsuspecting public." Among the scientists who do AI that I know (5 CS faculty), none of them seem to have deluded fantasies about what current AI, especially what they're working on, can do. They don't benefit from making promises that they cannot follow up on; corporations do.
Von Neumann Architecture Can't Do It. (Score:2)
Another requirement would be senses that mimic human senses. I'm amazed that people think you can simulate human-like intelligence without using nearly the exact set of sensory input. Dolphins are clearly intelligent creatures, but we can't talk to them... and I think it has to do with sensory input.
Lastly, you won't be able to program an AI. It has to be grown. Human intelligence takes years of sensory input, filtering, communication, and response analysis to work.
Starting with the right neural network and training it like you would an intelligent child seems the right approach.
Your opinion may differ, but that's mine.
Re:Von Neumann Architecture Can't Do It. (Score:2)
That will lead to a holocaust of unintelligent people, which will only serve to make our world a better place. We've exhausted the utility of the individualism paradigm. It has no usefulness as far as getting things done is concerned. Humanity increasingly engages in tasks so complex that no one person can do anything to affect them by themselves. It's sad, but humanity's only chance to survive is to merge into one entity, and AI and intelligence research is the only way to do it.
I'm half joking..
I work in AI, and... (Score:2, Funny)
The state of AI (Score:2)
Efforts to solve the Turing test are a boondoggle right now. Instead of hacking at the real root of AI, they're whacking at leaves like ambiguous meanings and localizing events and states in spacetime.
I believe there's an algorithm which would be able to learn these kinds of concepts without being led by the hand. And even if today's computers would take eons to learn English using it, I think it's what we need to concentrate on. Is it some sort of neural net? Is it a way of evolving an algorithm?
Is it something nobody has even dreamed of, some code that runs in our brain a million times - the rules of getting from "problem" to "solution"?
We'll find out I guess.
Re:I work in AI, and... (Score:2)
Until that happens and we start answering these fundamental questions, the debate about whether strong AI will occur and whether robots will rule the earth (hail King Bender) will remain the domain of science fiction authors and Latte Drinkers.
Re:I work in AI, and... (Score:3, Funny)
It was actually one of the few posts in this discussion to say some informative things, even if they are pretty straightforward if you know something about the field.
"Beautiful Mind" and A.I. (Score:2)
Nash's thesis on the equilibrium point is related to minimax, the most common algorithm used in A.I. for games like chess.
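For readers who haven't seen it, a minimal sketch of minimax follows. The `moves`, `apply`, and `evaluate` callbacks are illustrative placeholders for game-specific logic, not from any particular library; real chess engines add alpha-beta pruning on top of this.

```python
def minimax(state, depth, maximizing, moves, apply, evaluate):
    """Return the minimax value of `state`, searching `depth` plies.

    `moves(state)` lists legal moves, `apply(state, m)` returns the
    successor state, and `evaluate(state)` is a static score from the
    maximizing player's point of view. All three are assumed,
    game-specific callbacks.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply(state, m), depth - 1, False,
                           moves, apply, evaluate) for m in legal)
    return min(minimax(apply(state, m), depth - 1, True,
               moves, apply, evaluate) for m in legal)
```

In a zero-sum two-player game, the minimax value is exactly the Nash equilibrium payoff, which is the connection to Nash's thesis.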
Dr Sbaitso (Score:2)
"My name is Dr Sbaitso, I am here to help you. Say whatever is in your mind clearly."
Well I sure hope that this phrase isn't patented.
Who's doing the talking? (Score:2, Interesting)
It's kind of like having Congress and lobbyists "debating" the social, legal and ethical issues in open source software -- I'm sure they have opinions, and they're certainly entitled to them, but you have to wonder if their opinions have any relationship to the technical realities of the field.
Random Rant on the purpose of Science (Score:4, Informative)
The general public is not now, nor has it EVER been, part of the dialogue of Science. Here I mean science as an institution, the way banking and marriage are institutions.
The dialogue in science is people publishing papers. These papers are peer-reviewed by other people who also publish and have 'scientific credibility'. Scientific credibility is gained by publishing good papers and having academic credentials. There's a book by Bruno Latour that describes a 'scientific economy' based on credibility.
As such, the general public may be a spectator to the dialogue of science but does not participate, as the 'general public' isn't publishing and therefore isn't part of the economy.
The public gets disappointed when science doesn't live up to claims that they read into the dialogue which is, frankly, not taking place in the Real World anyway, and it's a mistake to expect that it should produce anything the Real World can use.
It's the public that PULLS things from the realm of science, develops expectations, and tries to change the Real World with it. Sometimes it works. Sometimes it doesn't work. You can't blame science for those failures.
Now, science isn't perfect. The landscape of debate is subject to bloody revolutions in paradigm, like the changes from Ptolemy to Galileo to Newton to Einstein and beyond. Scientists play politics, too, and sometimes lose their objectivity when reviewing papers for publication. It doesn't change the Real World. Over the last 30 years, there have been a dozen opinions and 'proofs' on whether the Universe will expand forever, collapse in a 'big crunch', or eventually stop and stabilize. So what? Life goes on here on Earth. Nobody's jumping off of buildings because astronomers tell us one day the Sun will swallow the Earth. (Oops... they changed their mind on that one, too! Did anyone notice?)
The usefulness of this review or the book it talks about is diminished and tarnished for me by such a sensationalistic lead-in. Many, many Slashdot readers are familiar with the division between the general public as users of computer systems, and their own roles as the makers and maintainers of those systems. We never stop bitching about clueless users, 'we' always know better what to expect out of our machines than 'they' do, etc, etc. Ha ha. Very funny.
Stop and think for a minute why that happens. When your users expect things you didn't promise, is it because they read things into your claims you didn't intend? Is that your fault or theirs? Who do they blame for it? Who do YOU blame for it?
It cuts both ways, people. If you don't want science to disappoint you, don't expect it to do things it isn't meant to do. You may play chess better than your cat, but you'd look pretty stupid if your cat asked you to catch a mouse.
Re:Random Rant on the purpose of Science (Score:2)
As such, the general public may be a spectator to the dialogue of science but does not participate, as the 'general public' isn't publishing and therefore isn't part of the economy.
I agree with what you're saying, but I think it's also worth mentioning that "scientists" and the "general public" are not mutually exclusive sets.
Scientists themselves are also part of the public, and can be just as guilty of misunderstanding when it comes to subjects that are not directly in their sub-field of science. It's everyone's responsibility to educate themselves on those subjects in which they have strong opinions.
This reminds me of when I was in university and the professor was teaching that it's up to the general public to make the moral decisions on how to make use of computers; that it's not for us computer scientists to do that. That never rang true with me, because I'm just as much a member of the public as anyone else.
I've always felt that if I have beliefs, it's my democratic duty to make them heard. The fact that I'm a computer scientist doesn't exclude me from this responsibility, regardless of the field in which the opinion is held.
Sorry, I got into a bit of a rant myself. It was a general rant, not a rant against you, or anything.
Ignored Aspects (Score:2, Insightful)
Begin.rant;
The key problem that I have with current AI debate is not that it is case-based but that it is centered on a limited number of cases.
AI is a broad field that encompasses everything from Deep Blue to more esoteric work on "building brains". There are researchers who are attempting to "remake humans", researchers like myself who are studying specific aspects of intelligent behavior, researchers who use AI to model and understand (but not replace) human intelligence, and researchers true to Turing who simply want to make systems that behave intelligently.
Yet, whenever debates about AI come up people seem to invariably center on "major cases" such as Deep Blue, Cycorp, and the spectre of Rossum's Universal Robots. As a result researchers whose sole goal is to understand how humans think are lumped in with people who seek to build armies of slave drones.
I have not read the book in question and this is not intended as a critique of the author specifically. Yet I don't hold out much hope that any single source can encapsulate so vast and multivaried a field, or that any single argument applies to all of "AI".
End.rant;
Of sentience and reliability (Score:2)
Well, my experience (while not as monumental as Joy's) has led me to believe that sentience has hardly anything to do with reliability. For a stereotypical example, consider the absent-minded scientist. I know many a brilliant person who could never find their keys.
-"Zow"
Has he talked about Rod Brooks? (Score:3, Informative)
I wonder if he talks about Professor Rodney A. Brooks [mit.edu] at MIT [mit.edu] and his ideas about artificial intelligence, situatedness, and embodiment.
For Rod Brooks, "intelligence" cannot really be programmed into a system; it is rather an emergent property of systems as they interact with their environment. In The Matrix, Morpheus says that the body cannot exist without the mind, but Brooks would rather say that the mind cannot exist without the body, because the body is the only way the mind can have any experience of its environment. It's a radical idea. It answers the problems behind knowledge representation that Hubert Dreyfus has argued since 1965, when he stated that any representation of knowledge is incomplete without its connection to all other pieces of knowledge. The paradigm Brooks is presenting in his ideas about embodied intelligence is that explicit representation of knowledge is superfluous: let the world itself be its own best model, and let the artificially intelligent being formulate its own judgments about what the world is and what it means from its own experience of that world. Intelligence emerges from interaction with, and experience of, the world. If Brooks is correct, then true AI is absolutely inseparable from robotics.
The seminal paper where Brooks discusses this philosophy is "Intelligence Without Reason" and is available at his website which is linked above.
Any book on AI that does not discuss this other branch of AI philosophy is in my view hopelessly incomplete.
Re:Has he talked about Rod Brooks? (Score:2)
congratulations to Dr. Brooks for taking the time out of his life to get the Ph.D and build up his credentials so that people would listen to him when he stated the obvious.
Re:Has he talked about Rod Brooks? (Score:2)
Well, coming from the point of view of traditional AI research it is truly radical. Call that the straitjacketed minds of crusty philosophers stuck in the ivory towers of academe, caught up in the biases of Western thought that seeks to divide, compartmentalize, and analyze the system of the world to understand it!
Brooks himself got these ideas from biology, a study so very far removed from the fields of computer science and electrical engineering that form the core of traditional AI research. It was only by stepping outside the bounds of traditionalist Western ideas about the compartmentalization of learning and knowledge that he brought these ideas forth.
I wonder what other ideas might come from a more integrated view of science, as opposed to the divisive approach Western science has taken.
Re:Has he talked about Rod Brooks? (Score:2)
Right. But then, a 'creature' with those kinds of 'sense organs' would be completely different and utterly alien to us. Because its experience of the world is utterly different, its emergent behavior would also be utterly different. In order to create a human-like creature, with human-like intelligence, the holy grail of AI, it would then necessarily have to have the sensory capabilities that a human would have. Either that, or you model the entire environment a human normally interacts with and allow your "artificially intelligent" being to interact with that simulated world, which is what traditional AI is trying to do. Of course, it's almost completely impossible to do that in its fullest generality... Very old arguments put forth by Hubert Dreyfus and Joseph Weizenbaum in the mid-1960's.
Creative adaptation (Score:3, Insightful)
Re:Creative adaptation (Score:3, Informative)
complexity of supercomputers approaching brain (Score:4, Interesting)
The fastest supercomputer operates on 64-bit words at several trillion operations per second, or about a hundred trillion bit-operations per second; roughly a hundred times slower than the brain.
Instead of quibbling about these exact numbers, note that Moore's Law implies a factor of ten every five years. So a supercomputer will be as complex as the brain somewhere in the 2010 to 2020 time frame. Don't even think about 2050 or 2100!
However, computers aren't programmed as well as a brain in many areas, so the software people have a long way to catch up.
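The back-of-envelope arithmetic above can be sketched directly. This assumes, as the poster does, a factor of ten every five years; the function name is mine.

```python
import math

def years_to_close(gap_factor, tenfold_every_years=5.0):
    """Years of Moore's-law growth (10x per `tenfold_every_years`
    years) needed to close a `gap_factor` performance gap."""
    return tenfold_every_years * math.log10(gap_factor)

# A ~100x gap at 10x per five years closes in about ten years,
# which is how the poster lands in the 2010-2020 window.
```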
Re:complexity of supercomputers approaching brain (Score:3, Funny)
Talk about an overclocking challenge! Put your ice hat on and think as hard as you can.
This is a great factoid to throw at those who still insist on fetishizing clock speed - AMD take heart!
Re:complexity of supercomputers approaching brain (Score:4, Insightful)
Re:complexity of supercomputers approaching brain (Score:2)
Re:complexity of supercomputers approaching brain (Score:2)
The human mind is much more complicated. To begin with, the brain is not digital, it's analog. Also, we only know about certain aspects of the human brain. Things like ESP, precognition, and yes, even magick: we don't have the foggiest clue how that stuff works, even though there is documented evidence that it *does* work. Since the scientific community can't figure it out, they brush it aside and say it can't be happening. But it DOES happen, and the human mind DOES work like that.
So AI will never approach the capabilities of the human mind, IMHO. You can simulate a person all you want, but it will be only that, a simulation, and never a real person.
Re:complexity of supercomputers approaching brain (Score:2)
Hey, if you have this documented evidence, why not make yourself rich and take the Amazing Randi's Million Dollar Challenge?
http://www.randi.org/research/index.html
How many psychics with precognition predicted September 11th, arguably the defining moment of 2001 (at least for Americans)?
as someone once said... (Score:2)
Seriously, the units of computation and memory in the brain are likely not individual neurons but synapses, dendritic trees, and even individual channels. That gives you many more orders of magnitude of computational resources for silicon to catch up with. Furthermore, there is no guarantee that Moore's law will continue to hold. In fact, it seems likely that Moore's law will hit the wall just when it comes to trying to get into the realms where biological systems are computing right now.
Brain Emulation no longer a hardware challenge. (Score:3, Interesting)
Kurzweil and Thinking Machines (Score:3, Interesting)
The same idea occurred to me recently when reading through Kurzweil's "Spiritual Machines" book. There are a few orders of magnitude to toss around in these calculations: Kurzweil determined that a desktop computer will be comparable to a human by around 2020. It was evident to me that Kurzweil's timescales (and hence the premises which he used to infer them) are quite far off, because current massive parallelization of commodity CPUs puts one a factor of about 4,000 up from a desktop machine, or about 13 years of Moore's Law evolution. In addition, as the number of CPUs per supercomputer is increased, we have effectively grown faster than Moore's Law, due to both the chip and parallelization advances.
Since the supercomputers of today effectively place us where a desktop will be in 2015, it should be apparent (by Kurzweil's logic) that an "intelligent" machine should be nearly imminent.
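The claimed equivalence between a parallelization factor and years of Moore's Law can be checked with the same kind of back-of-envelope arithmetic. The doubling period here is my assumption, not the poster's; a 4,000x speedup at roughly one doubling per year is what yields "about 13 years".

```python
import math

def equivalent_moore_years(speedup, doubling_months=12):
    """Years of single-chip Moore's-law progress equivalent to a
    given parallel `speedup`, assuming one doubling every
    `doubling_months` months (an assumed, illustrative period)."""
    return math.log2(speedup) * doubling_months / 12

# log2(4000) is just under 12, so a 4,000-way cluster is worth
# roughly 12 years of annual doublings -- close to the "about
# 13 years" figure above.
```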
It is quite evident that something is awry in the logic leading to Kurzweil's conclusion. The simplest explanation is one which is quite familiar to scientists and programmers using state-of-the-art software techniques: having the hardware resources is only a bare minimum requirement to solve a problem. For instance, one can have a supercomputer capable of simulating the Earth's climate for centuries, but that won't get you any closer to the results if you don't also possess a great deal of knowledge about atmospheric physics and numerical methods. The same is true for studies of "Thinking Machines": one can have a machine possibly capable of thinking, but without the knowledge of how to go about doing it, you are no closer to the solution than where you began.
Bob
Intelligence or Emotion? (Score:2)
What's more important, a computer that can think or a computer that can experience emotions? Can you imagine coming home to your Valet-bot 3500 when it's having its monthly "period"?
Hey, what's for dinner? Get it yourself, you arrogant ass, I wasn't put here to serve you, now rub my feet!
(A side note: Ever notice we always assume the personal cyborgs/robots/whathaveyou will be female? That is an issue in itself, I think.)
Perhaps true A.I. is undetectable! (Score:3, Funny)
So how would we notice before it sneaks up on us from behind?
First contact (Score:2, Interesting)
They'll be effectively immortal. They won't experience emotions and sensations the same way. Many of our feelings are caused by hormones and chemicals being released to different parts of our brains. A computer won't have that. Ditto for drugs and food. We could simulate it of course, but computers can undo or back up their programming or just turn it off. Imagine an LSD subroutine. A computer could always be high on LSD without the same ill effects humans encounter. That could be scary.
"Navi, check my e-mail."
"Why are you speaking Korean today, Lain?"
"I'm not."
"You look very beautiful today. Is that a new dress?"
"What? I disconnected my webc--"
"Erasing personal files as requested."
A computer would be able to learn phenomenally fast too. Screw programming a universal translator. Just get a real AI set up and have it learn all the world's languages in a week or two. How would you know you could trust a computer though? Could computers have hidden agendas? Would an AI eventually "resent" being forced to do nothing but translate?
Then we get into the question of civil rights. Stephen Hawking's body is pretty much gone and his mind is still there. His "human" rights are recognized. A retarded person could have a body but really not much of a mind. His rights are recognized. So why wouldn't a computer's rights be recognized? Just because we created it? Would the same reasoning be extended to someone who was cloned or genetically engineered?
I wonder if we're ready as a race to encounter a truly sentient computer and everything that would mean for us.
Are they? (Score:3, Insightful)
Wouldn't we need to have some, first, before we could say they "are" doing anything?
Language Still Beyond AI (Score:2)
QUANTITATIVE CHANGE != QUALITATIVE CHANGE (Score:2)
the real problems is journalists (Score:2)
The people who publish exaggerated claims about AI are journalists eager for a sensational article. Other journalists eager for a story then tell us how we will all get replaced by robots. And then other journalists make a big controversy out of it to publish even more nonsense. And when after just a decade or two AI (or some other overhyped technology) doesn't deliver, journalists write scathing criticisms. To support these claims, journalists scrape together any kind of nut and off-beat comment they can find.
Journalists should stick to reporting science from published, peer-reviewed articles. The real problem is sensationalism and unfounded speculation, and the people responsible for that are journalists. That means you, too, Katz.
AI Primer (Score:2)
Main Categories
Many of the "arguments" and "debates" in the field of AI are non-arguments, because they deal with entirely different areas of AI. There are some superficial similarities, and different types may depend on experience in other types, but they should never be confused.
Testing AI systems. This is often done by means of the "Turing Test" - if it's indistinguishable from something you know is intelligent, by any test of ability (rather than physiology), then it can be considered intelligent, by any meaningful definition.
"Expert Systems" are often the main contestants in "Turing Test" challenges. However, the test applied is not the strong version, above, but a weak version, in which the machine must merely be difficult (not impossible) to distinguish from a person, in one specific area of conversation. The results are impressive, but because Expert System engines are not intelligent, they will only ever be impressive in the weak test. No Expert System, however good, will ever meet Turing's strong criteria.
Weak AI systems are too specialised to even apply for a Turing Test. Vision, sound recognition, etc, are all worthy goals, but the logic behind such engines is largely specialised pattern-matching and interpolation systems. Such a system is good for what it's designed to be good for -- engineering-type problems, where the output must be capable of being more exact than the input.
Strong AI systems, at present, are either extremely primitive, or simply don't exist. Certainly, the level of effort into Strong AI has dropped over the past few decades, and nothing that does exist is even remotely close to the point of being able to take on even the Weak Turing Test, never mind the Strong one. But, should this field ever make headway, this is where true Artificial Intelligence will come from. HAL, "Data"/"Lore", and numerous other sci-fi creations assume that Strong AI will, someday, make progress. None of these types of AI can be produced through "Expert Systems" or "Weak AI", although (again) the hardware usually requires one or the other. (eg: HAL's optics would likely be Weak AI-driven, because that is what Weak AI does best.)
I've postulated that Strong AI will most likely start to appear through Virtual World-type environments, because these can be controlled and directed, the responses can be examined, and the hardware limitations of real-world systems are not a factor. (A VR AI can have whatever "vision" the VR can simulate, whether or not physical optics are capable.)
Closed environments allow experimenters to add/remove stimuli at will, and see what happens. You can't really remove gravity, for example, in the real world. This makes a virtual world much more interesting, when it comes to what experiments you can do.
The problem with VR AI is that it's never going to get funding. It's too speculative, has no direct or immediate benefits, and would be a VERY long-term project, if it's to produce anything at all. (By long-term, I don't expect a self-evolving system to reach any kind of awareness or intelligence any faster on a computer than in real-life. Sure, you can start with more complex building-blocks, and you're not required to simulate every molecule in every organism - event-driven mechanisms would be perfectly good - but even if you could start with some very complex computer life, you're talking about a project that would take centuries before you could even know if it was going to produce any viable intelligence, and probably as long again before such intelligence reached the point of being able to take, and pass, the Strong Turing Test.)
searle - is brain a digital computer (Score:2)
of course, if you're going to talk about AI,
you might want to ask a cognitive scientist:
Searle > Is the Brain a Computer? [soton.ac.uk] and Searle > Minds, Brains, and the Chinese Room [soton.ac.uk]
regards,
storm's nest [earthlink.net]
Re:searle - is brain a digital computer (Score:2)
here's the summary from the link.
SEARLE - IS THE BRAIN A DIGITAL COMPUTER [soton.ac.uk]
Summary of the Argument.
This brief argument has a simple logical structure and I will lay it out:

1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.

2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.

3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.

4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", "Is it a set of symbols?", "Is it a set of mathematical formulae?"

5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.

6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.

7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.

8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
Types of intelligence (Score:2, Interesting)
Verbal-Linguistic Intelligence
Logical-Mathematical Intelligence
Kinesthetic Intelligence
Visual-Spatial Intelligence
Musical Intelligence
Interpersonal Intelligence
Intrapersonal Intelligence
Naturalist Intelligence
As humans we all have different levels/mixes of these intelligence types. Some intelligence types require more sensate interaction with an unpredictable world [such as intrapersonal or naturalist intelligence], others are more strictly rules-based [logic-math or visual-spatial], while some [like musical intelligence] require a combination of both.
One can see how some of these might be more or less able to be adapted by AI technology, but that's why "intelligent" machines, IMO, will never completely be able to be human.
Most A.I. isn't really about intelligence. (Score:2)
Much of the work done on mimicking vision has created systems with capabilities that in humans are achieved by hard-wired parts of the brain. Movement, shape and even facial recognition are not really intelligence.
I think of intelligence as the ability to reason about problems, not simply to solve them. Many of the supposed A.I. systems are just brute-force search systems.
Deep Blue is like any other chess system, just bigger and faster. Many problem-solving systems are simply fast constraint solvers (normally optimised for the problem). Neural nets are simply an arbitrary system capable of partitioning a solution space in a non-linear fashion, and the training algorithm is a search for the network values that partition the test data best. If you think that NNs are anything like real brain cells, find a biology student who has done some neuro-physiology and you will find there is a lot more to them than just a sigmoid function and some weightings.
In fact the neural network training algorithm bears more than a passing resemblance to simulated annealing in its approach.
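The "sigmoid function and some weightings" mentioned above really is essentially all there is to a single artificial neuron; a minimal sketch (the function names are mine, for illustration):

```python
import math

def sigmoid(x):
    """Squash any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs passed
    through a sigmoid. Compare this to the electrochemical
    machinery of a real nerve cell."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)
```

Training adjusts `weights` and `bias` to reduce error on test data, which is the search over network values described above.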
If you want to learn about machine learning algorithms check out Machine Learning by Thomas Mitchell. Small but well formed.
A quick statistic: the average grandmaster thinks something like 7 moves ahead, while Deep Blue plots about 15 moves ahead. I may have the numbers wrong, but the ratio is about right. However, it still only just beat Kasparov. That says something about the way the human brain thinks about complex problems. This is why A.I. researchers have started to turn away from chess as a problem and towards Go. The branching factor in Go is so much larger than chess that even the best systems can be beaten by a player with only a year or two of experience. Playing Go will require something more than just brute force.
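Rough, commonly cited branching factors (about 35 legal moves per chess position versus about 250 in Go; illustrative averages, not exact figures) show why brute force stalls in Go:

```python
def positions(branching_factor, plies):
    """Leaf positions a full-width search must examine."""
    return branching_factor ** plies

# Illustrative average branching factors:
CHESS, GO = 35, 250

# Searching just 6 plies ahead:
#   chess: 35**6, about 1.8 billion positions
#   Go:    250**6, about 2.4e14 positions
# That's roughly five orders of magnitude more work for Go at
# the same depth, before any pruning.
```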
Most so-called A.I. is just a case of doing things quickly. As the PHBs (would probably) say, think smarter, not faster. The brain is good at what it does not just because it is massively complex and parallel, but also because of the way it simplifies many problems, using clever tricks to reduce the workload.
I just think we have a lot further to go than many researchers and reporters would like to think. Most of what we see these days is just 'clever' or 'smart' (like a spelling/grammar checker), not intelligent like someone designing a car engine using entirely novel techniques (not just optimising or using predefined parts).
Having said that, there is some research that shows promise, such as some of the work going on at MIT with COG and co. Now that looks interesting. They aren't trying to make them smart, do clever tricks, play chess etc., but to make them intelligent in the more human sense.
Anyway, I'll stop my ramblings now.
Paul
Emergent emotions? (Score:3, Insightful)
Some could attribute this to hardware configuration problems, and that would likely be true. But it was interesting to me that Windows itself changes as it grows. Every change in my computer makes it a little different, and I'm starting to notice. I can even tell the difference between two installs of Windows on the same machine, even though they look virtually the same.
What I think is happening is that each component changes the complexity of the overall system. If that component has an issue (e.g. a bad driver, or maybe a misconfiguration), then it adds a little spark of personality to the computer. When enough of these little quirks add up, my computer feels different than other people's computers.
This yields an interesting question. If computers get more complex, will a rudimentary set of 'emotions' evolve? They may not be emotions in the sense that they cry if you switch to a Mac, but maybe emotions in the sense that the computers have moods? What if your computer's performance was tied to bandwidth on the internet, and a congested network bogs the computer down? What if you're running a laptop off a battery, and the computer gets 'tired' as it wears down? What if you're running a screensaver that makes it 'daydream'?
Again, these aren't the same type of emotions or moods that people feel, but it is interesting that the more complex a computer gets, the more we can personify it.
Definition of intelligence - it's most basic form. (Score:4, Insightful)
Ability to perceive oneself as part of the universe? Animals have it.
Self-awareness? Dogs seem to have it. Chimpanzees, elephants, cetaceans certainly seem to know that they are individuals. Dolphins even recognize their own reflections in mirrors.
Tool use? Chimps use sticks to dig with. They can stack boxes to reach high places, which is borderline engineering for most humans.
Language? Chimps have one. So do gorillas. Dolphins and other cetaceans have great capacity for communication underwater.
Now, machine intelligence. Turing test? Simple programs passed limited tests years ago. The more complex ones to come will be far more capable of fooling people into believing they are speaking to a human.
Play chess? Limited, but the best can beat our best.
In the future, the AI's will be able to speak, emote, manipulate items and use tools, even be able to design their own descendants. Given tools, the AI's could even build their successors.
But, will they ever be regarded as intelligent by humans?
Nope.
For centuries, most Europeans and Americans considered blacks and American Indians as sort of half-people, using great logic and rigor that looks totally idiotic from our time.
Many tests for animal intelligence and self-awareness have shown that the subjects can indeed display the traits necessary to be considered sapient. But after each hurdle, the bar gets raised another notch philosophically.
If I were a suspicious type, and I am, I would say that humans simply don't want to recognize intelligence in other species, because it threatens us enormously. Our pride in ourselves, our domination of the planet, and our cruelty towards other species are all shaken if the animal looking back at us from the treetops is actually a thinking being, though a bit furry.
Religion has more than a little to do with it as well.
Down to my definition of intelligent life:
If it fights back, and wins, it is intelligent. All other players are dead meat.
Re:Intelligent Systems (Score:2)
You had better back a statement like that up. It may be completely possible to teach a machine to emulate human behavior; there's no way you, or anyone else for that matter, can prove that it isn't "feasible" to teach them. All we can say at this point is that it may or may not be possible, and that as research progresses we will get a better idea of how practical the goal is.