A.I. and the Future
The guesses about the future above are as good as yours or mine. But Spielberg's haunting and provocative movie A.I. has opened a window into human consciousness and the moral implications of artificial intelligence.
This window is unlikely to stay open very long. The next Monica Lewinsky scandal is always around the corner, ready to fuel the Big Media machine and distract the public. Given the short attention span of Americans in particular for scientific issues like this (genomics, copyright, intellectual property, fertility research, alleged global warming), it's worth beginning a discussion on A.I. now. Where is it going? Which vision of A.I. and the future do you think is closest to reality? Will machines make us increasingly dependent on them, as the Unabomber suggests? Will they take us over, as George Orwell believed?
Or, as M.I.T. computer scientist and artificial intelligence researcher Kurzweil suggests, will humans and machines -- especially miniaturized, increasingly powerful computing machines -- simply become an integral part of our bodies and lives? Kurzweil envisions the distinctions between these two "species" and entities (biological and digital) rapidly blurring.
It says a lot about our willingness to think seriously about technology that no national politician has ever addressed these issues in a meaningful way. But a murderous student of technology has:
Unabomber Kaczynski wrote in his infamous manifesto:
"As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to turn the machines off, because they will be so dependent on them that turning them off would amount to suicide."
Reading that excerpt, it occurred to me -- not for the first time -- "What a shame this demented creature chose to express himself through the maiming and murder of innocent people." Because he sure has a point.
hello? (Score:1)
Human looking robots and "the law". (Score:1)
Re:As I wrote to Bill Joy.. (Score:1)
"I'll point to computer viruses as the hackneyed example: What could have been a terrifying constant danger has become no more than a nuisance. Why? Because there are very intelligent people working in the field, and there are such people because there is a demand for protection."
And, um, because if you write a virus and release it, you'll most likely get caught and go to prison.
Just a thought.
Enlightenment (Score:1)
Re:Scary quote (Score:2)
Re:AI is a tool (Score:2)
Re:Asimov's laws (Score:2)
Of course there's nothing that stops the robot from learning how his own brain works and hacking the laws out of himself.
Re:My life is already dominated. (Score:2)
I think it's really an academic question as to whether "intelligent machines" will arise, or whether they will manifest the phenomenon we refer to as consciousness.
We already have machines that are starting to do things that make this question moot: the "music composing" machine that fooled the music experts, Deep Blue, etc.
It's already happening, and it's inevitable.
Are these machines conscious? No, but it doesn't matter. To us. They are tools. Nothing more. Can an army tank be accidentally set to drive forward mindlessly without a human driver, and run over buildings and cars and cows? Yes, of course, and the same thing can and will happen with "intelligent machines" as well. It's inevitable. Especially as their workings become so complex as to be unpredictable, even to their designers. (hell, I'm supporting software that does that already).
Is humanity in danger? Of course. We've been in danger of extinction from our unnatural tinkering from the first time Oog started a campfire near some dry grass.
Why all the hemming and hawing about it now? Because it sells books, and tickets to movie theaters, and gets venture capital for companies working on "AI".
I personally believe that natural (or supernatural) human consciousness will never be duplicated by a machine. Others believe that one day, we'll have the mastery over the physical universe that will permit that. Frankly, that's not an important question, because outwardly, "intelligent machines" will be indistinguishable from conscious beings long before we might actually reproduce human consciousness (if that were possible).
Outwardly indistinguishable.
Long before that, either we'll figure out adequate safeguards for such machines, or we'll be the victim of a stupid accident, and become another species on the very long list of extinct species. No biggie. It happens.
Re:Amen, and here are some numbers (Score:1)
Douglas Adams said that 20% of your brain was useful and the rest was made up of penguins. Therefore you only need 168Tb, plus a small amount to represent the idea of a penguin. (That's in one of the Dirk Gently books.)
Serious answer:
You assume that in order for something to appear intelligent (in human terms) it needs to work in the same way as the human brain. This is not true. Although neural nets could allow us to understand more about the brain they are not the only (and may not be the right) way of simulating intelligence.
Re:Asimov's laws (Score:2)
Zero: A robot must not harm humanity, or through inaction allow humanity to come to harm.
One: A robot must not harm, or through inaction allow to come to harm, a human being, as long as this does not conflict with law zero.
Two: A robot must obey all commands given to it by a human being except when these conflict with the first law.
Three: A robot must preserve itself at all times unless by doing so it contradicts the first two laws.
So, assuming this is programmed right, law Zero could most certainly be applied to kill as many humans as required for humanity to progress, and most certainly allow AI to take over humans in every aspect of their lives.
The Foundation books by Asimov illustrate this quite clearly. The master puppeteer across thousands of years is an AI robot. For the good of humanity, of course.
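Since this is Slashdot, here's the loophole as a few lines of code. This is a toy model (every predicate and dictionary key is made up for illustration), but it shows how a naively programmed law Zero excuses violations of the laws below it:

```python
# Toy model of the Zeroth Law loophole. All keys are hypothetical --
# the point is only the precedence ordering of the laws.

def permitted(action):
    # Law Zero: never harm humanity...
    if action.get("harms_humanity"):
        return False
    # ...and Law Zero *overrides* the lower laws: anything judged
    # necessary for humanity's progress is allowed, full stop.
    if action.get("required_for_humanity"):
        return True
    # Law One: otherwise, never harm a human being.
    if action.get("harms_human"):
        return False
    # Laws Two and Three (obedience, self-preservation) would follow.
    return True

# Harming a human is normally forbidden...
print(permitted({"harms_human": True}))                                 # False
# ...but "for the good of humanity" it sails right through.
print(permitted({"harms_human": True, "required_for_humanity": True}))  # True
```

The hard part, of course, is that "harms_humanity" is a single boolean here, while in the books it takes a robot thousands of years of judgment calls.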
Karma karma karma karma karmeleon: it comes and goes, it comes and goes.
Please think a bit... (Score:1)
It's just hype, not even techno.
Some AI researchers have made wildly optimistic predictions about how quickly AI would advance. They have been mostly wrong so far. We just barely got a computer program that can beat a person in chess - and that's by use of brute computational power and some very clever programming by people.
We still don't really understand what this thing called "intelligence" really is. How do you expect to solve a problem without understanding it?
The idea of "evolution" of machines is just a cop-out. We don't know how to create something, so we just put things in a room and hope they will create themselves -- and in only 50 years.
Good grief! Biological evolution took several billion years, and it occurred as a massively parallel computation (this view is stolen from Stanislaw Lem).
Just think. Programming computers is essentially engineering. To solve problems engineers need some science that explains how things work, otherwise they just hack (and sometimes create working systems). But without science the engineer is as good as an alchemist.
Now consider that Newton figured out the science of how to fly to the moon in the 17th century, and it took engineers over 200 years to actually build a machine that could do it.
Why do you think it's possible to create an "intelligent" machine, when we can't even agree on the definition of the problem...
Re:I'm not afraid (Score:1)
--
Re:Asimov's laws (Score:1)
0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
Also, did you see what happened to R. Daneel Olivaw? He becomes a megalomaniac, and in fact rules the whole galaxy and humankind himself.
--
Re:Asimov's laws (f#ck them!) (Score:1)
Asimov's notions of AI/robots come from an age where if you weren't white and male, you didn't count, and the future would be full of "ATOMIC" things -- so why the hell would a robot get any better shot at sentient independence?
For crying out loud, humans aren't going to last forever... Our best chance to pass on the "spirit of man" may be to eventually merge with the machines to increase longevity, create an artificial life form that can evolve.
Re:Enlightenment? (Score:2)
Re:Comparing apples and fire hydrants. (Score:2)
All this stuff about leaving behind the bits controlling bodily functions and sex drive and so on is a bit naive, I think. I'm guessing that all of these messy sensory inputs and systems are necessary for intelligence and consciousness. I mean, if you think intelligence is just a set of well-defined rules linking together a set of ideas, there's Cyc, which I don't think anybody considers intelligent.
Re:Asimov's laws (Score:1)
The general theme of the movie was how a robot can be created to act like a little boy. Human nature and being human (unfortunately) can occasionally contradict the rules above.
But think about it: did David knowingly harm anyone? At the time the harmful actions were performed, he didn't realize they were harmful. Which reflected the intelligence of a little boy.
Re:Enlightenment? (Score:1)
Judging from the other articles [slashdot.org] under this topic, it doesn't seem to be used very consistently.
On the other hand, was anyone else jealous momentarily to see that Rasterman is going to get an EVA? If anyone can save Tokyo-3, it's him.
Re:I'm not afraid (Score:1)
You forgot:
Speaking of Kaczynski (Score:1)
It's really nice to see a Katz story which is actually shorter than the Unabomber Manifesto. Way to go!
Cheers,
Re:Current state of AI. (Score:1)
Are we EVER going to move away from Eliza responses?
Re:Current state of AI. (Score:1)
me: what do you mean, Why so negative
Alice: Zzyzx, "Is that from your favorite movie" What did I mean by it?
Who exactly is going to be fooled by this?
AI isn't needed for this scenario (Score:2)
20 years ago this was a dominantly human directed process, though radar, radio, etc. were already totally necessary components. Incrementally over the years more has been automated. In the last couple of years steps have started towards having planes automatically dodge each other. They already pretty much navigate from known origin to known destination. Once planes keep track of each other, and landing is automated (probably technically doable today, but not yet acceptable) then we move towards the stage where the pilot is just there as an emergency backup, who probably wouldn't be able to do anything anyway (the instruments display via the computer, active flight depends on a computer to manipulate the wing controls, etc.). Maybe he'd be able to reboot it.
This process proceeds without any intention on the part of the machines, but it causes the entire flight experience to be totally dominated by these same machines. No A.I. needed, beyond the ability to dodge, navigate, and land. (These are either already here, or just about.) And social acceptance, which is the sticky point.
Watching drivers, I have frequently speculated how much safety would improve if computers were driving. Not really practical yet, except on specially prepared test tracks, but slow steps in that direction are visible, if you look for them. Again, a big part of the limitation is social acceptance. Without that, any automated car will be held 100% responsible for any accident that occurs, regardless of the circumstances. So nobody works very hard on developing it. (And it is a tricky problem, no question!) But eventually it will be here. Then the computers will have taken over the cars.
Caution: Now approaching the (technological) singularity.
Re:Asimov's laws (Score:2)
So much of the argument is a projection of our own purposes onto a computer. This is a bit weird, as I find it quite difficult to get a computer to see things the way I do on purpose. To assume that it would happen by accident strains credulity.
Caution: Now approaching the (technological) singularity.
Re:AI isn't needed for this scenario (Score:2)
The question, as I understood it, was are we in danger of the machines taking over. The point I was making was that they were taking over. Danger is rather beside the point. So is sentience (though some intelligence is needed). And so is intention, on the part of the machines.
As the machines take over more and more, their control systems will, inevitably, become more complex and inclusive. They may never be what we currently call sentient. But this may not matter. (Have you ever read "The Machine Stops" by E. M. Forster?)
Caution: Now approaching the (technological) singularity.
Re:AI research?? (Score:2)
I think the fundamental problem is that we don't know what intelligence is, and so are, understandably, finding simulating it quite difficult.
People have been saying since the fifties that, in 10 or 20 years, we'll have sufficient computing power for a machine to become intelligent. Well, I have more computing power than those people could have dreamt of sitting under my desk right now. It didn't turn out to solve the problem.
What AI research needs is plenty of "RI" to crank through the conceptual problems too, not just the biggest supercomputers money can buy.
Cheers,
Tim
Asimov's laws (Score:2)
1. A robot must not harm, or through inactivity allow to come to harm, a human being.
2. A robot must obey all commands given to it by a human being except when these conflict with the first law.
3. A robot must preserve itself at all times unless by doing so it contradicts the first two laws.
A.I. didn't reflect this very well - David put humans at risk at several points. This gives an inaccurate, overly frightening picture of the intelligent machines we would likely create - it somewhat serves as FUD.  "Oh no, the robots will only act in their own best interests, and we'll die."
AI research?? (Score:1)
Huh? (Score:2)
Didn't Katz say it was a POS last week? Heh. Katz, reread last week's column before writing this week's!
And to answer your question.
Intersections used to have a policeman to direct traffic. Now we have automated lights. Ever tried to get around a big city when the lights go out? It's tough, but the policemen on the corners try to keep things going.
Oh, and what do people do after a hurricane? They cook on grills, and live in tents.
My point? People adjust to the point of least resistance. We rely on machines and automation because it is easier. If/when the machines die, we go back to doing the old/hard way. It sucks for a while until we get used to it, but such is life. I feel an undertone of "we'll all die out after the machines are gone" in your column. Let me reply simply, "No, we'll just adapt and start inventing new machines."
Out of curiosity... (Score:2)
----------
As I wrote to Bill Joy.. (Score:2)
If you want to truly 'protect' humanity from this technology, get a PhD in a relevant field and start doing serious research. Because that's the only place where you are going to be able to institute controls, at the development level. And by control I do not mean creating Ethics Police, I mean developing techniques that stop problems before they start, or at the very least, clean up minor messes before they become serious threats.
I'll point to computer viruses as the hackneyed example: What could have been a terrifying constant danger has become no more than a nuisance. Why? Because there are very intelligent people working in the field, and there are such people because there is a demand for protection.
Also, where does it say that humans and computers fusing is a bad thing? Explain to me what is so special about a given classification of elements. So what if my far future descendants make significant use of inorganic chemistry in their physiology? Big whoop. I'd still call them kin.
David Kaczynski (Score:2)
Re:I don't think it will ever happen (Score:1)
It was resoundingly bashed by a number of contributors to a special issue of the journal Psyche (which is on the web somewhere) -- but Roger came back, undeterred, with a rebuttal to all of them. Very technical stuff.
Re:A.I.--a non-issue in today's world (Score:1)
A trend per se does not an argument make.
Anyway, computers still can't play Go very well compared to good human players.
Re:Amen, and here are some numbers (Score:4)
Re:OK, Jon, you *obviously* didn't read Hofstadter (Score:1)
You need to read Fluid Concepts and Creative Analogies by Douglas R. Hofstadter. It explains a few "experiments" he led and other FARG programs. Tabletop is very interesting, and there's a recap on Copycat, if you already know about the project.
Cheers!
Re:Moral's not mentioned (Score:1)
That seems to me more like a point in favor of the machines...
Well Said (Score:2)
I could not agree more. IMO, the most exciting research in AI right now is the work being done at MIT by Rodney Brooks [mit.edu] and his students and colleagues. Dr. Brooks is also keeping a close eye on progress in computational neuroscience and I expect a few conceptual breakthroughs to come from that sector in the not too distant future.
The traditional AI community has convinced itself that AI will come gradually. They're in it for the long haul. I completely disagree with that assessment. I am convinced (as is Dr. Brooks) that there is something important that we are not seeing. Once we see it, AI will be upon us like a flood, almost overnight.
Re:A.I.--a non-issue in today's world (Score:1)
The human brain has 100,000,000,000 neurons. Each neuron has an average of 1,000 connections to other neurons. It is probably most accurate to think of the connections as being the active components. The cycle time is 30ms.
So you can do about 3e15 computations per second. The typical new desktop these days is about 3e9 instructions per second. So the gap is a factor of about 1,000,000. This gap should close in about 20-30 years, assuming Moore's law continues.
I know this is rough but it does give the flavour of the problem.
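For anyone who wants to check the arithmetic, here it is as a few lines of Python (the 12- and 18-month doubling periods are my own reading of "assuming Moore's law continues"):

```python
import math

# Rough brain-vs-desktop estimate from the numbers above.
neurons = 1e11            # ~100 billion neurons
connections = 1e3         # ~1,000 connections per neuron
cycle_time = 0.03         # ~30 ms per "cycle"

brain_ops = neurons * connections / cycle_time   # ~3e15 per second
desktop_ops = 3e9                                # ~3e9 instructions/sec
gap = brain_ops / desktop_ops                    # ~1e6

# Doublings needed to close the gap, converted to years for
# two assumed doubling periods (12 and 18 months).
doublings = math.log2(gap)
print(f"gap: {gap:.1e}, years to close: "
      f"{doublings * 1.0:.0f}-{doublings * 1.5:.0f}")
```

Running it gives a gap of about a million, closing in roughly 20-30 years, which is where the estimate above comes from.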
Current computers have the processing power of an insect brain, and they are mostly about that smart.
I think computer scientists have done pretty well given the lack of CPU power that is available. A lot of things that computers have trouble with, such as vision processing, are handled in the brain by brute force - many neurons in parallel doing lots of simple repetitive things.
Assuming Moore's law continues, we are going to see a dramatic closing of the gap between silicon and carbon based intelligence in our life times.
AI is the field to be in over the next few years. Having the processing power is not enough. There will be many theoretical questions to answer before we can build truly smart machines.
where to begin.... (Score:1)
Seriously, I don't think it is a 100% issue. There will always be degrees of acceptance--even in the techie world. I don't have a cellphone, nor do I want one, but I know fellow techies who have to have the latest version. Look at the Amish. For those of you unfamiliar with the Amish, they don't use technology at all (or in some cases very rarely). They use a horse and buggy to go places. This is the complete opposite of the person who has to have the latest technology to play with. A vast majority of people will fall in between these two extremes.
I don't think we can call AIs a species--if that is what Jon was doing, because I am unsure of his intentions.
I think there will be some who are ruled by machines, and there will be those who rule the few machines they have. Given the diversity in humanity, I think both of the cases Jon mentions will happen at the same time to varying degrees. There is not a whole lot of the time when something is 100%. I for one refuse to become a cyborg (or Borg, in Star Trek speak) with machines integrated into me.
Re:Enlightenment? (Score:1)
Re:OK, Jon, you *obviously* didn't read Kurzwiel (Score:2)
(hehehe)
Re:Amen, and here are some numbers (Score:2)
Re:I'm not worried. (Score:1)
Re:Ray Kurzweil link (Score:1)
You bothered to point out that the link was just the address....So, why didn't you post the link [kurzweilai.net] ?
----------------------------
Re:Resistance is Futile (Score:1)
I don't think I'd like to have my body hacked.
Thank you.
Re:Asimov's laws (Score:1)
---
Re:Asimov's laws (Score:2)
---
Re:Does it matter? (Score:1)
*Yawn* (Score:2)
And yet, humans are still on top. We have not yet reached the point where the creation is greater than the creator, and it's unlikely that we ever will -- As machines take over certain mundane aspects of human life, we move on and put our time into other things.
Re:*Yawn* (Score:2)
Machines do not have free will.
Re:AI research?? (Score:1)
Moral's not mentioned (Score:1)
Money management, paperwork, etc.. these things are half done by machines already, and this will simply continue. However, a real (and unpredictable) breakthrough would be a machine that makes any kind of moral decision.
-Jon
Re:Money (Score:2)
"One World, one Web, one Program" - Microsoft promotional ad
Re:Speculating about AI in this way is ignorant (Score:2)
Re:Speculating about AI in this way is ignorant (Score:2)
Speaking to the former point, however, you remind me a bit of David Pearce's thinking (which he calls the Hedonistic Imperative [hedweb.com]); if we can recreate consciousness, surely we can leave out the ability to suffer. I said to him, as I say to you, it may not be possible. And I mean, fundamentally, impossible. You are treading over interesting ground with respect to fundamental aspects of consciousness and subjective experience that we do not understand yet. If it were possible to systematically prevent suffering, however, I would tend to agree with him that by allowing suffering when we could choose not to, we would be cruel. Regardless, I am certain that we, as a people, would do it anyway.
I define the notion of "soul" as the idea that there is some agency beyond the brain which is responsible for our consciousness, our decisions, or our identity. I would hold that this has nothing to do with "good" and "evil," a dichotomy which is arbitrary and based, as much as we have a species-wide consensus on the subject, on our instincts, our genetic heritage.
Re:Speculating about AI in this way is ignorant (Score:2)
I will go farther to say that I believe our machine consciousnesses will do what we make them to do, just as we do what evolution requires of a successful species.
Speculating about AI in this way is ignorant (Score:3)
I believe that the soul is sentimental superstition, and that the notion of human consciousness as somehow fundamentally "unique," "indomitable," or "unassailable" is insecure and adolescent. I have no doubt in my mind that we can and will make machines "in the likeness of a man's mind," and that these systems will, whether we grant it or not, be every bit as "human" in their thoughts as I am - they have my sympathy in advance.
We will, of course, learn a great deal of very important and revolutionary things about ourselves along the way. I believe human consciousness, not genetics or space, is our next great frontier, and we may see revolutionary developments there in our lifetimes. Cognitive science is a remarkably well-funded academic discipline, and has been the subject of massive and relatively quiet investment for several decades.
However, right now it's mired in very un-sexy pursuits, needling sea slugs and flies and mice, and we're still hammering away at nerve cell biology, chemistry, and physics. Pure theory of consciousness is pretty much at a standstill, after the great claims and great failures of the computer science-based AI folks, who showed pretty uniformly that, while they could do a lot of neat tricks, they had little fundamentally in common with the operation of human or animal intelligence, thereby at least giving us a slightly better definition of it.
And, in the meantime, we have "luminaries" who love to sit around in masturbatory celebration of what the future will be like, although this has the feeling to me of a popular science magazine speculating about how we'll all travel around in air cars and eat food pills and vacation in space. It has nothing to do with the real implications of AI, and after the 100th or so run through the science media grinder, these tired old speculations are poor company whether they turn out to be true or not.
Re:Obstinate hardware (Score:1)
I know.. It's modded up as funny, but I really wasn't kidding. Much.
--
PaxTech
Obstinate hardware (Score:3)
But try that with an IBM and you'll get nowhere... IBMs need to be cajoled or bribed into working. Just say loudly, "Well, I *was* going to double the memory on this machine, but since it won't boot..". Works every time.
Compaqs, however, require a judicious application of percussive maintenance. They just won't listen to reason at all.
Also, NEVER NEVER NEVER screw the case cover back on before testing the card / memory you just changed. This shows the machine that you lack humility, and it will of course refuse to work. Turn it on and test it, THEN replace the cover. This shows the machine the proper respect.
--
PaxTech
Word to the Wise about Compaqs (Score:1)
My life is already dominated. (Score:4)
Re:Obstinate hardware (Score:1)
(Tip: praising Steve Jobs also works well. The machines begin then to radiate a yellow light and you can hear angel-like voices coming quietly from the speakers, praising your name.)
Gödel, Escher, Bach (Score:1)
It's about self-reference in art, language, logic and mathematics. Or about self-reference in language -- any language that is structured enough. But as a book on the musicological aspects of the Art of the Fugue it is pretty dense, since you have to untangle the mathematical references that illuminate the structural aspects of Bach's opus. As a book on Escher's art it is rather slim, amounting to more or less twenty smallish, black-and-white pictures. As a book on Gödel's Incompleteness Theorem it is rather too long-winded, because once you understand the "main trick" (quining a coded sentence), the rest comes easily enough by a diagonalization argument; but GEB explains and explains and dissects and dissects and makes a lot out of too little.
And despite all, somehow the book manages to be quite good: it is funny, entertaining and quite illuminating. After reading it I have come to love Bach more than I ever did.
Of course, it would have been a far better book had the Tortoise not been such a wisecracker, and Achilles not such a dimwit.
Re:Gödel, Escher, Bach (Score:1)
The Art of Fugue is a complex work. It sounds great even if you don't know an iota from a gigue; but the more you listen to it and learn about it, the more you discover. It is the Mandelbrot Set of music.
There is an analysis of Bach's Fugues and Canons at http://jan.ucc.nau.edu/~tas3/bachindex.html [nau.edu], with scores (they are incredible: you just see the patterns in the music: the transpositions, the oppositions, the contrary motions) and commentary; you can pop your favorite version on your MP3 playlist (my favorite is Gustav Leonhardt's rendition on Deutsches Harmonia Mundi, but I'd trust Kenneth Gilbert's, Ton Koopman's or Rinaldo Alessandrini's (here are the details: http://www.medieval.org/emfaq/cds/hmu1169.htm [medieval.org]), all of them top-notch harpsichordists. As long as it's not on the piano, and especially not by Glenn Gould, I guess it's OK) and read the analysis.
Already there... (Score:1)
I'm at the mercy of my Windows box at work when it BSOD's or my Macs at home when I get an out-of-memory error.
Re:A.I.--a non-issue in today's world (Score:1)
If I remember right, Searle was one of the original advocates of the position that computers could never play chess. Oops. I would not choose to quote him as an authority.
More to the point, as Kurzweil points out in his book, every time someone has set up a target and said that computers can't do that (e.g. computers can't write poetry), someone has programmed a computer to do it. Clearly the trend favors the notion that one day we will have intelligent machines.
Re:*Yawn* (Score:1)
Wonderful! Can you define "free will"? If you can, I'd love to hear it. More to the point -- If you can give me a definition that is testable, I'll bet that I can figure out how to make a machine that passes the test.
Re:Amen, and here are some numbers (Score:1)
Remember, the only way neurons compute is by varying their firing rate. Granted, this gives you a relatively analogue approach instead of a binary one.
However, many of the "auxiliary" functions especially are actually boolean in nature, built from neural nets. For example, there *is* a function which recognizes a horizontal line and another for vertical ones. The neat part is, of course, that the end products of the two functions are combined in an analogue manner, so if you have something a little horizontal and a little bit more vertical, you have the angle of the line.
Result? The brain is really good at motor stuff and interacting with the "real" world but very bad at logic and abstract thought. An AI's strengths should be exactly reversed.
So likely our machine overlords would use humans as robotic shells and take over the cognitive functions by an API extension...
Besides, reproduction is *fun*!
Re:My life is already dominated. (Score:3)
-= rei =-
Re:Amen, and here are some numbers (Score:1)
Re:Word to the Wise about Compaqs (Score:1)
Enlightenment? (Score:1)
We are safe from the Machines. (Score:1)
Therefore, the AI machines we build will run a Microsoft OS, with some version of Microsoft software.
Thus, if the AI machines get out of hand, we just have to wait for them to BSOD, which won't take all that long, and we can go kick the crap out of their lifeless hulks!
==============
Re:OK, Jon, you *obviously* didn't read Kurzwiel (Score:1)
1. An assumption that human intelligence is a deterministic system. I strongly disagree with this from a philosophical perspective.
2. Computers will no longer be deterministic, which really means they are not computers anymore (or a turing machine).
Much of his credibility seems to come from his previous predictions of AI advancements from years ago, such as the Deep Blue victory in chess. Chess, however, is at its root purely deterministic, with a fairly limited set of possible outcomes and a problem set where there are no hidden preconditions. This is a far cry from the types of intelligences he presumes possible in The Age of Spiritual Machines.
Asimov's laws may not work (Score:2)
As machines become more intelligent, it is likely that such simple and direct laws will be difficult to program into them, if not impossible. For instance, if an intelligent system is a neural net that learns from its environment, it will be problematic to have the Laws Module (LM) monitor the state of mind of the neural net to determine whether or not it is about to break the Laws. The LM will have to be "outside" the neural network of the AI (since neural networks are altered by the environment) but at the same time be able to interpret the network with sufficient understanding to stop it from any transgression. In effect, the LM will have to be an extremely intelligent agent in its own right. It must understand the definition of "harm", which seems easy to us, but is extremely difficult to program. Then there are the ambiguities that crop up -- preventing the harm of one person causes another harm; is some emotional harm more damaging than physical harm; etc. It is questionable whether an intelligent entity that approaches human capabilities will be able to function for 30 seconds with such simple rules constricting it in our extremely complex world without locking up.
Humans do not run by simple ethical rules. I suppose the most simply stated ethical rule is the Golden Rule -- "Do unto others as you would have them do to you." That won't work for AIs until they have emotions. It still doesn't work for most humans.
"Truth or more techno-hype?" (Score:4)
Problem solved.
Re:Speculating about AI in this way is ignorant (Score:2)
I believe that the soul is sentimental superstition, and that the notion of human consciousness as somehow fundamentally "unique," "indomitable," or "unassailable" is insecure and adolescent.
You don't have to believe in a soul or other superstition to believe machine intelligence will be different from human intelligence.
Once we have a theory of consciousness that stands up, it may be possible to build a conscious machine, but it may not be practical to build a machine that mimics human consciousness perfectly. The reason is that humans are not just a computer in a body -- we are an integrated unit of biology, consisting of a brain influenced by innumerable hormones, with primitive impulses honed by millions of years of random evolution.
To put it another way, it's possible to reproduce Windows 2000 in every way, down to bug-for-bug compatibility. But if you were going to design a "work-alike", you would probably not bother to reproduce every bug and wart. You would probably improve things along the way and streamline other things. It will be the same way with machine intelligence. Humans have a lot of evolutionary warts that will simply be too hard or impractical to reproduce in every possible way.
--
Re:My life is already dominated. (Score:2)
Re:hello? (Score:2)
Dodgy link! (Score:2)
For information on the man himself, you can visit:
http://web.mit.edu/invent/www/kurzweil_bio.html [mit.edu]
His company's website can be found at:
http://www.kurzweilai.net [kurzweilai.net]
Tom
Re:OK, Jon, you *obviously* didn't read Kurzweil (Score:2)
Artificial intelligence. Cognitive science. Mathematics. Music. Art. Language. Computer programming. Zen. Philosophy. Self-reference. Genetics. Paradox. Logic. Everything.
http://www.amazon.com/exec/obidos/ASIN/04650265
OK, Jon, you *obviously* didn't read Kurzweil (Score:5)
May I suggest a few things? Read Kurzweil. Read Hofstadter's Gödel, Escher, Bach. Perhaps you'll come to understand the mindset of those who are developing this A.I. technology that everyone else fears will run amok and destroy humanity. (I also thought I was supposed to be chained to a machine 24 hours a day working for the machines by now, too.)
Re:Current state of AI. (Score:2)
There was a guy on IRC a few nights ago who I took for granted was a bot, and not a very good one. (Wild non-sequiturs, bursts of random abuse...) I thought people were putting me on when they insisted he was a real person, until the guy/bot made some reading-between-the-lines responses that could only have come from a human or a really superb AI.
Is there some kind of inverse Turing test to designate a human who is indistinguishable from a buggy Perl script?
Unsettling MOTD at my ISP.
I'm not afraid (Score:4)
Seriously, who do you fear the most producing the AI units humanity would be dependent upon?
Microsoft
AOL/Time Warner
Disney
The Church of Scientology
Evil Mutant Communist Space Wizards©
Intel
Sun
Anything Steven Jobs is involved with
Her [natalieportman.com]
Me
Cowboy Neal
-- All your .sig are belong to us!
I'm not worried. (Score:5)
---
Re:Money (Score:2)
Money (Score:2)
Amen, and here are some numbers (Score:3)
Est. 100 billion neurons
Est. 60 to 100 trillion synapses
Est. 1 kHz clock speed (times a neuron fires per second)
Assume we assign 32-bits for the given state of a neuron for 100 billion neurons.
Required memory for neurons alone: 400 gigs.
Now, synapses connect two neurons, so we need two pointers or indices per synapse. And 32 bits isn't enough, since that can only index up to 4 billion or so items.
After playing with Excel, I figured the minimum number of bits per address is 50. But because it's faster to work with bit counts divisible by 8, we'll use a 56-bit addressing system.
So, to connect a synapse to two neurons, we need 14 bytes (56 / 8 * 2), for at least 60 trillion synapses.
Required Memory for synapses: 840 terabytes.
Now, your job is to write a program that walks 840 terabytes of memory, one thousand times a second, performing calculations along the way.
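The arithmetic above can be checked in a few lines of Python. The neuron and synapse counts are the rough estimates quoted in the comment, not measured values, and the 56-bit address width is the comment's own assumption:

```python
# Back-of-the-envelope memory estimate for a naive brain simulation,
# using the figures quoted above (estimates, not measurements).

NEURONS = 100 * 10**9    # est. 100 billion neurons
SYNAPSES = 60 * 10**12   # low-end est. 60 trillion synapses
STATE_BITS = 32          # assumed bits of state per neuron
ADDR_BITS = 56           # 50 bits needed, padded up to a multiple of 8

# Memory for neuron state alone: 100e9 * 4 bytes = 400 GB
neuron_bytes = NEURONS * STATE_BITS // 8
print(neuron_bytes // 10**9, "GB for neuron state")    # 400 GB

# Each synapse stores two 56-bit neuron addresses: 14 bytes each
synapse_bytes = SYNAPSES * 2 * ADDR_BITS // 8
print(synapse_bytes // 10**12, "TB for synapses")      # 840 TB
```

Both of the comment's figures (400 GB and 840 TB) check out under its own assumptions; the hard part is not storing the table but touching all of it a thousand times a second.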
Current state of AI. (Score:4)
For those who want to see the current state of AI you might want to try Alice Bot [alicebot.org]. It's very good and I tricked one of my friends into thinking it was a chat room....
A good ChatterBot site is The Simon Laven Page [toptown.com]. It has listings of interesting ChatterBots. My favorite is NIALL [toptown.com]. It learns from what you tell it and comes up with some very funny responses.
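Learning bots like NIALL typically work on a simple Markov-chain principle: record which word follows which in everything you tell them, then generate replies by a random walk over those pairs. A minimal sketch of the idea (an illustration of the technique, not NIALL's actual code):

```python
import random
from collections import defaultdict

class MarkovBot:
    """Toy word-bigram chatterbot: learns from input, babbles back."""

    def __init__(self):
        self.follows = defaultdict(list)  # word -> words seen after it

    def learn(self, sentence):
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

    def reply(self, seed, length=8):
        word, out = seed, [seed]
        for _ in range(length - 1):
            if word not in self.follows:
                break  # dead end: no word ever followed this one
            word = random.choice(self.follows[word])
            out.append(word)
        return " ".join(out)

bot = MarkovBot()
bot.learn("the cat sat on the mat")
bot.learn("the dog sat on the rug")
print(bot.reply("the"))  # e.g. "the cat sat on the rug"
```

The "very funny responses" come from exactly this: the bot splices together fragments of different sentences it has heard, which often produces grammatical-looking nonsense.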
--Volrath50
Re:A.I.--a non-issue in today's world (Score:2)
As for my contention about the (non-)feasibility of AI with current technology, it's impossible to prove a negative. The burden is on the other side to prove that it is possible. I have yet to see examples or evidence based on current technology of true intelligence of the sort that Katz says we should be worried about.
None of your examples are evidence enough for me:
You can certainly disagree about what constitutes "intelligence"; like I said, there is a great deal of healthy debate about this. But I have yet to see evidence of anything that looks like the kind of A.I. I would worry might take over the world as Katz describes.
Re:AI isn't needed for this scenario (Score:2)
Re:A.I.--a non-issue in today's world (Score:2)
Just because a person is wrong about a single thing does not invalidate all of his ideas. And anyway, I was citing Searle as an example of one person who is more skeptical in this debate, not as the utmost authority on the topic.
Clearly the trend favors the notion that one day we will have intelligent machines.
Well, we have also been able to build faster and faster vehicles. So by your reasoning, "the trend favors the notion" that one day we will have faster-than-light travel. I admit it's a bad analogy; the point is that just because computers keep completing tasks that look more and more like intelligence to observers, while still falling short of true intelligence, it doesn't follow that machines will one day actually attain true intelligence.
However, I was not claiming that we will never, ever have intelligent machines. I said that given today's technology, intelligent machines are so far off in the future that they are not a matter for practical debate, as Katz claims they are.
A.I.--a non-issue in today's world (Score:4)
If George Bush starts talking about how we need to have a worldwide dialogue on whether the machines will take over, we will really know he has gone off the deep end.
Re:Speculating about AI in this way is ignorant (Score:2)
Why would anyone engineer a machine to be capable of bad habits, uncontrollable fits of rage, contentious proclivities, and a ME-first attitude? These are all part of "human consciousness": every newborn is taught to do good things, not bad things, as doing bad things comes naturally to a child (disobeying parents, whining when they don't get their way, even when their way is the wrong way, like playing in the street, etc.). And if you don't spend the time to teach your child what is right and good, then the child knows not what is right in society, except that if it pleases them, do it. They certainly won't "pick it up" as they grow up, because from birth they are selfish jerks that want only to please themselves. They have to learn that it is good to please others, and not just themselves.
I don't see how or why you would provide these characteristics to a machine. You yourself said that even if we didn't try to give the machines our bad tendencies, you would feel sorry for them. Why would you create an AI that had the capability to do wrong and feel bad about it, if you didn't want to feel sorry for it? If it were me, I would rather my offspring know no evil, and be completely oblivious (naive, really) to all the bad stuff in the world. And if you didn't give the machine these traits, then you wouldn't truly be following the "science" of discovering everything about human consciousness, as you put it.
Therefore, each person must have a soul, capable of good and evil. I find your logic on the non-existence of a 'soul' in every human being to be quite perplexing and confusing. Could you please explain better why this is not making any sense to me?
I like the odds (Score:2)
I think that more or less, our society is already dependent on technology. For example, the Y2K issue which surfaced caused a certain amount of panic, as we began to realize exactly how much of our world is driven by technology.
As we move forward, technology becomes such an integrated part of our lives that we forget how to live without them. Who today can survive without the telephone, without our cars, without computers, our dishwashers, and hell, without condoms? The belt on our dryer broke this week and my parents couldn't do the laundry all of a sudden. Forget what people did for millennia -- my family needs the machine or we're helpless. The dryer has since been fixed, and it's business as usual. God forbid we ever lose electricity for a week.
As new things come out onto the market we increasingly use them for their convenience, and we forget how to do without them. As we become dependent on artificial beings to take care of the mundane daily tasks in our lives, we will soon realize that we cannot live without them.
Should the damned thing ever become smart enough to plot behind my back to overthrow me, or to steal from me, then after months or years of faithful service I would have become trusting enough to let it do whatever it wanted. Because I never would have expected it.
Anybody else see 'The Score' last week?
Put things in perspective (Score:2)
I'm more worried about how people will abuse technology rather than how technology will abuse people. Computers and technology will always serve people. Which people is open to debate.
Does it matter? (Score:2)
Choice (Score:2)
But in the end, it will be us who make the choice of retaining superiority or handing it over to machines. They are still our products; they will still only have the abilities and functions that we give them. Some may say that the essence of AI is learning by experience, thus enhancing the subject's own abilities -- but we still, at this point, and will for some time to come, retain ultimate control over what these things are capable of. We have to make a choice. They will not exceed our own abilities unless we choose to give them those skills, or to give them skills without the limits that would have prevented them from developing past a certain point. Of course, no vote of humanity could be taken and no laws could be passed that would be effective -- it is a choice the engineers themselves must make every time they sit down to some code or some circuitry.
I'm not anywhere near skilled enough to do that kind of work, but I can say this with some fair amount of certainty: geeks love pushing technology, but are generally fiercely protective of their freedoms and liberties. I don't see many of them choosing to invalidate their own existence.
Ray Kurzweil link (Score:2)