A.I. and the Future

Ted Kaczynski predicts that humanity will easily drift into a position of such dependence on intelligent machines that it will ultimately have little choice but to accept all the machines' decisions. Steven Spielberg's vision is that we will unthinkingly create machines to try to replicate, replace or tend to human needs and emotions. MIT's Ray Kurzweil projects artificially intelligent machines evolving so rapidly in the early part of this century that they will ultimately fuse with biological beings. Many novelists and filmmakers share these dark visions. They see smart machines as inevitably replicating and surpassing human beings in longevity, endurance, intelligence and raw power. These machines will dominate us. Truth, or more techno-hype?

These guesses about the future are as good as yours or mine. But Spielberg's haunting and provocative movie A.I. has opened a window into human consciousness and the moral implications of artificial intelligence.

This window is unlikely to stay open very long. The next Monica Lewinsky scandal is always around the corner, ready to fuel the Big Media machine and distract the public. Given the short attention span of Americans in particular for scientific issues like this (genomics, copyright, intellectual property, fertility research, alleged global warming), it's worth beginning a discussion on A.I. now. Where is it going? Which vision of A.I. and the future do you think is closest to reality? Will machines make us increasingly dependent on them, as the Unabomber suggests? Will they take us over, as George Orwell believed?

Or, as M.I.T. computer scientist and artificial intelligence researcher Kurzweil suggests, will humans and machines -- especially miniaturized, increasingly powerful computing machines -- simply become an integral part of our bodies and lives? Kurzweil envisions the distinctions between these two "species" and entities (biological and digital) rapidly blurring.

It says a lot about our willingness to think seriously about technology that no national politician has ever addressed these issues in a meaningful way. But a murderous student of technology has:

Unabomber Kaczynski wrote in his infamous manifesto:

"As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to turn the machines off, because they will be so dependent on them that turning them off would amount to suicide."

Reading that excerpt, it occurred to me -- not for the first time -- "What a shame this demented creature chose to express himself through the maiming and murder of innocent people." Because he sure has a point.

Comments:
  • by Anonymous Coward
    I wish someone who actually knew something real about AI, someone who at the VERY least has a degree in CS and not tabloid journalism, would write something here to set the record straight. It's like Katz wrote this after a bad dream or something. Get a grip, man!
  • by Anonymous Coward
    What about an android sex doll (think Blade Runner's replicants) that resembles a 13-year-old girl? No children are harmed, used, or required to produce such an item. If you believe otherwise, please list who. Why should such a thing be illegal? Yet you know there will be baseless, "moral", and irrational cries to outlaw it. What if a place that exists today like Real Doll [realdoll.com], which makes life-sized sex dolls from steel and silicone that resemble humans in appearance, size, and weight, made a "lolita model"? Who is harmed?
  • by Anonymous Coward

    "I'll point to computer viruses as the hackneyed example: What could have been a terrifying constant danger has become no more than a nuisance. Why? Because there are very intelligent people working in the field, and there are such people because there is a demand for protection."

    And, um, because if you write a virus and release it, you'll most likely get caught and go to prison.

    Just a thought.

  • Man, I never knew Enlightenment and AI are so closely intertwined... Raster must be trying to take over the world!
  • I'd like to know where he got the number for the human brain's ability.

  • The difference is that every tool until now has only done something that a human could already do, only better. Physical labor? Lever, power tools, cranes. Traveling? Car, airplane, spaceship. Using our senses? Media and communication technologies. An AI will be the first tool capable of doing something it was not explicitly told to do by a human supervisor - something no tool has ever done before.

  • As emergent systems become more complex, "programming" their thoughts will become more and more difficult. Eventually, the three laws would have to be inserted via some form of psychological conditioning, the same way humans are given equivalent unbreakable rules.

    Of course there's nothing that stops the robot from learning how his own brain works and hacking the laws out of himself.

  • okay, so this was intended as a joke - but who GIVES A SHIT if the machines don't have feelings?

    I think it's really an academic question as to whether "intelligent machines" will arise, or whether they will manifest the phenomenon we refer to as consciousness.
    We already have machines that are starting to do things that make this question moot. The "music composing" machine that fooled the music experts. Deep Blue. Etc.
    It's already happening, and it's inevitable.

    Are these machines conscious? No, but it doesn't matter. To us. They are tools. Nothing more. Can an army tank be accidentally set to drive forward mindlessly without a human driver, and run over buildings and cars and cows? Yes, of course, and the same thing can and will happen with "intelligent machines" as well. It's inevitable. Especially as their workings become so complex as to be unpredictable, even to their designers. (hell, I'm supporting software that does that already).

    Is humanity in danger? Of course. We've been in danger of extinction from our unnatural tinkering from the first time Oog started a campfire near some dry grass.

    Why all the hemming and hawing about it now? Because it sells books, and tickets to movie theaters, and gets venture capital for companies working on "AI".

    I personally believe that natural (or supernatural) human consciousness will never be duplicated by a machine. Others believe that one day, we'll have the mastery over the physical universe that will permit that. Frankly, that's not an important question, because outwardly, "intelligent machines" will be indistinguishable from conscious beings long before we might actually reproduce human consciousness (if that were possible).

    Outwardly indistinguishable.

    Long before that, either we'll figure out adequate safeguards for such machines, or we'll be the victim of a stupid accident, and become another species on the very long list of extinct species. No biggie. It happens.
  • Humorous answer:

    Douglas Adams said that 20% of your brain was useful and the rest was made up of penguins. Therefore you only need 168TB, plus a small amount to represent the idea of a penguin. (That's in one of the Dirk Gently books.)

    Serious answer:

    You assume that in order for something to appear intelligent (in human terms) it needs to work in the same way as the human brain. This is not true. Although neural nets could allow us to understand more about the brain, they are not the only (and may not be the right) way of simulating intelligence.
  • You're missing a very important one, created by R. Giskard (I'm not sure of the English spelling) and taken over and exercised by R. Daneel Olivaw (it forces a rewrite of the 3 laws you mention):

    Zero: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
    One: A robot may not harm a human being, or, through inaction, allow a human being to come to harm, as long as this does not conflict with Law Zero.
    Two: A robot must obey all commands given to it by a human being, except when these conflict with the first law.
    Three: A robot must preserve itself at all times, unless doing so contradicts the first two laws.

    So, assuming this is programmed right, Law Zero could most certainly be invoked to kill as many humans as required for humanity to progress, and would most certainly allow an AI to take over humans in every aspect of their lives.

    The Foundation books by Asimov illustrate this quite clearly. The master puppeteer is an AI robot throughout thousands of years. For the good of humanity, of course.

    Karma karma karma karma karmeleon: it comes and goes, it comes and goes.
  • MIT's Ray Kurzweil projects artificially intelligent machines evolving so rapidly in the early part of this century that they will ultimately fuse with biological beings. Many novelists and filmmakers share these dark visions. They see smart machines as inevitably replicating and surpassing human beings in longevity, endurance, intelligence and raw power. These machines will dominate us. Truth, or more techno-hype?

    It's just hype, not even techno.

    Some AI researchers have made wildly optimistic predictions about how quickly AI would advance. They have been mostly wrong so far. We just barely got a computer program that can beat a person in chess - and that's by use of brute computational power and some very clever programming by people.

    We still don't really understand what this thing called "intelligence" really is. How do you expect to solve a problem without understanding it?

    The idea of "evolution" of machines is just a cop-out. We don't know how to create something, so we just put things in a room and hope they will create themselves - in only 50 years.

    Good grief! Biological evolution took several billion years, and it occurred as a massively parallel computation (this view is stolen from Stanislaw Lem).

    Just think. Programming computers is essentially engineering. To solve problems engineers need some science that explains how things work, otherwise they just hack (and sometimes create working systems). But without science the engineer is as good as an alchemist.

    Now consider that Newton figured out the science of how to fly to the moon in the 17th century, and it took engineers over 200 years to actually build a machine that could do it.

    Why do you think it's possible to create an "intelligent" machine, when we can't even agree on the definition of the problem...

    ...richie

  • It should have been created in 1982, IIRC - the same year that Susan Calvin was supposed to be born...
    --
  • Hum, you forgot Law Zero:

    0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.

    Also, did you see what happened to R. Daneel Olivaw? He becomes a megalomaniac, and in fact rules the whole galaxy and humankind, himself.
    --
  • To be honest... first, I don't think that in a true AI situation these rules would apply. And second, there will be people like my children, who may grow up to create an AI-driven robot... and who, after carefully thinking them out, threw away Asimov's rules as nothing better than a tool to enslave a sentient being.

    Asimov's notions of AI/robots come from an age where, if you weren't white and male, you didn't count, and the future would be full of "ATOMIC" things - so why the hell would a robot get any better a shot at sentient independence?

    For crying out loud, humans aren't going to last forever... Our best chance to pass on the "spirit of man" may be to eventually merge with the machines to increase longevity, and create an artificial life form that can evolve.
  • Or a Wand of Enlightenment. That'll work too.
  • All this stuff about leaving behind the bits controlling bodily functions and sex drive and so on is a bit naive, I think. I'm guessing that all of these messy sensory inputs and systems are necessary for intelligence and consciousness. I mean, if all you think intelligence is is a set of well-defined rules linking together a set of ideas, there's Cyc, which I don't think anybody considers intelligent.

  • I really wish these weren't called laws but rather suggestions, because now any robot (or movie) that doesn't conform to the "laws" gets criticized and tossed aside without merit. How many /. posts have been out there saying A.I. sucked because the robots didn't follow Asimov's laws?

    The general theme of the movie was how can a robot be created to act like a little boy. Human nature and being human (unfortunately) can occasionally contradict the rules above.

    But think about it - did David knowingly harm anyone? At the time the harmful actions were performed, he didn't realize his actions were harmful. Which reflected the intelligence of a little boy.

  • Judging from the other articles [slashdot.org] under this topic, it doesn't seem to be used very consistently.

    On the other hand, was anyone else jealous momentarily to see that Rasterman is going to get an EVA? If anyone can save Tokyo-3, it's him.

  • You forgot:

    • US Robots & Mechanical Men, Inc.
  • It's really nice to see a Katz story which is actually shorter than the Unabomber Manifesto. Way to go!


    Cheers,

  • I tried using Alice. I logged in as "Zzyzx." Alice asked me if that was from a movie. "No, it's a road in California." "Her" response started, "Why so negative?"

    Are we EVER going to move away from Eliza responses?
  • And then it got even worse. Next screen:

    me: what do you mean, Why so negative

    Alice: Zzyzx, "Is that from your favorite movie" What did I mean by it?

    Who exactly is going to be fooled by this?
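    For anyone who hasn't peeked under the hood, the trick being mocked here is ancient: match a keyword, reflect the user's own words back with the pronouns swapped. A minimal Eliza-style sketch in Python (the rules and canned replies are invented for illustration; real Eliza and Alice just carry vastly larger rule sets of the same shape):

    ```python
    import re

    # Minimal Eliza-style responder: keyword match plus pronoun reflection.
    # The patterns below are invented for illustration only.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}
    RULES = [
        (re.compile(r"i am (.*)", re.I), "Why are you {0}?"),
        (re.compile(r"i (?:want|need) (.*)", re.I), "What would {0} get you?"),
        (re.compile(r"(.*)", re.I), "Why so negative?"),  # catch-all non-sequitur
    ]

    def reflect(text: str) -> str:
        """Swap first/second-person words so the reply parrots the user."""
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def respond(line: str) -> str:
        for pattern, template in RULES:
            m = pattern.match(line.strip())
            if m:
                return template.format(*(reflect(g) for g in m.groups()))

    print(respond("I am stuck talking to a bot"))    # Why are you stuck talking to a bot?
    print(respond("Zzyzx is a road in California"))  # Why so negative?
    ```

    The second exchange is the whole problem in miniature: anything the rules don't anticipate falls through to a canned deflection, which is exactly the "Why so negative?" experience described above.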
  • Bit of a problem here. You seem to be assuming that the machines would have to be self-willed, inner-directed entities to "take over the world". Not so. Consider a small subset: Air-traffic controller.
    20 years ago this was a dominantly human directed process, though radar, radio, etc. were already totally necessary components. Incrementally over the years more has been automated. In the last couple of years steps have started towards having planes automatically dodge each other. They already pretty much navigate from known origin to known destination. Once planes keep track of each other, and landing is automated (probably technically doable today, but not yet acceptable) then we move towards the stage where the pilot is just there as an emergency backup, who probably wouldn't be able to do anything anyway (the instruments display via the computer, active flight depends on a computer to manipulate the wing controls, etc.). Maybe he'd be able to reboot it.

    This process proceeds without any intention on the part of the machines, but it causes the entire flight experience to be totally dominated by these same machines. No A.I. needed, beyond the ability to dodge, navigate, and land. (These are either already here, or just about.) And social acceptance, which is the sticky point.

    Watching drivers, I have frequently speculated how much safety would improve if computers were driving. Not really practical yet, except on specially prepared test tracks, but slow steps in that direction are visible, if you look for them. Again, a big part of the limitation is social acceptance. Without that, any automated car will be 100% responsible for any accident that occurs, regardless of the circumstances. So nobody works very hard on developing it. (And it is a tricky problem, no question!) But eventually it will be here. Then the computers will have taken over the cars ... and the insurance companies will make sure that they do! Again, no intention is needed on the part of the computers. This needs a lot more intelligence than the simple case of the airplane, but nothing that we would call sentience.


    Caution: Now approaching the (technological) singularity.
  • The lack of a desire to might stop that.

    So much of the argument is a projection of our own purposes onto a computer. This is a bit weird, as I find it quite difficult to get a computer to see things the way I do on purpose. To assume that it would happen by accident strains credulity.

    Caution: Now approaching the (technological) singularity.
  • Risk wasn't quite what I was talking about. We're at risk now. We might well be safer with, say, transport systems under robot control. But they would be under robot control.

    The question, as I understood it, was are we in danger of the machines taking over. The point I was making was that they were taking over. Danger is rather beside the point. So is sentience (though some intelligence is needed). And so is intention, on the part of the machines.

    As the machines take over more and more, their control systems will, inevitably, become more complex and inclusive. They may never be what we currently call sentient. But this may not matter. (Have you ever read "The Machine Stops" by E. M. Forster?)

    Caution: Now approaching the (technological) singularity.
  • I don't think that the problem is that the computers being used aren't fast enough or don't have enough RAM.

    I think the fundamental problem is that we don't know what intelligence is, and so are, understandably, finding simulating it quite difficult.

    People have been saying since the fifties that, in 10 or 20 years, we'll have sufficient computing power for a machine to become intelligent. Well, I have more computing power than those people could have dreamt of sitting under my desk right now. It didn't turn out to solve the problem.

    What AI research needs is plenty of "RI" to crank through the conceptual problems too, not just the biggest supercomputers money can buy.

    Cheers,

    Tim
  • As someone pointed out in the A.I. review, some of the questions of malicious machine intent can be solved by Asimov's Laws of Robotics:

    1. A robot must not harm, or through inaction allow to come to harm, a human being.
    2. A robot must obey all commands given to it by a human being except when these conflict with the first law.
    3. A robot must preserve itself at all times unless by doing so it contradicts the first two laws.

    A.I. didn't reflect this very well - David put humans at risk at several points. This gives an inaccurate, overly frightening picture of the intelligent machines we would likely create - it somewhat serves as FUD. "Oh no, the robots will only act in their own best interests, and we'll die."
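    As code, the Laws are just a precedence ordering of vetoes, and the ordering is the easy part. A toy Python sketch to make that concrete - the world model and all three predicates here are invented stubs, and the first one is the famously unsolved bit:

    ```python
    # Toy sketch: Asimov's Laws as a strict precedence ordering of vetoes.
    # The if/else ladder is trivial; every hard problem hides inside the
    # predicates. Defining "harm" is the part nobody knows how to program.

    def harms_human(action, world):
        return action in world.get("harmful", [])   # stub for the AI-complete part

    def violates_order(action, world):
        return action not in world.get("orders", [action])

    def endangers_self(action, world):
        return action in world.get("self_destructive", [])

    def permitted(action, world):
        if harms_human(action, world):       # First Law outranks everything
            return False
        if violates_order(action, world):    # Second Law, unless First Law objects
            return False
        if endangers_self(action, world):    # Third Law, unless Laws 1-2 object
            return False
        return True

    world = {"harmful": ["push the human"], "orders": ["fetch", "push the human"]}
    print(permitted("push the human", world))  # False: First Law vetoes an ordered action
    print(permitted("fetch", world))           # True
    ```

    Comments elsewhere in the thread make the same point: the hard part is not the precedence ladder but filling in those predicates for the real world.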
  • AI research is in such a poor state these days that I have little hope we will see the "Rise of AI" anytime soon. Look around: the biggest computers are not being used for AI research; they are being used for weather, atomic bombs, and other hairy math problems. I wonder what type of AI we might see if (I hate to say it) a few big Beowulf clusters were being used for AI research.
  • by Shotgun ( 30919 )
    But Spielberg's haunting and provocative movie A.I.

    Didn't Katz say it was a POS last week? Heh - Katz, reread last week's column before writing this week's!

    And to answer your question.

    Intersections used to have a policeman to direct traffic. Now we have automated lights. Ever tried to get around a big city when the lights go out? It's tough, but the policemen on the corners try to keep things going.

    Oh, and what do people do after a hurricane? They cook on grills, and live in tents.

    My point? People adjust along the path of least resistance. We rely on machines and automation because it is easier. If/when the machines die, we go back to doing things the old, hard way. It sucks for a while until we get used to it, but such is life. I feel an undertone of "we'll all die out after the machines are gone" in your column. Let me reply simply: "No, we'll just adapt and start inventing new machines."

  • ..just what does this have to do with Enlightenment?

    ----------
  • These fears are just that, fears. A recognition of the potential for harm, not a manifestation of said harm. So talking about potential problems is great, making legislation based on fear is not.

    If you want to truly 'protect' humanity from this technology, get a PhD in a relevant field and start doing serious research. Because that's the only place where you are going to be able to institute controls, at the development level. And by control I do not mean creating Ethics Police, I mean developing techniques that stop problems before they start, or at the very least, clean up minor messes before they become serious threats.

    I'll point to computer viruses as the hackneyed example: What could have been a terrifying constant danger has become no more than a nuisance. Why? Because there are very intelligent people working in the field, and there are such people because there is a demand for protection.

    Also, where does it say that humans and computers fusing is a bad thing? Explain to me what is so special about a given classification of elements. So what if my far future descendants make significant use of inorganic chemistry in their physiology? Big whoop. I'd still call them kin.

  • David Kaczynski, Ted's brother, was profiled in the Washington Post Magazine [washingtonpost.com] on Sunday.
  • I think so too. If you're into highly technical reasons for thinking that AI on algorithmic computers is misconceived, check out the excellent book "Shadows of the Mind" by Oxford mathematical physicist Roger Penrose.

    It was resoundingly bashed by a number of contributors to a special issue of the journal Psyche (which is on the web somewhere) - but Roger came back, undeterred, with a rebuttal to all of them. Very technical stuff.

  • Clearly the trend favors the notion that one day we will have intelligent machines.

    A trend per se does not an argument make.

    Anyway, computers still can't play Go very well compared to good human players.

  • by Hard_Code ( 49548 ) on Tuesday July 17, 2001 @10:51AM (#79359)
    The first artificial intelligences would probably need a lot less computation power than that. Not being real organisms, they wouldn't have to concern themselves with the biological systems we do (digestive, cardiovascular, endocrine, nervous, reproductive, limbic, movement/navigation, etc, etc).
  • You need to read Fluid Concepts and Creative Analogies by Douglas R. Hofstadter. It explains a few "experiments" he led, and other FARG programs. Tabletop is very interesting, and there's a recap on Copycat, if you already know about the project.

    Cheers!

  • I don't think machines will be making any moral or ethical decisions any time soon.

    That seems to me more like a point in favor of the machines...
  • What AI research needs is plenty of "RI" to crank through the conceptual problems too, not just the biggest supercomputers money can buy.

    I could not agree more. IMO, the most exciting research in AI right now is the work being done at MIT by Rodney Brooks [mit.edu] and his students and colleagues. Dr. Brooks is also keeping a close eye on progress in computational neuroscience and I expect a few conceptual breakthroughs to come from that sector in the not too distant future.

    The traditional AI community has convinced itself that AI will come gradually. They're in it for the long haul. I completely disagree with that assessment. I am convinced (as is Dr. Brooks) that there is something important that we are not seeing. Once we see it, AI will be upon us like a flood, almost overnight.
  • The basic problem with AI machines is lack of CPU power. To explain....

    The human brain has 100,000,000,000 neurons. Each neuron has an average of 1,000 connections to other neurons. It is probably most accurate to think of the connections as being the active components. The cycle time is 30ms.

    So you can do about 3e15 computations per second. The typical new desktop these days is about 3e9 instructions per second. So the gap is a factor of about 1,000,000. This gap should close in about 20-30 years, assuming Moore's law continues.

    I know this is rough but it does give the flavour of the problem.

    Current computers have the processing power of an insect brain, and they are mostly about that smart.

    I think computer scientists have done pretty well given the lack of CPU power that is available. A lot of things that computers have trouble with, such as vision processing, are handled in the brain by brute force - many neurons in parallel doing lots of simple repetitive things.

    Assuming Moore's law continues, we are going to see a dramatic closing of the gap between silicon and carbon based intelligence in our life times.

    AI is the field to be in over the next few years. Having the processing power is not enough. There will be many theoretical questions to answer before we can build truly smart machines.
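    The parent's arithmetic is easy to check. A back-of-the-envelope sketch in Python (every biological figure is the rough estimate quoted above, not a measured constant):

    ```python
    import math

    # Back-of-the-envelope check of the parent's figures (all rough estimates).
    neurons = 100e9          # ~10^11 neurons in a human brain
    connections = 1000       # average connections per neuron
    cycle_time = 0.030       # ~30 ms per "cycle", i.e. ~33 Hz

    brain_ops = neurons * connections / cycle_time  # connection-updates per second
    desktop_ops = 3e9        # ~3e9 instructions/s desktop, per the parent

    gap = brain_ops / desktop_ops
    print(f"brain: {brain_ops:.1e} ops/s, desktop: {desktop_ops:.1e} ops/s")
    print(f"gap: ~{gap:,.0f}x")

    # Moore's law doubling every ~18 months closes a gap of this size in
    # log2(gap) * 1.5 years: roughly 30 years, matching the 20-30 year claim.
    print(f"years to close at 18-month doubling: {math.log2(gap) * 1.5:.0f}")
    ```

    Running it reproduces the figures in the comment: about 3e15 versus 3e9, a gap of roughly a million, closing in roughly 30 years of steady doubling.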
  • First, shouldn't we be worried that Jon is quoting the Unabomber? Well, maybe not. He's probably just going to that place in Redmond. ;)

    Seriously, I don't think it is a 100% issue. There will always be degrees of acceptance - even in the techie world. I don't have a cellphone, nor do I want one, but I know fellow techies who have to have the latest version. Look at the Amish. For those of you unfamiliar with the Amish, they don't use technology at all (or in some cases very rarely). They use a horse and buggy to go places. This is the complete opposite of the person who has to have the latest technology to play with. A vast majority of people will fall in between these two extremes.

    I don't think we can call AIs a species - if that is what Jon was doing, because I am unsure of his intentions.

    I think there will be some who are ruled by machines, and there will be those who rule the few machines they have. Given the diversity in humanity, I think both of the cases Jon mentions will happen at the same time, to varying degrees. There are not a whole lot of times when something is 100%. I for one refuse to become a cyborg (or Borg, in Star Trek speak) with machines integrated into me.
  • There's no doubt about it: it's an Angel. Prepare to launch the Rasterman.

  • Could someone please explain to me what Godel, Escher, Bach is all about? I've heard about it, but I would like to know what the theme or main point of it is. Thanks.

    (hehehe)
  • I think you may have it backward. Your reasoning assumes that the brain works like a computer, and each neuron is a process that takes up a given amount of RAM and a given number of CPU cycles. I think you could equally claim that each neuron is itself a processor, so you have 100 billion processors running in parallel, rather than 100 billion processes tended to by one CPU. And even that is obviously wrong, since it doesn't take into account how adjacent firing neurons affect each other, how a memory seems to be stored holographically throughout the brain rather than sequentially, etc. If true A.I. (whatever that is) is possible at all, it isn't going to run on a Pentium or the like.
  • Well, I don't know about you, but the thought of Clippy with its big dumb eyes behind THE button....whew. Scary.
  • Not so much a link as an address. But I digress...

    You bothered to point out that the link was just the address....So, why didn't you post the link [kurzweilai.net] ?

    ----------------------------
  • "Ghost in the Machine"

    I don't think I'd like to have my body hacked.

    Thank you.
  • Except that in the case of David, those laws would defeat the purpose. The idea wasn't to create a machine that would serve and protect humans. The idea was to create a machine that would be a human. Why hard-code in something that would work contrary to the goal you were trying to reach?

    ---

  • But if you make a child that you can turn off and put away in the closet when you want to go out, or one that never does anything wrong and doesn't need any actual parenting, then it's just a toy, and no one could ever love a toy like a real child... that's why David had to replicate a child and its undying love for its mother.

    ---

  • "We should take care not to make the intellect our god; it has powerful muscles, but no personality." -Einstein

    Hm... we should take care not to make the muscle our god; it has a powerful personality, but no intellect.
  • "Machines are going to take over our lives." Isn't that what they said after the invention of the loom, the windmill, the clock, the printing press, and just about everything else?

    And yet, humans are still on top. We have not yet reached the point where the creation is greater than the creator, and it's unlikely that we ever will -- As machines take over certain mundane aspects of human life, we move on and put our time into other things.

  • They probably haven't taken over because we are an imperfect species, building imperfect tools. But that doesn't mean that the machines wouldn't try.

    Machines do not have free will.

  • Are you mixing up normal game-type AI with the real thing? AFAIK, computing AI on a normal computer would require some serious neural simulation and would need one hell of a computer. From what I know, most of the research is put into hardware design, not software. Even though hardware design does take computing power and memory, it doesn't need insane computers.
  • I don't think machines will be making any moral or ethical decisions any time soon. I think you tried to say that between the lines, but didn't outright say it.

    Money management, paperwork, etc. - these things are already half done by machines, and this will simply continue. However, a real (and unpredictable) breakthrough would be a machine that makes any kind of moral decision.

    -Jon

  • Why would you want AI for that? Obviously you would have to make it semi-intelligent, but when I think of AI I think of something that has consciousness. We don't need machines that can tell us "no" - otherwise you don't have much of a Robomaid (you'd have a woman hehehehe.... before you flame, I'm KIDDING)

    "One World, one Web, one Program" - Microsoft promotional ad

  • Why indeed. Your questions are interesting, although I think you conflate two important and distinct things: a capacity to suffer and a capacity to "do wrong." It is provably impossible to avoid the latter. Attempting to avoid "bad" actions is a logical black hole. Even if you had an adequately rigorous definition of "good" and "bad" - which is also impossible - you would have no way of acting on it without being omniscient.

    Speaking to the former point, however, you remind me a bit of David Pearce's thinking (which he calls the Hedonistic Imperative [hedweb.com]); if we can recreate consciousness, surely we can leave out the ability to suffer. I said to him, as I say to you, it may not be possible. And I mean, fundamentally, impossible. You are treading over interesting ground with respect to fundamental aspects of consciousness and subjective experience that we do not understand yet. If it were possible to systematically prevent suffering, however, I would tend to agree with him that by allowing suffering when we could choose not to, we would be cruel. Regardless, I am certain that we, as a people, would do it anyway.

    I define the notion of "soul" as the idea that there is some agency beyond the brain which is responsible for our consciousness, our decisions, or our identity. I would hold that this has nothing to do with "good" and "evil," a dichotomy which is arbitrary and based, as much as we have a species-wide consensus on the subject, on our instincts, our genetic heritage.

  • Unfortunately, I do not agree, exactly. I believe that what we consider to be "free choice" or "free will" is another issue on which a better understanding of the brain will prove revelatory. I will not go so far as to say that we do _not_ have free will - rather, I believe that we will discover that the question of whether we do or not is the wrong one.

    I will go farther to say that I believe our machine consciousnesses will do what we make them to do, just as we do what evolution requires of a successful species.

  • by DaveWood ( 101146 ) on Tuesday July 17, 2001 @08:19AM (#79382) Homepage
    I am one of the most secular and optimistic people I know when it comes to machine intelligence.

    I believe that the soul is sentimental superstition, and that the notion of human consciousness as somehow fundamentally "unique," "indomitable," or "unassailable" is insecure and adolescent. I have no doubt in my mind that we can and will make machines "in the likeness of a man's mind," and that these systems will, whether we grant it or not, be every bit as "human" in their thoughts as I am - they have my sympathy in advance.

    We will, of course, learn a great deal of very important and revolutionary things about ourselves along the way. I believe human consciousness, not genetics or space, is our next great frontier, and we may see revolutionary developments there in our lifetimes. Cognitive science is a remarkably well-funded academic discipline, and has been the subject of massive and relatively quiet investment for several decades.

    However, right now it's mired in very un-sexy pursuits, needling sea slugs and flies and mice, and we're still hammering away at nerve cell biology, chemistry, and physics. Pure theory of consciousness is pretty much at a standstill, after the great claims and great failures of the computer science-based AI folks, who showed pretty uniformly that, while they could do a lot of neat tricks, they had little fundamentally in common with the operation of human or animal intelligence, thereby at least giving us a slightly better definition of it.

    And, in the meantime, we have "luminaries" who love to sit around in masturbatory celebration of what the future will be like, although this has the feeling to me of a popular science magazine speculating about how we'll all travel around in air cars and eat food pills and vacation in space. It has nothing to do with the real implications of AI, and after the 100th or so run through the science media grinder, these tired old speculations are poor company whether they turn out to be true or not.

  • The amazing thing is that this really seems to work.

    I know.. It's modded up as funny, but I really wasn't kidding. Much.
    --
    PaxTech

  • by PaxTech ( 103481 ) on Tuesday July 17, 2001 @08:59AM (#79384) Homepage
    Yeah, the one good thing about those Packard Bells is that you could intimidate them into working..

    But try that with an IBM and you'll get nowhere... IBMs need to be cajoled or bribed into working. Just say loudly, "Well, I *was* going to double the memory on this machine, but since it won't boot..". Works every time.

    Compaqs, however, require a judicious application of percussive maintenance. They just won't listen to reason at all.

    Also, NEVER NEVER NEVER screw the case cover back on before testing the card / memory you just changed. This shows the machine that you lack humility, and it will of course refuse to work. Turn it on and test it, THEN replace the cover. This shows the machine the proper respect.
    --
    PaxTech
  • Percussive maintenance is dangerous on machines that keep the BIOS on a hard drive partition. Which is sort of like leaving the parachute in the aircraft hangar. Or, as the old Marine Corps cadence goes: "Running through the jungle with my M16 - I'm a dumb mother****er, I forgot my magazine." =)
  • by loki2eng ( 104976 ) on Tuesday July 17, 2001 @07:48AM (#79386)
    I'm a sysadmin. Duh. But let me tell you, Skippy, I bite those machines back HARD.
  • The amazing thing is that this really seems to work. Now, where I work, we have hundreds of Macs, and the best solution -- the best solution BY FAR -- is simply to walk around with an IBM ThinkPad under your arm (the one I carry literally does not even work) and mutter about how Windows is more stable and/or easier to use. I have never seen such low-maintenance computers in my life.

    (Tip: praising Steve Jobs also works well. The machines begin then to radiate a yellow light and you can hear angel-like voices coming quietly from the speakers, praising your name.)

  • It's about self-reference in art, language, logic and mathematics. Or about self-reference in language - any language that is structured enough. But as a book on the musicological aspects of the Art of the Fugue it is pretty dense, since you have to untangle the mathematical references that illuminate the structural aspects of Bach's opus. As a book on Escher's art it is rather slim, amounting to more or less twenty smallish black-and-white pictures. As a book on Gödel's Incompleteness Theorem it is rather too long-winded, because once you understand the "main trick" (quining a coded sentence), the rest comes easily enough by a diagonalization argument; but GEB explains and explains and dissects and dissects, and makes a lot out of too little.

    And despite all that, somehow the book manages to be quite good: it is funny, entertaining and quite illuminating. After reading it I have come to love Bach more than I ever did.

    Of course, it would have been a far better book had the Tortoise not been such a wisecracker, and Achilles not such a dimwit.

  • It is perhaps a little dense on the music front, although I love music anyway.

    The Art of Fugue is a complex work. It sounds great even if you don't know an iota from a gigue; but the more you listen to it and learn about it, the more you discover. It is the Mandelbrot Set of music.

    There is an analysis of Bach's Fugues and Canons at http://jan.ucc.nau.edu/~tas3/bachindex.html [nau.edu], with scores (they are incredible: you just see the patterns in the music: the transpositions, the oppositions, the contrary motions) and commentary; you can pop your favorite version on your MP3 playlist (my favorite is Gustav Leonhardt's rendition on Deutsche Harmonia Mundi, but I'd trust Kenneth Gilbert's, Ton Koopman's or Rinaldo Alessandrini's (here are the details: http://www.medieval.org/emfaq/cds/hmu1169.htm [medieval.org]), all of them top-notch harpsichordists. As long as it's not on the piano, and especially not by Glenn Gould, I guess it's OK) and read the analysis.

  • "Ted Kaczynski predicts that humanity will easily drift into a position of such dependence on intelligent machines that it will ultimately have little choice but to accept all the machines' decisions"

    I'm at the mercy of my Windows box at work when it BSOD's or my Macs at home when I get an out-of-memory error.
  • If I remember right, Searle was one of the original advocates of the position that computers could never play chess. Oops. I would not choose to quote him as an authority.

    More to the point, as Kurzweil points out in his book, every time someone has set up a target and said that computers can't do that (e.g. computers can't write poetry), someone has programmed a computer to do it. Clearly the trend favors the notion that one day we will have intelligent machines.

  • Wonderful! Can you define "free will"? If you can, I'd love to hear it. More to the point -- If you can give me a definition that is testable, I'll bet that I can figure out how to make a machine that passes the test.

  • The brain's a horribly inefficient jumble of neurons which gets by by throwing massive parallelism at any processing need.

    Remember, the only way neurons compute is by varying their firing rate. Granted, this gives you a relatively analogue approach instead of a binary one.

    However, many functions, especially "auxiliary" ones, are actually boolean in nature, built from neural nets. For example, there *is* a function which recognizes a horizontal line and another for vertical ones. The neat part is, of course, that the end products of the two functions are combined in an analogue manner, so if you have something a little horizontal and a little bit more vertical, you get the angle of the line.
    The result? The brain's really good at motor tasks and interacting with the "real" world, but very bad at logic and abstract thought. An AI's strengths should be exactly reversed.

    So likely our machine overlords would use humans as robotic shells and take over the cognitive functions by an API extension..

    Besides, reproduction is *fun*!
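    The horizontal/vertical example above is easy to make concrete: two boolean-ish detectors with scalar "firing rates", blended in an analogue way into an angle. A toy Python sketch (the rates are invented numbers, nothing neurally accurate):

    ```python
    import math

    # Toy version of the parent's example: two feature-detector outputs
    # (scalar "firing rates" in 0..1) combined into one analogue angle.
    def detect_angle(horizontal_rate: float, vertical_rate: float) -> float:
        """Blend two detector outputs into a line-angle estimate, in degrees."""
        return math.degrees(math.atan2(vertical_rate, horizontal_rate))

    print(detect_angle(1.0, 0.0))  # pure horizontal response -> 0 degrees
    print(detect_angle(0.0, 1.0))  # pure vertical response   -> 90 degrees
    print(detect_angle(0.7, 0.3))  # "a little more horizontal" -> ~23 degrees
    ```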
  • by Rei ( 128717 ) on Tuesday July 17, 2001 @08:29AM (#79394) Homepage
    I dunno... back when I was just getting serious with computers, I had a (*wince*) packard bell running windows 3.1 (and, yes, I'm a young'n). I always found that, if it gave me trouble, what always seemed to work was holding it up in the air, or even walking towards the window with it, and saying the words, "Overpriced Toaster Oven."

    -= rei =-
  • I'm a grad student in computational neuroscience, and I just want to point out that AI is not about simulating the brain. That's what we do. AI is about abstracting out the principles that underlie "intelligence" and programming them into a computer. Sort of a top-down approach. AI people will often use so-called "neural nets" and other optimization procedures, but do not confuse this with actually modeling the brain. Of course, all lines are blurry in this kind of work.

    That said, it's also entirely pointless to draw comparisons between computers and the brain at this level. This is partly because the physiological substrates of computation in the brain are not entirely understood even now (depending who you ask), but also because the brain does things in a way that's best for the brain, not most efficient for a digital computer. For example, if the visual system wants to do some sort of image processing in the spatial frequency domain, it effectively calculates a Fourier transform using millions of cells, each exhibiting some spatial frequency tuning. To actually model this process with reasonable biophysical accuracy would bring any supercomputer to its knees, yet functionally equivalent calculations could be carried out on a fast PC.

    The point is, making computers fast enough to model a whole brain accurately in real time will probably not happen during our lifetimes. However, if one could figure out what the different parts of the brain were doing and perform functionally equivalent computations in silicon, we could do it. That's the goal of AI, not realistic modeling of the nervous system.
  • I recently acquired an old Compaq NT workstation (Pentium 100), and I went to change the BIOS settings and it was all messed up. I put in a different hard drive, and little did I know that I had to download the BIOS and put it on a floppy disk. Talk about a pain in the ass. Luckily, Compaq tech support didn't mind my emails asking for help with this, even though I didn't have a warranty or anything because someone had just given me the machine.
  • Uh, what does this article have to do with the linux desktop Enlightenment?
  • Think about it... Microsoft has a monopoly on OSes and some types of software. This does not look like it will change any time soon (sorry Linux folks =)

    Therefore, the AI machines we build will run a Microsoft OS, and with some version of Microsoft software.

    Thus, if the AI machines get out of hand, we just have to wait for them to BSOD, which won't take all that long, and we can go kick the crap out of their lifeless hulks!

    ==============

  • I have trouble buying into Kurzweil's vision. In order for his ideas to be possible, one of two things must be true:

    1. Human intelligence is a deterministic system. I strongly disagree with this from a philosophical perspective.

    2. Computers will no longer be deterministic, which really means they are not computers (or Turing machines) anymore.

    Much of his credibility seems to come from his past predictions of AI advancements, such as the Deep Blue victory in chess. Chess, however, is at its root purely deterministic, with a fairly limited set of possible outcomes and a problem set with no hidden preconditions. This is a far cry from the types of intelligences he presumes possible in The Age of Spiritual Machines.

  • As machines become more intelligent, it is likely that such simple and direct laws will be difficult to program into them, if not impossible. For instance, if an intelligent system is a neural net that learns from its environment, it will be problematic to have the Laws Module (LM) monitor the state of mind of the neural net to determine whether or not it is about to break the Laws. The LM will have to be "outside" the neural network of the AI (since neural networks are altered by the environment) but at the same time be able to interpret the network with sufficient understanding to stop it from any transgression. In effect, the LM will have to be an extremely intelligent agent in its own right. It must understand the definition of "harm", which seems easy to us but is extremely difficult to program. Then there are the ambiguities that crop up -- preventing harm to one person causes another person harm -- is some emotional harm more damaging than physical harm -- etc. It is questionable whether an intelligent entity that approaches human capabilities would be able to function for 30 seconds in our extremely complex world with such simple rules constricting it, without locking up.

    Humans do not run by simple ethical rules. I suppose the most simply stated ethical rule is the Golden Rule -- "Do unto others as you would have them do to you." That won't work for AIs until they have emotions. It still doesn't work for most humans.

  • by The Gline ( 173269 ) on Tuesday July 17, 2001 @08:06AM (#79414) Homepage
    Techno-hype.

    Problem solved.

  • I believe that the soul is sentimental superstition, and that the notion of human consciousness as somehow fundamentally "unique," "indomitable," or "unassailable" is insecure and adolescent.

    You don't have to believe in a soul or other superstition to believe machine intelligence will be different from human intelligence.

    Once we have a theory of consciousness that stands up, it may be possible to build a conscious machine, but it may not be practical to build a machine that mimics human consciousness perfectly. The reason is that humans are not just a computer in a body -- we are an integrated unit of biology, consisting of a brain influenced by innumerable hormones, with primitive impulses honed by millions of years of random evolution.

    To put it another way, it may be possible to reproduce Windows 2000 in every way, down to bug-for-bug compatibility. But if you were going to design a "work-alike", you would probably not bother to reproduce every bug and wart. You would probably improve some things along the way, and streamline others. It will be the same way with machine intelligence. Humans have a lot of evolutionary warts that will simply be too hard or impractical to reproduce in every possible way.


    --

  • The problem is that since they don't have feelings (or the ability to feel), they don't care if you bite ;)
  • I fear the day when IDA* search takes over the world! Aaaaaaaaaaaaah! It's complete! It's optimal! It's powerful!
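    (For anyone who hasn't met the monster: IDA* is just depth-first search with an f = g + h cutoff that ratchets upward each iteration, which is what makes it complete and optimal, given an admissible heuristic, in very little memory. A bare-bones Python sketch on a made-up toy graph:)

    ```python
    import math

    # Bare-bones IDA*: depth-first search bounded by f = g + h, with the
    # bound raised to the smallest overflow each iteration. Complete and
    # optimal with an admissible heuristic -- hence the panic above.
    def ida_star(start, goal, neighbors, h):
        def search(path, g, bound):
            node = path[-1]
            f = g + h(node)
            if f > bound:
                return f                   # overflow: candidate for the next bound
            if node == goal:
                return path                # found an optimal path
            minimum = math.inf
            for nxt, cost in neighbors(node):
                if nxt in path:            # avoid cycles
                    continue
                result = search(path + [nxt], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
            return minimum

        bound = h(start)
        while True:
            result = search([start], 0, bound)
            if isinstance(result, list):
                return result
            if result == math.inf:
                return None                # no path exists
            bound = result                 # ratchet the cutoff upward

    # Made-up toy graph for illustration:
    graph = {"A": [("B", 1), ("C", 3)], "B": [("C", 1)], "C": []}
    print(ida_star("A", "C", lambda n: graph[n], lambda n: 0))  # ['A', 'B', 'C']
    ```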
  • The article's link to Ray Kurzweil is dodgy.

    For information on the man himself, you can visit:
    http://web.mit.edu/invent/www/kurzweil_bio.html [mit.edu]

    His company's website can be found at:
    http://www.kurzweilai.net [kurzweilai.net]

    Tom

  • 1: Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter
    Artificial intelligence. Cognitive science. Mathematics. Music. Art. Language. Computer programming. Zen. Philosophy. Self-reference. Genetics. Paradox. Logic. Everything.

    http://www.amazon.com/exec/obidos/ASIN/0465026567/singinst
  • by 11223 ( 201561 ) on Tuesday July 17, 2001 @08:09AM (#79431)
    Kurzweil's The Age of Spiritual Machines was not a dark vision, nor was it intended to be. It was intended to be an inspiring vision of what we can do with technology if we choose to do so. This alone damages your credibility when speaking on the topic.

    May I suggest a few things? Read Kurzweil. Read Hofstadter's Gödel, Escher, Bach. Perhaps you'll come to understand the mindset of those who are developing this A.I. technology that everyone else fears will run amok and destroy humanity. (I also thought I was supposed to be chained to a machine 24 hours a day working for the machines by now, too.)

  • It's very good and I tricked one of my friends into thinking it was a chat room....

    There was a guy on IRC a few nights ago who I took for granted was a bot, and not a very good one. (Wild non-sequiturs, bursts of random abuse...) I thought people were putting me on when they insisted he was a real person, until the guy/bot made some reading-between-the-lines responses that could only have come from a human or a really superb AI.

    Is there some kind of inverse Turing test to designate a human who is indistinguishable from a buggy Perl script?

    Unsettling MOTD at my ISP.

  • by ackthpt ( 218170 ) on Tuesday July 17, 2001 @07:54AM (#79448) Homepage Journal
    The 2nd amendment grants me the right to be armed. So I've got a screwdriver and I'm not afraid to use it.

    Seriously, who do you fear most producing the AI units humanity would be dependent upon?

    Microsoft

    AOL/Time Warner

    Disney

    The Church of Scientology

    Evil Mutant Communist Space Wizards©

    Intel

    Sun

    Anything Steven Jobs is involved with

    Her [natalieportman.com]

    Me

    Cowboy Neal

    --
    All your .sig are belong to us!

  • by MWoody ( 222806 ) on Tuesday July 17, 2001 @08:13AM (#79451)
    As long as the dominant operating system is a Microsoft product, I have no worries about "smart" machines.
    ---
  • I disagree. Intelligence is the ability to figure out a task. You want an intelligent robo-maid. Intelligence can exist without sentience. They are not the same thing.
  • I'll make this short and simple. If it makes money, it'll happen. If it doesn't, it won't happen *for long*.
  • by JohnDenver ( 246743 ) on Tuesday July 17, 2001 @10:21AM (#79459) Homepage
    For those of you who want to understand how much CPU power and memory it would take to simulate a human brain, here are some figures.

    Est. 100 billion neurons
    Est. 60 to 100 trillion synapses
    Est. 1 kHz clock speed (times a neuron fires per second)


    Assume we assign 32 bits for the state of each neuron, for 100 billion neurons.

    Required memory for the neurons alone: 400 GB.

    Now, each synapse connects two neurons, so we need 2 pointers or indexes per synapse. 32 bits isn't enough, as it can only index up to 4 billion or so items.

    After playing with Excel, I figured the minimum number of bits per address we need is 50. But because it's faster to work with bit widths divisible by 8, we'll use a 56-bit addressing system.

    So, to connect a synapse to two neurons, we need 14 (56 / 8 * 2) bytes, for at least 60 trillion synapses.

    Required memory for the synapses: 840 terabytes.

    Now, your job is to write a program that enumerates 840 terabytes of memory one thousand times a second, performing calculations along the way.
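    The totals above are easy to reproduce. A quick Python sketch (it simply follows the parent's choice of a 56-bit pointer width; all the biological counts are rough estimates, not measurements):

    ```python
    # Reproducing the parent's estimates (all figures are rough).
    neurons = 100e9            # est. neurons
    synapses = 60e12           # est. synapses (low end of 60-100 trillion)

    state_bytes = 4            # 32-bit state per neuron
    neuron_mem = neurons * state_bytes       # -> 400 GB

    addr_bits = 56             # parent's choice of pointer width
    synapse_bytes = 2 * addr_bits // 8       # two neuron pointers -> 14 bytes
    synapse_mem = synapses * synapse_bytes   # -> 840 TB

    print(f"neuron state:  {neuron_mem / 1e9:.0f} GB")
    print(f"synapse table: {synapse_mem / 1e12:.0f} TB")

    # And the update loop has to sweep all of it ~1000 times a second:
    print(f"bandwidth needed: ~{synapse_mem * 1000 / 1e15:.0f} PB/s")
    ```

    That last line is the real punchline: even before any "thinking", just touching the synapse table at 1 kHz demands on the order of hundreds of petabytes per second of memory bandwidth.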
  • by V50 ( 248015 ) on Tuesday July 17, 2001 @08:01AM (#79461) Journal

    For those who want to see the current state of AI you might want to try Alice Bot [alicebot.org]. It's very good and I tricked one of my friends into thinking it was a chat room....

    A good ChatterBot site is The Simon Laven Page [toptown.com]. It has listings of interesting ChatterBots. My favorite is NIALL [toptown.com]. It learns from what you tell it and comes up with some very funny responses.


    --Volrath50

  • Of course you need not take my word for it. For some of the debate, consult the writings of people like John Searle. [berkeley.edu]

    As for my contention about the (non-)feasibility of AI with current technology, it's impossible to prove a negative. The burden is on the other side to prove that it is possible. I have yet to see examples or evidence based on current technology of true intelligence of the sort that Katz says we should be worried about.

    None of your examples are evidence enough for me:

    • chess--chess playing programs work in a single context (all they can do is play chess) and they work by heavy number-crunching and calculations. They don't understand the game, aspire to anything outside the game, or realize they are even playing a game.
    • speech recognition, translation, summarizing business documents--these are basically computational/algorithmic parlor tricks. There is no understanding. MS Word doesn't understand the business document. It just calculates which words are the most frequent etc.
    • conjecture/prove mathematical theorems--I'm not familiar enough with this field to comment on this one
    • promising projects that might bear fruit--Things that "might" work are a far cry from things that "do work". At the moment, these projects, while very interesting, don't demonstrate what I would call intelligence, nor has any of them produced some leap that I think might eventually lead to intelligence.

    You can certainly disagree about what constitutes "intelligence"; like I said, there is a great deal of healthy debate about this. But I have yet to see evidence of anything that looks like the kind of A.I. I would worry might take over the world as Katz describes.

  • You're exactly right that technology need not be "intelligent" to pose a risk to society. But Katz was not suggesting we need to have a debate about whether the machines we rely on are too prone to abuse, failure, etc. He was specifically proposing that society should debate the implications of artificially created intelligence.
  • If I remember right, Searle was one of the original advocates of the position that computers could never play chess. Oops. I would not choose to quote him as an authority.

    Just because a person is wrong about a single thing does not invalidate all of his ideas. And anyway, I was citing Searle as an example of one person who is more skeptical in this debate, not as the utmost authority on the topic.

    Clearly the trend favors the notion that one day we will have intelligent machines.

    Well, we have also been able to build faster and faster vehicles. So by your reasoning, "the trend favors the notion" that one day we will have faster-than-light travel. I admit it's a bad analogy; what I'm trying to say is that the fact that computers keep completing tasks that appear more and more like intelligence to observers, while still falling short of true intelligence, doesn't mean that one day machines will actually attain it.

    However, I was not claiming that we will never, ever have intelligent machines. I said that given today's technology, intelligent machines are so far off in the future that they are not a matter for practical debate, as Katz claims they are.

  • by regexp ( 302904 ) on Tuesday July 17, 2001 @08:03AM (#79474)
    A.I. experts, cognitive scientists, etc. still disagree about whether it would be possible to create true "intelligence" using even a super-advanced form of any technology that is feasible today (specifically, digital technology). Even if it is possible, it is so far from reality today that A.I. is still a more suitable subject for science fiction and parlor conversation than for political debate.

    If George Bush starts talking about how we need to have a worldwide dialogue on whether the machines will take over, we will really know he has gone off the deep end.

  • My question to you would then be:

    Why would anyone engineer a machine to be capable of bad habits, uncontrollable fits of rage, contentious proclivities, and a me-first attitude? These are all part of 'human consciousness': every newborn has to be taught to do good things, not bad things, as doing bad things comes naturally to a child (disobeying parents, whining when they don't get their way, even when their way is the wrong way, like playing in the street, etc.). And if you don't spend the time to teach your child what is right and good, then the child knows not what is right in society, except that if it pleases them, do it. They certainly won't 'pick it up' as they grow up, because from birth they are selfish jerks who want only to please themselves. They have to learn that it is good to please others, and not just themselves.

    I don't see how or why you would provide these characteristics to a machine. You yourself said that even if we didn't try to give the machines our bad tendencies, you would feel sorry for them. Why would you create an AI that had the capability to do wrong and feel bad about it, if you didn't want to feel sorry for it? If it were me, I would rather my offspring know no evil, and be completely oblivious (naive, really) to all the bad stuff in the world. And if you didn't give the machine these traits, then you wouldn't truly be following the 'science' of discovering everything about human consciousness, as you put it.

    Therefore, each person must have a soul, capable of good and evil. I find your logic on the non-existence of a 'soul' in every human being to be quite perplexing and confusing. Could you please explain better why this is not making any sense to me?

  • Hmm... 50/50 odds... not bad. I'll say... Truth?

    I think that more or less, our society is already dependent on technology. For example, the Y2K issue, when it surfaced, caused a certain amount of panic, as we began to realize exactly how much of our world is driven by technology.

    As we move forward, technology becomes such an integrated part of our lives that we forget how to live without it. Who today can survive without the telephone, without our cars, without computers, our dishwashers, and hell, without condoms? The belt on our dryer broke this week and my parents couldn't do the laundry all of a sudden. Forget what people did for millennia -- my family needs the machine or we're helpless. The dryer has since been fixed, and it's business as usual. God forbid we ever lose electricity for a week.

    As new things come out onto the market we increasingly use them for their convenience, and we forget how to do without them. As we become dependent on artificial beings to take care of the mundane daily tasks in our lives, we will soon realize that we cannot live without them.

    Should the damned thing ever become smart enough to plot to overthrow me behind my back, or to steal from me - well, after months or years of faithful service, I would have been trusting enough to let it do whatever it wanted, because I never would have expected it.

    Anybody else see 'The Score' last week?
  • I'm more worried about how people will abuse technology rather than how technology will abuse people. Computers and technology will always serve people. Which people is open to debate.

  • When we try to convert our Napster .nap files to MP3s, call the new app KReader, and have Adobe arrest us for breaking the DMCA after we've given our talk about our app at DefCon, I think oppression by machines would be a welcome replacement.
  • Yes, these 'thinking machines' will develop quickly. Yes we will rely on them for many many things. The new IBM HelpDesk system is evidence of that. Yes, many dark futures may await us.

    But in the end, it will be us who make the choice of retaining superiority or handing it over to machines. They are still our products; they will still have only the abilities and functions that we give them. Some may say that the essence of AI is learning by experience, thus enhancing the subject's own abilities - but at this point, and for some time to come, we still retain ultimate control over what these things are capable of. We have to make a choice. They will not exceed our own abilities unless we choose either to give them those skills, or to give them skills without the limits that would have kept them from developing past a certain point. Of course, no vote of humanity could be taken and no laws could be passed that would be effective - it is a choice the engineers themselves must make every time they sit down to some code or some circuitry.

    I'm not anywhere near skilled enough to do that kind of work, but I can say this with some fair amount of certainty: geeks love pushing technology, but are generally fiercely protective of their freedoms and liberties. I don't see many of them choosing to invalidate their own existence.
  • Working Ray Kurzweil link: http://www.kurzweilai.net/
