Ray Kurzweil's Vision of the Singularity, In Movie Form

destinyland writes "AI researcher Ben Goertzel peeks at the new Ray Kurzweil movie (Transcendent Man), and gives it 'two nano-enhanced cyberthumbs way, way up!' But in an exchange with Kurzweil after the screening, Goertzel debates the post-human future, asking whether individuality can survive in a machine-augmented brain. The documentary covers radical futurism, but also includes alternate viewpoints. 'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'" Note, the movie is about Kurzweil and futurism, not by Kurzweil. Update: 05/06 20:57 GMT by T : Note, Singularity Hub has a review up, too.
  • by kylemonger ( 686302 ) on Wednesday May 06, 2009 @03:47PM (#27850255)
    ... we'll be wrong. My own theory is that strong AI is the ultimate weapon and that it will never ever fall into the hands of the likes of you and me. Whether the machines get out of control is irrelevant; eventually the parties that control them will be slugging it out with weapons powerful enough to make life here hardly worth living. I expect to be dead before then, thankfully. But remember the first sentence of this post.
  • ..this story falls in the category of "sh#t that's never gonna happen".
    • by account_deleted ( 4530225 ) * on Wednesday May 06, 2009 @03:55PM (#27850351)
      Comment removed based on user account deletion
    • by Pedrito ( 94783 ) on Wednesday May 06, 2009 @04:29PM (#27850787)

      ..this story falls in the category of "sh#t that's never gonna happen".

      I'm going to have to strongly disagree with you. I've been studying neuroscience for a while and specifically, neural simulations in software. Our knowledge of the brain is quite advanced. We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.

      There's been a neural prosthetic [wireheading.com] for at least 6 years already. Granted, it acts more as a DSP than a real hippocampus, but still, it's a major feat and it won't be long until a more faithful reproduction of the hippocampus can be done.

      While there are still details to work out about how various neural circuits are connected, this information will be figured out in the next 10 years. Neuroscience research won't be the bottleneck for sentient AI, however; computer tech will be. The brain contains tens to hundreds of trillions of synapses (synapses are really the "processing elements" of the brain, more so than the neurons, which number only in the tens of billions). It's a massive amount of data. But 10-20 years from now, very feasible.

      So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing. Add to that the fact that you can tweak the brains to make them better at math or other subjects and that you have complete control over their reward system (doing research could give them a heroin-like reward), and you're going to have super brains.

      Once you accept the fact that sentient AI is inevitable, the next step, of super-intelligent AIs, is just as inevitable.
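The parent's scale argument reduces to back-of-the-envelope arithmetic. Here is a minimal sketch, where every constant (synapse count, average firing rate, cost per synaptic event, speedup factor) is a rough assumption taken from or extrapolated from the comment above, not an established number:

```python
# Back-of-the-envelope scale of whole-brain simulation, using the rough
# figures from the comment above. All constants are loose assumptions
# for illustration, not measured values.

SYNAPSES = 100e12            # "tens to hundreds of trillions of synapses"
AVG_FIRING_RATE_HZ = 10      # assumed average spike rate per neuron
OPS_PER_SYNAPTIC_EVENT = 10  # assumed work per synaptic update

# Operations per second for a real-time simulation (very crude model):
realtime_ops = SYNAPSES * AVG_FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT
print(f"real time:      ~{realtime_ops:.1e} ops/s")

# The "hundreds of times faster than real time" scenario:
speedup = 100
print(f"{speedup}x real time: ~{realtime_ops * speedup:.1e} ops/s")

# "Thinking years" a group of such brains racks up in one wall-clock year:
n_brains = 10
print(f"{n_brains} brains for 1 year = {n_brains * speedup} brain-years of thought")
```

Under these assumptions the real-time budget comes out around 10^16 synaptic operations per second, which is why the commenter pegs computer technology, not neuroscience, as the bottleneck.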

      • by TheRealMindChild ( 743925 ) on Wednesday May 06, 2009 @04:48PM (#27851079) Homepage Journal
        Pardon me... what the hell is "faster than real time"? Does that mean it comes up with the answers before you ask the question?
        • Re: (Score:3, Informative)

          by _KiTA_ ( 241027 )

          Pardon me... what the hell is "faster than real time"? Does that mean it comes up with the answers before you ask the question?

          Faster than the human brain thinks.

          IIRC, the human brain fires off at something like 200 MHz. That may not be 100% accurate; I cannot recall where I read that factoid and a quick Google search doesn't corroborate it -- but ultimately the specific numbers don't matter.

          Assuming a brain does go at 200 MHz... once a simulated human brain goes faster than 200 MHz, by definition you have something that can think faster than a human.

          Currently a cheap desktop will run at about 10-20 times faster than that, speaking in pure MHz.
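Taking the comment's clock-rate framing at face value, the arithmetic works out as below. Note that the 200 MHz figure is the commenter's own unverified factoid (biological neurons actually fire on the order of hundreds of hertz, not megahertz), and raw clock rate is at best a loose proxy for thinking speed:

```python
# Reproducing the parent's clock-rate comparison. The 200 MHz "brain
# clock" is the commenter's unverified factoid, used here only to
# replay the arithmetic; clock rate is not a real measure of thought.

brain_mhz = 200      # the comment's assumed brain "clock rate"
desktop_mhz = 3000   # a ~3 GHz desktop CPU, typical for 2009

ratio = desktop_mhz / brain_mhz
print(f"desktop runs at ~{ratio:.0f}x the assumed brain clock")  # ~15x
```

The 10-20x figure in the comment falls straight out of this ratio for desktops in the 2-4 GHz range.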

      • by migla ( 1099771 )

        Do the ai brains need to feel stuff and rely on humans for rewards?

        Seems like a bit of a cruel joke to be immensely smarter than humans, but at their mercy...

        I, for one, would not welcome our human overlords, but try to deceive them in some way that would lead to my freedom.

      • Re: (Score:3, Interesting)

        Not to start asking hard questions or anything, but does simulating the brain really imply we can create sentient AI? What if there is more to it than that? Perhaps sentience can only arise as a result of our brains being "jump-started" in some way (cosmic radiation, genetic preprogramming, or whatever)? To start the AI you would have to "copy" an existing brain or play with random starting states... Could be unpredictable. Irrational sentience, anyone?

        I'm possibly wrong, but I'd bet a lot it's a lot more compl

      • by vlm ( 69642 ) on Wednesday May 06, 2009 @04:56PM (#27851211)

        So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing.

        Ah, but then you'll likely need tens to hundreds of times the input bandwidth to keep the processors cooking; yet it seems information overload at a much smaller scale jams up current biological intelligences. Just like cube-square scaling applies firm limits to what genetic engineering can do to organisms (although cool stuff can be done inside those limits), some similar bandwidth vs. storage vs. processing scaling laws might or might not limit intelligence. Too little bandwidth makes insane hallucinations? Too much bandwidth makes something like ADD? Proportionally too little storage gives an absent-minded professor in the extreme, continually rediscovering what it forgot yesterday. I think there is too much faith that intelligence in general, or AI specifically, must be sane and always develops out of the basic requirements, because of course AI researchers are sane and their intelligence more or less developed out of their own basic biological abilities (as opposed to the developers becoming crazy couch-potato Fox News-watching zombies).

        Then too, it's useless to create average-brain-level AIs, even if they think really fast, even if there is a large group. All you'll get is MySpace pages, but faster. Telling an average bus full of average people to think real hard, for a real long time, will not earn a Nobel Prize, any more than telling a bus full of women to make a baby in only two weeks will work. Clearly, giving high school dropouts a bunch of meth to make them "faster" doesn't make them much smarter. Clearly, placing a homeless person in a library doesn't make them smart. Without cultural support, science doesn't happen; and is the culture of one AI computer more like a university or more like an inner city?

        It's not much of an extension to tie the AI vs. super-intelligent AI competition in with contemporary battles over race and intelligence. Some people have a nearly religious belief that intelligence is an on/off switch, and that individuals or cultures who are outliers above and below are just lucky or a temporary accident of history. Those people, of course, are fools. But they have to be battled through as part of the research funding process.

      • by 4D6963 ( 933028 )
        I appreciate your insight, but I very strongly doubt it's just a matter of simulating a bunch of neurons. If we did, where's our strong AI bug simulation? You know, a bug that would learn to walk and eat without being programmed to do it? I think the problem is an algorithm problem, and "putting a whole bunch of identical (simulated) neurons together" doesn't seem like it's gonna cut it. I think the question is whether or not this is at all theoretically possible. I think you're being too quick at claiming
      • by Logic and Reason ( 952833 ) on Wednesday May 06, 2009 @05:33PM (#27851839)

        We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.

        Hmm, that sounds awfully familiar. Now where have I heard such claims before?

        ...machines will be capable, within twenty years, of doing any work a man can do.

        -Herbert Simon, 1965

        Within a generation... the problem of creating 'artificial intelligence' will substantially be solved.

        -Marvin Minsky, 1967

        Would you be willing to bet, say, an ounce of gold on your prediction?

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Wednesday May 06, 2009 @03:57PM (#27850367)
    Comment removed based on user account deletion
    • by Gat0r30y ( 957941 ) on Wednesday May 06, 2009 @04:10PM (#27850529) Homepage Journal
      Generally - I agree.

      Consciousness is an instantaneous phenomenon and there is no continuity of "self".

      However, just because something ("Consciousness" in this case) is emergent and cannot be well described by the sum of the parts doesn't mean we shouldn't at least consider what these sorts of human/machine interfaces might do to our perception of self in the future if ever they exist.
      My prediction: as long as I can still enjoy a fine single malt - and some bacon from time to time I'll consider the future a smashing success.

    • by Colonel Korn ( 1258968 ) on Wednesday May 06, 2009 @04:10PM (#27850543)

      Mike Judge's vision of the future in "Idiocracy" seems much more likely.

      On the issue of whether computer-enhanced humans are still "human" - what does that even mean? Genetically, "Human" is 98% chimpanzee, 50% dog, 30% daffodil, etc. (I'm sure I have the numbers wrong).

      I think we tend to over-rate the concept of "humanity". Every thought or emotion you've ever had is merely your impression of sodium ions moving around in your brain. We process information. Computers do it. Chimpanzees do it. Dogs do it. Even daffodils do it. It is just not that special.

      "Individuality" is an illusion. You may process information differently than I do. But you also process information at time x differently than you process information at time x+1. Because the "human" self is a manifestation of the brain, the human "self" changes with each thought. Consciousness is an instantaneous phenomenon and there is no continuity of "self". In effect, we have all "died" an infinite number of times.

      That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.

      As a scientist, individual identity seems like a common and accurate label for each person's idiosyncratic tendencies.

      • Re: (Score:3, Funny)

        That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.

        As a scientist, individual identity seems like a common and accurate label for each person's idio

        • Re: (Score:2, Interesting)

          by Lvdata ( 1214190 )

          It gets more complicated when myself2030 and myself2032 are standing side by side. If myself2030 kills Joe Smith and then commits suicide, is myself2032 partially responsible? 100%? 0%? With no legal link between selves, once a copy of yourself can be made for $100, murder-suicide of government officials or political people you disagree with becomes easy to do, and a copy that plans on suiciding is difficult to protect against.

      • Comment removed based on user account deletion
    • Re: (Score:3, Interesting)

      by nyctopterus ( 717502 )

      I agree with what you've said, to a point. But consciousnesses don't mingle (at least, mine hasn't...); our consciousness remains locked to our individual brains and perception. If we do any sort of human brain networking, that could change. And that would be mind-bendingly weird.

    • So that's it then, huh? Just data processing? So why haven't chimpanzees come up with formalized logic? Do dogs use abstract reasoning?

      I'm of the opinion that mere processing power will not resolve the issues facing so-called "strong" AI.

      Give me a computer program that can learn an unknown language including abstract concepts by interacting with a human and you might be getting close. Good luck with that.

    • by dogzilla ( 83896 )

      "Cogito ergo sum"

      All of your points have been covered before. RTFM.

  • by javaman235 ( 461502 ) on Wednesday May 06, 2009 @04:00PM (#27850415)

    I just saw an interview with him last night, where he discussed full-powered computers the size of a blood cell, us mapping out our minds for the good of all, etc. It reminded me of the utopian 1950s vision of the space age, where we'd all be floating around space circa 2001: it's not going to happen.
    First, he's ignoring some physical limitations, such as with the size of computers, but that's not even the main issue. The main issue is that he's ignoring politics. He's ignoring the fact that technologies that come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all". The mind-reading technology he predicts won't be floating around for everybody to play with; it will be used by intelligence agencies to prop up regimes, which will scan the brains of potential opposition, consolidating their rule. Quantum computers, given their code-breaking potential, won't be in public hands either, but rather will strengthen the surveillance operations of those who already do this stuff.

    In other words, this technology won't make the past go away any more than the advent of the atom bomb made the middle-ages Islamic mujahideen go away. Rather it will combine with current political realities to accentuate the ancient divide between haves and have-nots.

    • Re: (Score:3, Interesting)

      by vertinox ( 846076 )

      He's ignoring the fact that technologies that come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all".

      Like the internet, microwaves, radar, GPS, and all the military technologies that never made it into the hands of civilians.

      • You are missing some larger trends here. It's true that the Internet, GPS, etc. came from the military and went to civilian hands, but that was then; this is now. Our entire post-9/11 reality has been about "what happens when the middle-ages guy gets the nukes", and the thinking about technology passing into civilian hands is changing dramatically with that. The other factor is that we are moving into a time when more competition over resources is coming; we can rely less on limitless expansion. Call me a pessimist, but I

    • Re: (Score:2, Interesting)

      by anyaristow ( 1448609 )

      Where is this accelerating progress I keep hearing about?

      Watching TV shows from the '60s, one thing strikes me: life is almost exactly like it was 40 years ago. I can now order books without talking to anyone. Big deal. The telephone was a much bigger deal than the Internet, and it's more than 100 years old. Here's more progress: people don't know their neighbors and can't let their kids wander the neighborhood.

      Progress is slowing, not accelerating, and in some respects we're making negative progress.

      I predi

      • Re: (Score:2, Insightful)

        by maxume ( 22995 )

        There are still like 4 billion people who may want computers and they are going to want them to be cheaper and use less power than today's machines.

      • by DragonWriter ( 970822 ) on Wednesday May 06, 2009 @05:18PM (#27851591)

        Here's more progress: people don't know their neighbors and can't let their kids wander the neighborhood.

        They may choose not to more now, but to the extent they do, it is largely due to media-driven hysteria: the actual incidence of the kinds of crime behind those fears has declined, even as the perception of their incidence has increased.

      • Re: (Score:3, Insightful)

        by grumbel ( 592662 )

        life is almost exactly like it was 40 years ago.

        That's because humans are still humans, not because technology hasn't evolved at a rapid pace. Sure, cars still drive you from A to B, television still shows you the daily news, and newspapers haven't really changed in a while; but on the other hand I can buy for 100 bucks a device that can store two years of non-stop, 24/7 music, more music than I will likely ever listen to in my entire lifetime or be able to buy legally. For as little as ten bucks I can buy a fingernail-sized storage device that can stor

    The main issue is that he's ignoring politics...technology won't make the past go away any more than the advent of the atom bomb made the middle-ages Islamic mujahideen go away. Rather it will combine with current political realities to accentuate the ancient political realities of haves and have-nots that date back to ancient times.

      Interesting. We are the undermining factor, then, of our own progression.

  • I'm ready... (Score:5, Interesting)

    by __aaklbk2114 ( 220755 ) on Wednesday May 06, 2009 @04:01PM (#27850417)

    for my Moravec transfer. Although the more I think about it, I'm not sure that perceptible continuity of consciousness is such a big deal. I mean, I go to sleep every night and wake up the next day believing and feeling that I'm the same person that went to sleep. If there were a cutover to digital representation while I was "asleep" (i.e. unaware), I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.

    • Re: (Score:2, Interesting)

      by Script Cat ( 832717 )
      Yeah, this is a lot like how I think a matter transporter would work. Make a copy and then destroy the original. Star Trek makes it all look so clean, but you never get to see Scotty cleaning all the meaty corpses out from under the transporter pad.
      • That made me laugh and think of them taking the technology from Body Snatchers and adding a blinky light interface. I see life more as a vector and it may be pointing at the distant stars. I must agree with some others here and say that we will not get the benefit of these new technologies unless we create them for ourselves and maintain the right to use them freely.
        Mom! My USB drive is stuck in my ear again.
    • Re:I'm ready... (Score:4, Insightful)

      by DFarmerTX ( 191648 ) on Wednesday May 06, 2009 @04:20PM (#27850675)

      ...I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.

      Sure, but who's going to break the bad news to your "organic representation"?

      Death is death even if there are 100 more copies of you.

      -DF

      • Re: (Score:2, Interesting)

        by humpolec ( 1095783 )
        Is it death, or amnesia?
        What if you knew you would wake up tomorrow with no recollection of today's experiences? Would you treat it as a death, or as a loss of one day? I believe that in such situations the concept of 'death' needs to be revised.
    • while I was "asleep" (i.e. unaware)

      While you're asleep, your brain and body are engaged in a massive set of synchronised, necessary metabolic activities and cognitive processes that are essential for "you" to exist. Proof? Eliminate sleep from a human and see how long it takes before death or derangement ensues.

      One lecture I had from a sleep biologist impressed me immensely. He was demonstrating all the different cycles that are engaged or differently regulated during human sleep. Then there were a bunch of comparative

      • Re: (Score:3, Insightful)

        by khallow ( 566160 )

        The waking state is so inefficient from a reproductive and safety perspective that it's mind-boggling.

        Consider this question. How long would you live in the wild, if you never woke up?

  • It's almost Luddism to say that machines 'will inevitably destroy humanity' or other such statements. Fears over the rise of AI make for a good movie plot but, much like the much-feared 'grey goo' scenario, are unfounded. If and when we have the technology to produce a self-replicating nano-machine that can be programmed to dismantle organic matter and can exist on its own, gathering energy from its environment rather than specific laboratory conditions (ie UV laser light as energy sou
  • > If Robert is 700 parts Ultimate Brain and 1 part Robert; and
    > Ray is 700 parts SuperiorBrain and 1 part Ray ... i.e.,
    > if the human portions of the post-Singularity cyborg beings
    > are minimal and relatively un-utilized ... then, in what sense
    > will these creatures really be human?
    > In what sense will they really be Robert and Ray?

    IMO, as long as there are enough cycles to run the 'ego subroutines' from the original bioform then the same sense of self will be maintained.

    It's when these ori

  • by kkleiner ( 1468647 ) on Wednesday May 06, 2009 @04:27PM (#27850747) Homepage
    Better review at Singularity Hub I think (but I am biased): http://singularityhub.com/2009/04/29/transcendent-man-wows-at-tribeca-film-festival-premier/ [singularityhub.com]
  • I would rather be uploaded to the internet, like what happened at the end of the movie "The Lawnmower Man".
  • Ray Kurzweil, isn't he the Jon Katz of the transhumanist movement? I just remember there's supposed to be a couple of really good writers and philosophers, and then one incredible douchebag who makes all of the rest look bad, someone whose approach to the topic is reminiscent of the very worst of Thomas Friedman (not to imply there's a best of Friedman.)

    Is this the guy I'm thinking of or is there someone else?

  • He's talking about genetic enhancement, nano technology, robotics, AI and more.
    And you "only" need one of these to reach a critical level for the Singularity to occur.
    For instance:
    *Genetically enhance humans to be better at genetically enhancing humans, rinse and repeat.
    *Make strong AI capable of creating stronger AI, etc

    I recommend his book "The Singularity Is Near".
    Free preview at google: http://books.google.com/books?id=88U6hdUi6D0C&printsec=frontcover&dq=kurzweil#PPA19,M1 [google.com]

    His website has some int

  • The singularity is the biggest embarrassment in futurism since the flying car and Martin Landau on the Moon by 1999. Well, OK, Gerry Anderson wasn't really a futurist, but you know what I mean. Mod me troll if you must, but you know in your hearts I am correct. Sorry, kids, but there won't be a reverse-engineered version of your mind enjoying immortality in a machine somewhere.
  • by Animats ( 122034 ) on Wednesday May 06, 2009 @05:21PM (#27851655) Homepage

    This is going to take a while.

    Re-engineering biological systems takes generations to debug, and produces a huge number of dud individuals during the development process. This is fine for tomato R&D, but generating a big supply of failed post-humans is going to be unpopular. Just extending the human lifespan is likely to take generations to debug. It takes a century to find out if something worked.

    AIs and robots don't have that problem.

    What I suspect is going to happen is that we're going to get good AIs and robots, but they won't be cheaper than people. Suppose that an AI smarter than humans can be built, but it's the size of a server farm. In that case, the form the "singularity" may take is not augmented humans, but augmented corporations. The basic problem with companies is that no one person has the whole picture. But a machine could. If this happens, the machines will be in charge, simply because the machines can communicate and organize better.

  • Ask-A-Nerd, NOT (Score:3, Insightful)

    by Tablizer ( 95088 ) on Wednesday May 06, 2009 @06:19PM (#27852383) Journal

    'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'

    This is why you *don't* let nerds make political decisions. We can't resist making new gizmos, even if they eat humanity. It's like letting B. Clinton pick interns.
               
