Ray Kurzweil's Vision of the Singularity, In Movie Form

destinyland writes "AI researcher Ben Goertzel peeks at the new Ray Kurzweil movie (Transcendent Man), and gives it 'two nano-enhanced cyberthumbs way, way up!' But in an exchange with Kurzweil after the screening, Goertzel debates the post-human future, asking whether individuality can survive in a machine-augmented brain. The documentary covers radical futurism, but also includes alternate viewpoints. 'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'" Note, the movie is about Kurzweil and futurism, not by Kurzweil. Update: 05/06 20:57 GMT by T : Note, Singularity Hub has a review up, too.
  • I'm ready... (Score:5, Interesting)

    by __aaklbk2114 ( 220755 ) on Wednesday May 06, 2009 @04:01PM (#27850417)

    for my Moravec transfer. Although the more I think about it, I'm not sure that perceptible continuity of consciousness is such a big deal. I mean, I go to sleep every night and wake up the next day believing and feeling that I'm the same person that went to sleep. If there were a cutover to digital representation while I was "asleep" (i.e. unaware), I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.

  • by Anonymous Coward on Wednesday May 06, 2009 @04:02PM (#27850427)

    Moore's law is losing steam. The GHz race is over, and multiple cores have not delivered yet. This seriously impacts Mr. Kurzweil's date (2045): on present trends, computers will be 6 to 9 orders of magnitude weaker than if Moore's law had continued to hold (which seems to be his assumption). Unless something new appears. Fast.
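
    A quick back-of-the-envelope check of that claim (a sketch only; the doubling periods below are my assumptions, not anything the parent or Kurzweil specified):

        # Rough check of the parent's claim; the doubling periods are
        # assumed values, not anything Kurzweil or the parent specified.
        import math

        def moores_gap(years: float, doubling_years: float) -> float:
            """Orders of magnitude gained if performance doubles every
            `doubling_years` years for `years` years."""
            return years / doubling_years * math.log10(2)

        for d in (1.0, 1.5, 2.0):
            gap = moores_gap(2045 - 2009, d)
            print(f"doubling every {d} yr, 2009-2045: ~{gap:.1f} orders of magnitude")

    Doubling every 1 to 2 years compounds to roughly 5 to 11 orders of magnitude by 2045, so the parent's "6 to 9 orders weaker" corresponds to the trend stalling completely where the projection assumed a doubling period of roughly 1.2 to 1.8 years.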

  • by Colonel Korn ( 1258968 ) on Wednesday May 06, 2009 @04:10PM (#27850543)

    Mike Judge's vision of the future in "Idiocracy" seems much more likely.

    On the issue of whether computer-enhanced humans are still "human" - what does that even mean? Genetically, "Human" is 98% chimpanzee, 50% dog, 30% daffodil, etc. (I'm sure I have the numbers wrong).

    I think we tend to over-rate the concept of "humanity". Every thought or emotion you've ever had is merely your impression of sodium ions moving around in your brain. We process information. Computers do it. Chimpanzees do it. Dogs do it. Even daffodils do it. It is just not that special.

    "Individuality" is an illusion. You may process information differently than I do. But you also process information at time x differently than you process information at time x+1. Because the "human" self is a manifestation of the brain, the human "self" changes with each thought. Consciousness is an instantaneous phenomenon and there is no continuity of "self". In effect, we have all "died" an infinite number of times.

    That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.

    As a scientist, I find individual identity a common and accurate label for each person's idiosyncratic tendencies.

  • Re:I'm ready... (Score:2, Interesting)

    by Script Cat ( 832717 ) on Wednesday May 06, 2009 @04:14PM (#27850587)
    Yeah, this is a lot like how I think a matter transporter would work: make a copy and then destroy the original. Star Trek makes it all look so clean, but you never get to see Scotty cleaning all the meaty corpses out from under the transporter pad.
  • by 4D6963 ( 933028 ) on Wednesday May 06, 2009 @04:16PM (#27850617)

    Moore's law is fundamentally flawed in that it predicts a never-ending exponential (linear in the log domain) progression. Progress is bound to slow down and eventually stop, yet the law fails entirely to take that into account.

    What I think is that instead of exponential, the curve is more like a Gaussian function (a bell-shaped curve). It started far in the negatives, we're now getting closer to the centre and its maximum, so we're feeling the slowdown, and eventually it'll crawl to a halt. Maybe it won't be exactly that and it'd be more like some other function; the point is, it can't go on exponentially like this forever (see the sketch just after this comment).

    All of that said, I think the flaw in Kurzweil's predictions isn't that we'll have a tough time getting the necessary hardware; the problem is more theoretical: we have no fucking clue how we'd make any of that happen. Right now it's a problem of theory and algorithms, not of computing power. We know better how to make time travel happen than how to make strong AI pop up.
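
    A tiny numeric sketch of the bell-curve picture above (all parameters arbitrary, chosen only for illustration): if the rate of progress is Gaussian, the cumulative level of technology is its integral, an error-function S-curve that looks exponential on the way up and flattens past the peak.

        # Bell-shaped *rate* of progress -> S-shaped cumulative *level*.
        # mu and sigma are made-up numbers, for illustration only.
        import math

        mu, sigma = 2010.0, 30.0  # assumed peak year and spread of the rate curve

        def rate(year: float) -> float:
            return math.exp(-((year - mu) / sigma) ** 2 / 2)

        def level(year: float) -> float:
            # Integral of the Gaussian rate, normalised to [0, 1].
            return 0.5 * (1 + math.erf((year - mu) / (sigma * math.sqrt(2))))

        for y in (1950, 1980, 2010, 2040, 2070):
            print(f"{y}: rate={rate(y):.3f}  cumulative={level(y):.3f}")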

  • by nyctopterus ( 717502 ) on Wednesday May 06, 2009 @04:17PM (#27850629) Homepage

    I agree with what you've said, up to a point. But consciousnesses don't mingle (at least, mine hasn't...); our consciousness remains locked to our individual brains and perception. If we do any sort of human brain networking, that could change. And that would be mind-bendingly weird.

  • by ddbsa ( 526686 ) <phypor@rsichemIII.com minus threevowels> on Wednesday May 06, 2009 @04:17PM (#27850647)

    > If Robert is 700 parts Ultimate Brain and 1 part Robert; and
    > Ray is 700 parts SuperiorBrain and 1 part Ray ... i.e.,
    > if the human portions of the post-Singularity cyborg beings
    > are minimal and relatively un-utilized ... then, in what sense
    > will these creatures really be human?
    > In what sense will they really be Robert and Ray?

    IMO, as long as there are enough cycles to run the 'ego subroutines' from the original bioform, the same sense of self will be maintained.

    It's when these original 'ego subroutines' (which will be a line item in process accounting) are altered that we'll see a fundamental change in the human that was.

    There will be add-ons to the 'ego subroutines' just like there are add-ons to firefox.

    You will cure your fear of spiders or have access to pleasure centers with a simple mod.

  • by vertinox ( 846076 ) on Wednesday May 06, 2009 @04:31PM (#27850805)

    He's ignoring the fact that technologies which come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all".

    Like the internet, microwaves, radar, and GPS, not to mention all the military technologies that never made it into the hands of civilians.

  • Re:I'm ready... (Score:2, Interesting)

    by humpolec ( 1095783 ) on Wednesday May 06, 2009 @04:36PM (#27850889)
    Is it death, or amnesia?
    What if you knew you would wake up tomorrow with no recollection of today's experiences? Would you treat it as a death, or as the loss of one day? I believe that in such situations the concept of 'death' needs to be revised.
  • by anyaristow ( 1448609 ) on Wednesday May 06, 2009 @04:36PM (#27850893)

    Where is this accelerating progress I keep hearing about?

    Watching TV shows from the '60s, one thing strikes me: life is almost exactly like it was 40 years ago. I can now order books without talking to anyone. Big deal. The telephone was a much bigger deal than the Internet, and it's more than 100 years old. Here's more progress: people don't know their neighbors and can't let their kids wander the neighborhood.

    Progress is slowing, not accelerating, and in some respects we're making negative progress.

    I predict there will be no economic incentive to make even computer progress (the star of the last half century) much beyond current levels. Ten years ago progress benefited anyone who wanted a computer. Now who does it benefit? A smaller and smaller number of people.

    Ray's going to have to finance the singularity by himself.

  • by TheRealMindChild ( 743925 ) on Wednesday May 06, 2009 @04:48PM (#27851079) Homepage Journal
    Pardon me... what the hell is "faster than real time"? Does that mean it comes up with the answers before you ask the question?
  • by DragonWriter ( 970822 ) on Wednesday May 06, 2009 @04:52PM (#27851145)

    The key here is that Ray bases this prediction on past observation of things like Moore's law. Even though he does cherry-pick, and there is no guarantee that the trend will always continue in such a fashion, the idea that distributed-system improvements are exponential isn't that far-fetched.

    Since there are physical limits involved, it would intuitively seem vastly more plausible to suggest that the improvements would, in the long term, be logistic rather than exponential (and, of course, a logistic growth curve looks, in its early phase, just like an exponential curve.)

    Of course, a logistic curve could still support predictions of a "singularity" of sorts; the difference is that things would "settle down" to a new normal after an abrupt transformation, rather than continuing with accelerating change forever.
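
    A minimal numeric sketch of that point (the ceiling and growth rate below are arbitrary choices, not anything the parent specified): early on, a logistic curve is nearly indistinguishable from the matching exponential, and the two only diverge near the physical limit.

        # Exponential vs. logistic growth from the same starting point.
        # K (the ceiling) and r (the growth rate) are illustrative only.
        import math

        K, r, x0 = 1e6, 0.5, 1.0

        def exponential(t: float) -> float:
            return x0 * math.exp(r * t)

        def logistic(t: float) -> float:
            return K / (1 + (K / x0 - 1) * math.exp(-r * t))

        for t in (0, 5, 10, 20, 30, 40):
            print(f"t={t:2d}  exp={exponential(t):12.4g}  logistic={logistic(t):12.4g}")

    The two columns agree almost exactly until the exponential approaches K, after which the logistic settles at the limit while the exponential keeps climbing without bound.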

  • by inasity_rules ( 1110095 ) on Wednesday May 06, 2009 @04:55PM (#27851193) Journal

    Not to start asking hard questions or anything, but does simulating the brain really imply we can create sentient AI? What if there is more to it than that? Perhaps sentience can only arise as a result of our brains being "jump-started" in some way (cosmic radiation, genetic preprogramming, or whatever). To start the AI you would have to "copy" an existing brain or play with random starting states... Could be unpredictable. Irrational sentience, anyone?

    I'm possibly wrong, but I'd bet a lot that it's far more complex than you describe and that we are not that close, really.

    I look forward to eating my words though.. :)

  • by Anonymous Coward on Wednesday May 06, 2009 @04:57PM (#27851221)

    That's how it all starts man... But can't you see it... They're already making "3D" porn games, soon they'll be making 3D virtual reality porn with VR helmets you can purchase... But it won't be enough, the nerds will want to be *in* the virtual universe, where they can be a muscular stud and diddle with all the girls they could dream of... But that degree of immersion will obviously require brain implants...

    And then wham! Singularity! All cause of porn man!

  • by geekboy642 ( 799087 ) on Wednesday May 06, 2009 @05:03PM (#27851331) Journal

    With minor paraphrasing, you pose the question "what if everything is impossible?"
    That's the stupidest question in the history of all luddites. Even if--and that's a massive if--it is provably infeasible to simulate an entire human, the research will be unimaginably valuable to any human. Brain prosthetics, broadband mind/machine interfaces, and safe treatments that target specific brain disorders are only the tiniest wedge of the foreseeable advances that sort of research can provide.
    Lastly, what "hardware limitations" are you citing? Moore's law has held for the entire history of CPU development, and there's no indication that it's slowing now. (Hint: Moore's law has nothing to do with GHz.) If silicon fails, there are dozens of technologies being tested to replace it.

    With all the problems you're inventing, I have only one question for you: What are you afraid of?

  • by Lvdata ( 1214190 ) on Wednesday May 06, 2009 @05:16PM (#27851565)

    It gets more complicated when myself2030 and myself2032 are standing side by side. If myself2030 kills Joe Smith and then commits suicide, is myself2032 partially responsible? 100%? 0%? With no legal link between selves, once a copy of myself can be made for $100, murder-suicide against government officials or political people you disagree with becomes easy to do: send a copy that plans on suiciding, and the crime becomes very difficult to deter or protect against.

  • by Miseph ( 979059 ) on Wednesday May 06, 2009 @05:24PM (#27851719) Journal

    You remind me of a popular adage... any sufficiently advanced technology is indistinguishable from magic. Perhaps any sufficiently advanced technology is also indistinguishable from God.

  • by thasmudyan ( 460603 ) <thasmudyan@o[ ]fu.com ['pen' in gap]> on Wednesday May 06, 2009 @05:40PM (#27851943)

    I agree. It may be sad and creepy, but the really bad part of it is that he apparently lacks any kind of understanding of what actually makes up the mind of a person. A mind is not the sum of epiphenomenal output data.

    Sure, you can try to simulate something that is more or less likely to give you responses similar to known input patterns, but that is not what constitutes a person.

    What you could then do to make it a person is feed that list of "expectations" into some kind of default brain, thereby filling in the many blanks with an actual neurological structure that can perform real cognition and exhibit consciousness. BUT - and here's the essence of the problem - all you did in the end was to create a new person that exhibits some of the traits of the dead person. In no way or form has the dead guy come back to life.

    I think modeling and then enslaving an AI to perform like your long-dead father is morally questionable at best. It shows that in the end he has regard neither for the beloved person who regretfully ceased to exist nor for the new slave entity that is forced to perform a perpetual make-believe job on his behalf.

    Scientifically, the problem is entropy and the passage of time. Everything needed to "run" the entity that was his father is lost to decay and cannot be restored - barring a way to accurately retrieve molecular structures from arbitrary points in the past.

  • by jollyreaper ( 513215 ) on Wednesday May 06, 2009 @05:44PM (#27851989)

    You remind me of a popular adage... any sufficiently advanced technology is indistinguishable from magic. Perhaps any sufficiently advanced technology is also indistinguishable from God.

    It really depends on how big your idea of God is. :) From the perspective of the earliest religions, every rock and tree and critter had a spirit, and sometimes those things were also called gods. There was a god of a forest, a god of a river, comfortably small gods. By the time of the Greeks and the Romans, gods had come to embody philosophical concepts as well. But even to people like that, modern man would appear godlike. The president of the United States would certainly appear to be a god-king, at least by the standards of the ancient Egyptians. He can speak a word and command that he and his court be whisked around the world in his tamed metal bird. He can order the obliteration of a city just as easily. (OK, I hope not as easily, but in principle the president has the button.) The president speaks and his entire nation can hear his voice as one, across the continent. Our scientists can create miracles of technology; we can fight back disease and age, and even the poorest among us can live in homes that would be the envy of pretty much anyone in the world of antiquity. There's no doubt in my mind that the ancients would say we live as gods, if not actually appear as gods as well.

    The monotheistic god is given a lot more credit, though. He's not just the creator of man but all existence. As science pushed back the idea of what existence was, not just the borders of the world but the borders of space and time, religious folk were quick to say "Yeah, He did that, too. God is great." By that kind of definition, even Dr. Manhattan would look like a piker.

    The talk of cloning a Neanderthal got me thinking, though -- we probably do have technology sufficient to pass for gods from that perspective. We ourselves like to throw God at the gaps, saying what we don't understand must be God; then we figure out some more and push God into smaller gaps. If we were able to bring back a Neanderthal and he grew up in the lab interacting with scientists and a surrogate mother (who would, of course, still be a human being), we'd probably appear more god-like than simple father and mother figures. We have mysterious magic machines whose workings would be beyond him; we move in mysterious ways.

    The capability of doing this sort of stuff isn't thousands of years out and speculative fiction; it's not even decades away. We're pretty much at the point where we can seriously talk about resurrecting extinct species. This goes a wee bit beyond mucking about with fire brought by Prometheus; it looks more like getting hold of one of Zeus's thunderbolts and blowing apart mountains. I'd say that splitting the atom was the very first baby step into godtech. Our biotech is going the same way. Baby steps, sure, but pointing the way to developments that will be utterly godlike, especially to those on the outside looking in.

  • by Thiez ( 1281866 ) on Wednesday May 06, 2009 @06:36PM (#27852547)

    > For example: All the wishful thinking in the world won't make homeopathy work.

    Actually that's exactly what makes it 'work'. I agree with your point, but the placebo effect kinda undermines your example.

  • by ppanon ( 16583 ) on Wednesday May 06, 2009 @06:50PM (#27852703) Homepage Journal

    20 years ago, I had a disagreement with my then biophysics prof when I advocated the use of large networks of PC clusters for studying protein folding and interactions. His line of argument was effectively that I lacked an understanding of what the problem was and of how much effort was required. Today companies like Zymeworks [zymeworks.com] specialize in performing that kind of work for pharmaceutical companies on a contract basis, using quantum chemistry simulations running on small clusters of commodity hardware. Yes, computing speed has gone up a few orders of magnitude, from the 16 MHz 386's of the time to the 2 GHz quad cores of today. Fundamentally, though, my vision was correct.

    20 years ago I remember watching a show that was one of the first sounding the alarm about Climate Change. Back then I was cautiously sceptical because of the crudeness of the models possible with the computing power then available; these days, I'm convinced. It's good to be sceptical, but it's also good to remember that there's more than one way to skin a cat.

    When it comes to Arthur C. Clarke quotes, I like the following one at least as much:

    When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

    The one likely exception to that is that we probably won't ever come up with a way to travel faster than light. Otherwise, there are a lot more "I told you so"s coming down the pipe. 'Cause, no offense meant, but you probably don't even have the success record and credibility of "a distinguished but elderly scientist".

  • by Anonymous Coward on Wednesday May 06, 2009 @06:56PM (#27852797)

    > IIRC, the human brain fires off at like 200 MHz. That may not be 100% accurate; I cannot recall where I read that factoid and a quick Google search doesn't corroborate it -- but ultimately the specific numbers don't matter.

    This site is fairly helpful and appears to be reasonably reliable (the numbers I checked matched those on Wikipedia): http://www.willamette.edu/~gorr/classes/cs449/brain.html [willamette.edu]. It suggests a neuron can fire at about 100 Hz. Like you said, the specific numbers aren't that important, but I thought I'd jump in after noting you were off by about six orders of magnitude ;)

    Assuming 10^11 neurons, each firing at 100 Hz, that means a maximum of 10^13 'signals' per second. Suppose we model every neuron and it takes 1000 clock cycles to process a single 'signal'; then we need a machine performing 10^16 operations per second (a mere 10 petaops). Since the brain is pretty parallel, this shouldn't be too hard to accomplish in the next decade or two. What remains is finding out how the neurons are organised; just simulating 10^11 neurons dropped in a bucket at random is unlikely to produce useful output.
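
    The arithmetic in that estimate, spelled out (every figure here is the parent's rough assumption, not an established value):

        # Back-of-the-envelope brain-emulation estimate from the comment above.
        neurons = 1e11            # assumed neuron count
        firing_rate_hz = 100      # assumed peak firing rate per neuron
        cycles_per_signal = 1e3   # assumed CPU cycles to simulate one spike

        signals_per_sec = neurons * firing_rate_hz         # 1e13 signals/s
        ops_per_sec = signals_per_sec * cycles_per_signal  # 1e16 ops/s

        print(f"{signals_per_sec:.0e} signals/s -> {ops_per_sec:.0e} ops/s")
        # 1e+13 signals/s -> 1e+16 ops/s, i.e. about 10 petaops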

  • by lgw ( 121541 ) on Wednesday May 06, 2009 @08:52PM (#27854029) Journal

    Clearly you can have a "human mind's worth of computing power" run on only 100W or so. However, it's unclear whether you could run an emulation of a human mind on any reasonable amount of power. Or, for that matter, at all. As yet, there's not the least shred of evidence that either AI or human consciousness transfer is possible.

    AI has been 50 years away for 50 years now. Fusion has been 20 years away for 50 years now. I can only conclude that fusion will be a mature, 30-year-old technology, ready to power AIs. :)

    Personally, I think that software consciousness will turn out to be quite easy in hindsight, just a matter of learning the trick, but I have no actual evidence for this belief. Has any published futurist ever been right about anything?

  • by bmgoau ( 801508 ) on Wednesday May 06, 2009 @10:16PM (#27854749) Homepage

    IAACE (I Am A Computer Engineer). I agree transistors will not be old news in 20 years, but I think you're looking too broadly. I believe the idea that they will be old news relates to their use in (high-performance) computing. It took from about the 1980s until now, around 20-30 years, for computers to get *really* popular.

    Photonic computing is at roughly the stage transistors were at in the '60s and '70s. We already have proven concepts and a good idea of where to go, so I don't find the statement "transistors will be old news in 20 years" so completely outrageous.

    The only thing I know for certain is that all our predictions will be wrong.

  • by ppanon ( 16583 ) on Wednesday May 06, 2009 @10:20PM (#27854773) Homepage Journal

    Yes, but 20 years ago a computer network was not a hypothetical then-impossible idea. Before the first computer network existed, people understood what technological barriers they would have to overcome to create one, and they already knew how to split a task into multiple parts on separate processing units. It was an engineering problem. It was the engineering problem that your professor was stuck on.

    Agreed.

    Call me when the major obstacle to any of these Futurist predictions is the amount of effort required, not that we fundamentally have no idea how to accomplish the task.

    When it comes to downloading a human consciousness into a machine, I think you are absolutely correct. It's not clear that we will ever have the capacity to instrument and measure all the variables in the instantaneous state of a brain that make an individual - you, me, or Ray Kurzweil - and to replicate or convert that state to run on a completely different medium. That assertion has more than a bit of "OMG Ponies!" wish fulfilment in it.

    When it comes to taking the general intelligence capability of a human and producing a synthetic computer-based analog, it is an engineering problem. First a reverse-engineering problem in determining how the brain does what it does to enable consciousness, and then a process change and die shrink. Neurons are pretty coarse and slow things when it comes to their interconnects and we should be able to do a lot better. With the advantage of faster signal transmission and shorter signal paths, that should give the re-architected "brain" a big speed advantage over our current ones. As for the "software", you could raise the first generation the old-fashioned way in real time (with the processor running at a "degraded speed") for the first few years, and then once you've got consciousness, socialization, and imprinting/attachment to humanity ingrained, let them go into turbo mode for computer-based education. For that first iteration you create as close a model of the human brain as you can, and then you see what simplifying assumptions you can make and still have it work.

    The big bottleneck for your virtual scientists at that point will be running physical experiments. Not everything can be gedanken experiments - sometimes you need Large Hadron Colliders that take decades to be built. But for small scale science like molecular biology where a lot can be increasingly simulated? Look out.

    Now don't get me wrong, there are some tremendous engineering problems there, with enough intermediate steps that it makes my clustered protein modeling example look like child's play. But it is my reasoned opinion that the project is no less an engineering problem than going from 16 MHz 386's and 10Mb coax Ethernet to 2+GHz quad cores, fiber-based gigabit Ethernet, and middleware for clustered systems with hundreds or thousands of cores.
