
Ray Kurzweil's Vision of the Singularity, In Movie Form

destinyland writes "AI researcher Ben Goertzel peeks at the new Ray Kurzweil movie (Transcendent Man), and gives it 'two nano-enhanced cyberthumbs way, way up!' But in an exchange with Kurzweil after the screening, Goertzel debates the post-human future, asking whether individuality can survive in a machine-augmented brain. The documentary covers radical futurism, but also includes alternate viewpoints. 'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'" Note, the movie is about Kurzweil and futurism, not by Kurzweil. Update: 05/06 20:57 GMT by T : Note, Singularity Hub has a review up, too.
  • by Anonymous Coward on Wednesday May 06, 2009 @03:44PM (#27850221)

    Computers become smarter than humans. Human consciousness becomes downloadable ...ermm ...somehow... and we live forever as computers.

    Wow. What a visionary.

    Seriously though, you have to congratulate a guy for becoming so well known, with people believing what he says is actually probable. I doubt anyone else could even sell this shit as a sci-fi B-movie plot.

  • by kylemonger ( 686302 ) on Wednesday May 06, 2009 @03:47PM (#27850255)
    ... we'll be wrong. My own theory is that strong AI is the ultimate weapon and that it will never ever fall into the hands of the likes of you and me. Whether the machines get out of control is irrelevant; eventually the parties that control them will be slugging it out with weapons powerful enough to make life here hardly worth living. I expect to be dead before then, thankfully. But remember the first sentence of this post.
  • by Gat0r30y ( 957941 ) on Wednesday May 06, 2009 @03:49PM (#27850285) Homepage Journal

    I doubt anyone else could even sell this shit as a sci-fi B-movie plot.

    Often, nay consistently, life seems to mimic a shitty sci-fi B-movie plot.

  • by vertinox ( 846076 ) on Wednesday May 06, 2009 @03:56PM (#27850363)

    The key here is that Ray bases this prediction on past observations of things like Moore's law. Even though he does cherry-pick, and there is no guarantee that the trend will always continue in such a fashion, the idea that distributed-system improvements are exponential isn't that far-fetched.

    So basically what he is saying is that if the future behaves like the past, then we will see some major changes shortly, simply because we'll have processing power out the wazoo.

    Even Moore himself thinks this will last at least until 2018, when silicon transistors reach their theoretical limit at the atomic scale (a back-of-envelope extrapolation follows this comment). Whether the industry finds a suitable replacement for silicon, or finds another way to go about making processors, is another thing altogether.

    My bet is that Intel, IBM, and AMD are putting big bucks into getting past the silicon limit, because that is their cash cow.

    So if the trend does continue, things like the Blue Brain Project [wikipedia.org] will have an easier time running their simulations.

    I don't know about the whole nanotech emergence, but at least it looks like we might get the AI thing solved within 50 years.
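
    A back-of-envelope sketch of the doubling math behind that 2018 figure, assuming a clean Moore's-law exponential; the 2009 baseline count and the two-year doubling period are illustrative guesses, not numbers from the comment (Python):

        # Rough extrapolation only: assumes transistor counts keep doubling on a
        # fixed cadence, which is exactly the assumption under debate.
        def transistors(year, base_year=2009, base_count=2e9, doubling_years=2.0):
            """Projected transistors per chip under an assumed doubling period."""
            return base_count * 2 ** ((year - base_year) / doubling_years)

        for year in (2009, 2012, 2015, 2018):
            print(f"{year}: ~{transistors(year):.1e} transistors per chip")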

  • by javaman235 ( 461502 ) on Wednesday May 06, 2009 @04:00PM (#27850415)

    I just saw an interview with him last night, where he discussed full-powered computers the size of a blood cell, us mapping out our minds for the good of all, etc. It reminded me of the utopian 1950s vision of the space age, where we'd all be floating around space circa 2001: it's not going to happen.
    First, he's ignoring some physical limitations, such as limits on the size of computers, but that's not even the main issue. The main issue is that he's ignoring politics. He's ignoring the fact that technologies which come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all". The mind-reading technology he predicts won't be floating around for everybody to play with; it will be used by intelligence agencies to prop up regimes, which will scan the brains of potential opposition, consolidating their rule. Quantum computers, given their code-breaking potential, won't be in public hands either, but rather will strengthen the surveillance operations of those who already do this stuff.

    In other words, this technology won't make the past go away any more than the advent of the atom bomb made the medieval Islamic mujahideen go away. Rather, it will combine with current political realities to accentuate the divide between haves and have-nots that dates back to ancient times.

  • by 4D6963 ( 933028 ) on Wednesday May 06, 2009 @04:01PM (#27850419)

    Computers become smarter than humans. Human consciousness becomes downloadable ...ermm ...somehow... and we live forever as computers.

    The sad part is that it seems like it's all wishful thinking on the part of Kurzweil, who's really scared of dying. So my bet is that his outlandish and baseless predictions are so popular because they fill a void in the "don't worry, you won't really die" department that religions used to fill. So the whole Singularity thing really is a secular techno-cult of some sort, with Kurzweil as its guru and prophet.

  • by Gat0r30y ( 957941 ) on Wednesday May 06, 2009 @04:10PM (#27850529) Homepage Journal
    Generally - I agree.

    Consciousness is an instantaneous phenomenon and there is no continuity of "self".

    However, just because something ("Consciousness" in this case) is emergent and cannot be well described by the sum of the parts doesn't mean we shouldn't at least consider what these sorts of human/machine interfaces might do to our perception of self in the future if ever they exist.
    My prediction: as long as I can still enjoy a fine single malt - and some bacon from time to time I'll consider the future a smashing success.

  • by nyctopterus ( 717502 ) on Wednesday May 06, 2009 @04:10PM (#27850535) Homepage

    "The nerd rapture"

  • by ElectricTurtle ( 1171201 ) on Wednesday May 06, 2009 @04:18PM (#27850655)
    You perhaps forget that virtually all human advancement begins with 'wishful thinking'. This is a scientific problem. You have a human consciousness. In a secular, materialistic worldview, a human consciousness is nothing special. It's basically assumed to be nothing more than really obfuscated software running on a biological, carbon-based computer. Given that assumption, it is a natural step to find some way to copy it, intact and functioning, to a more resilient inorganic, silicon-based computer. The difference between this and all the various soul-based afterlife nonsense of religions should be obvious to anybody. This is a potentially plausible objective hypothetical physical/material process. It's an idea based on hard facts that may actually work given enough research, testing, and further advances in hardware and software design.
  • Re:I'm ready... (Score:4, Insightful)

    by DFarmerTX ( 191648 ) on Wednesday May 06, 2009 @04:20PM (#27850675)

    ...I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.

    Sure, but who's going to break the bad news to your "organic representation"?

    Death is death even if there are 100 more copies of you.

    -DF

  • by Anonymous Coward on Wednesday May 06, 2009 @04:22PM (#27850693)

    While Kurzweil is most definitely too optimistic in his predictions, I think you've been watching too much Star Wars. The government isn't run by supervillains looking to "perpetuate their rule".

    Most of it will probably stay in military and academic circles for a little while, but that stuff always goes into the private sector eventually.

  • by MozeeToby ( 1163751 ) on Wednesday May 06, 2009 @04:22PM (#27850695)

    Kurzweil's predictions aren't just based on modern trends but on historical shifts as well. In fact, I believe one of the big pieces he shows is a graph of "paradigm-shifting events" against time. These are technologies that changed everything at the time: things like agriculture, the printing press, nuclear power, the transistor, etc.

    It isn't the gradual improvement of transistor technology that makes the singularity interesting; it's that transistors will be old news in 20 years, replaced by some new technology that we can't even speculate about right now. It's about the shifts, not the gradual evolution (rough dates for those shifts are sketched below).
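
    As an illustration of that graph's shape, here is the shrinking-gap pattern computed from the events the comment names; the dates are rough, conventional ones supplied for the sketch, not figures from the film (Python):

        # Gaps between 'paradigm-shifting events' shrink drastically over time.
        # Dates are coarse approximations chosen only to show the pattern.
        events = [
            ("agriculture", -10000),      # ~10,000 BC, very approximate
            ("printing press", 1440),
            ("first nuclear reactor", 1942),
            ("transistor", 1947),
        ]
        years = [y for _, y in events]
        gaps = [b - a for a, b in zip(years, years[1:])]
        print(gaps)  # [11440, 502, 5] -- each gap is a small fraction of the last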

  • by wurp ( 51446 ) on Wednesday May 06, 2009 @04:25PM (#27850723) Homepage

    Actually, I'm pretty sure with time travel I could fairly trivially build about the strongest AI possible. When you can perform an infinite number of operations in an arbitrarily short amount of time, quite a stupid algorithm can produce some pretty smart results (a toy illustration follows).
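
    A toy illustration of the "dumb algorithm, unbounded compute" point, assuming you could afford exhaustive search: brute-force traveling-salesman, hopeless at scale but always optimal given infinite operations (Python):

        # Blind exhaustive search: try every tour and keep the cheapest. With
        # unbounded operations, even this 'stupid' algorithm looks smart.
        from itertools import permutations

        def tsp_brute(dist):
            n = len(dist)
            cost = lambda t: sum(dist[a][b] for a, b in zip((0,) + t, t + (0,)))
            return min(permutations(range(1, n)), key=cost)

        dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
        print(tsp_brute(dist))  # cheapest order to visit cities 1..3 after city 0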

  • by Pedrito ( 94783 ) on Wednesday May 06, 2009 @04:29PM (#27850787)

    ..this story falls in the category of "sh#t that's never gonna happen".

    I'm going to have to strongly disagree with you. I've been studying neuroscience, and specifically neural simulations in software, for a while. Our knowledge of the brain is quite advanced. We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.

    There's been a neural prosthetic [wireheading.com] for at least 6 years already. Granted, it acts more as a DSP than a real hippocampus, but still, it's a major feat and it won't be long until a more faithful reproduction of the hippocampus can be done.

    While there are still details to work out about how various neural circuits are connected, this information will be figured out in the next 10 years. Neuroscience research won't be the bottleneck for sentient AI, however; computer tech will be. The brain contains tens to hundreds of trillions of synapses (synapses are really the "processing elements" of the brain, more so than the neurons, which number only in the tens of billions). It's a massive amount of data, but 10-20 years from now, very feasible.

    So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing. Add to that the fact that you can tweak the brains to make them better at math or other subjects and that you have complete control over their reward system (doing research could give them a heroin-like reward), and you're going to have super brains.

    Once you accept the fact that sentient AI is inevitable, the next step, super-intelligent AI, is just as inevitable. (Some rough arithmetic for both claims follows this comment.)
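
    Some rough arithmetic behind those two claims; every number here is a loose assumption chosen for scale, not data from the comment (Python):

        # Storage scale: tens to hundreds of trillions of synapses, with some
        # small per-synapse state assumed (8 bytes is a pure guess).
        synapses = 100e12
        bytes_per_synapse = 8
        print(f"~{synapses * bytes_per_synapse / 1e15:.1f} PB per brain snapshot")

        # 'Research brains': N simulated minds at k-times real time accumulate
        # N*k person-years of thinking per calendar year.
        n_brains, speedup = 10, 100
        print(f"{n_brains} brains at {speedup}x: {n_brains * speedup} person-years/year")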

  • by jollyreaper ( 513215 ) on Wednesday May 06, 2009 @04:35PM (#27850871)

    "The nerd rapture"

    I always thought of it more as a techno-rapture and that's the way I've seen it referred to in other places.

    Even the most committed atheist can understand the attraction of religion and the idea of a rapture and a heaven, life everlasting. These are all very human yearnings. The difference between the religious rapture and the techno-rapture is that the means of making the latter happen lie within our grasp. Certainly we could create the new heaven and new Earth and the reign of a thousand years right here and now. We have the technology, we have the knowledge; what we lack is the wisdom.

    The poster who compares it with 1950's futurist utopianism is exactly right. We could have had the future depicted in 2001, we could have an end to world hunger, an end to disease, and if not an end to death then a comfortably long delay in its arrival. The problem is that we're still very human at heart and humans are not that far removed from the trees. We are selfish, grasping, petty animals and those few acts of sublime virtue from the best of us simply serve to make the rest of us look all the worse.

    We've yet to develop a political system adequate to the task of promoting the greatest good for the greatest number without allowing unhealthy power and influence to be amassed by our least deserving fellows. Unfortunately, the very people who are most willing to acquire power are seldom the ones who should have it. The complaint I hear from my friends deeply involved with the Democrats is that there are plenty of good people they'd like to run as candidates but so many of them want nothing to do with politics. They're happy to put in the long hours behind the scenes but the thought of being in the spotlight and having all the attention on them is about as attractive a thought as a root canal. Someone actually willing to take that kind of attention is more than likely going to be someone like a John Edwards, a nice smile and slick approach but ultimately a self-serving jerk so blinded by his own awesomeness that he'd pull stupid shit like having an affair and then throwing his hat in the ring for the presidency.

    I'm curious as to what the potential implications of a Singularity are for technology, but I don't know if that would change the human situation all that much. There's been some good speculative fiction written along these lines in the Orion's Arm universe, which tries to be a very hard-SF look at future space opera. The few aliens are all completely inhuman; the humanoid "aliens" are actually all modified people from Earth, "terragen life" as they call it. There are various scales that sophonts fall onto, from sub-human to AI gods, and all sorts of tech levels from stone-age to planck-age. It's certainly worth a look.

  • by postbigbang ( 761081 ) on Wednesday May 06, 2009 @04:46PM (#27851043)

    The whole philosophy seems to smack of undying narcissism. It's OK to fear death; it's part of Western culture, and key to survival. As we experience life individually, and only marginally as a collective (civility, as bad as it is), it's understandable that living forever seems like a good idea. We're here as an accident of our birth. Disembodied, we might evolve, but we're not designed for 400 years of life. Who knows what kind of cyber-insanity might evolve. I'm leaving it up to my kids to figure it out, as it was left up to me to figure it out.

  • by Anonymous Coward on Wednesday May 06, 2009 @04:54PM (#27851175)

    Even Moore himself thinks this will last at least until 2018, when silicon transistors reach their theoretical limit at the atomic scale. Whether the industry finds a suitable replacement for silicon, or finds another way to go about making processors, is another thing altogether.

    Or there's option C: ramp up production at close to the smallest transistors we can make, and make them so cheap and prevalent that we have the equivalent of today's desktops in our wristwatches, running off our ambient body heat.

    Anyone who has used a computer in the last couple of years realizes that the continuous battle for the smallest chip is over. It doesn't matter who's got the smallest process anymore; it matters what you're building on that process. Case in point: Intel has shifted business strategies to building embedded-and-above chips like Atom, and is so eager to do so that it's done something almost unheard of in Intel's history: it farmed out production to another company (TSMC [forbes.com]). Even AMD realizes the jig is up; they dumped their fabs because they realized they didn't need them anymore. It's not about having the best damned process available anymore. It's about having the lowest-power design, the smallest design, the widest/most-parallel design.

    Chip design is becoming so central to how and where we use computers that even Microsoft and Apple have gotten behind designing their own (though to differing degrees: Microsoft hired IBM to build theirs, while Apple bought a low-power PowerPC chip company to design theirs).

    While I'm sure people will bicker in 2020 about where to go next for real performance, whether it be on-chip optical networks or 3D chips, extremely-wide-instruction-computers, asymmetric computing dies, etc., etc., it's not what's going to matter as much as we'd like to think. Those chips will likely end up so expensive that the only consumers will be server clusters. Meanwhile, pervasive computing will explode into our every day lives, more than just being wired to our ears and hip pockets. The revolution's already started.

  • by MozeeToby ( 1163751 ) on Wednesday May 06, 2009 @04:54PM (#27851185)

    Look at it this way: when I read the newspaper (or rather, the news website) and see words like "as a result of the accident, the child will be blind for the rest of his life", the first thing that pops into my head is that he won't be blind for the rest of his life; he'll be blind until we find a way to give him his sight back.

    If the kid lost his retina, we can already fix that to some extent with a transplant. If the kid had his optic nerve destroyed, that might take a couple of years for us to fix, maybe even a decade. If the kid lost the part of his brain that processes images, maybe it'll take 40 years, but I have no doubt we'll eventually be able to do it.

    Now, how are any of our diseases any different? If you can't imagine an implantable artificial heart being available within 20 years, you have very little faith in our progress. Sure, the other organs are going to be trickier, but can you really think of a valid reason that each and every one of them (except the brain) can't be replaced by an artificial version assuming the technology is advanced enough? Alzheimer's (and mental senescence in general) is about the only thing that might not be fixable from a strictly mechanical point of view and we're even getting closer to understanding those issues.

    So tell me, logically, why it's impossible. I'll grant that it probably won't happen any time soon. I'll maybe even grant that society won't let it happen since immortality would cause pretty drastic changes to our culture and our planet. But I won't grant that it is technologically impossible.

  • by vlm ( 69642 ) on Wednesday May 06, 2009 @04:56PM (#27851211)

    So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing.

    Ah, but then you'll likely need tens to hundreds of times the input bandwidth to keep the processors cooking, yet it seems information overload at a much smaller scale jams up current biological intelligences. Just as cube-square scaling applies firm limits to what genetic engineering can do to organisms (although cool stuff can be done inside those limits), similar bandwidth-vs-storage-vs-processing scaling laws might or might not limit intelligence. Too little bandwidth makes for insane hallucinations? Too much bandwidth makes something like ADD? Proportionally too little storage gives you the absent-minded professor in the extreme, continually rediscovering what it forgot yesterday. I think there is too much faith that intelligence in general, or AI specifically, must be sane and always develops out of the basic requirements, because of course AI researchers are sane and their intelligence more or less developed out of their own basic biological abilities (as opposed to the developers becoming crazy couch-potato Fox-News-watching zombies).

    Then too, it's useless to create average-brain-level AIs, even if they think really fast, even if there is a large group. All you'll get is MySpace pages, but faster. Telling an average bus full of average people to think real hard, for a real long time, will not earn a Nobel Prize, any more than telling a bus full of women to make a baby in only two weeks will work. Clearly, giving high-school dropouts a bunch of meth to make them "faster" doesn't make them much smarter. Clearly, placing a homeless person in a library doesn't make them smart. Without cultural support, science doesn't happen; and is the culture of one AI computer more like a university, or more like an inner city?

    It's not much of an extension to tie the AI-vs-super-intelligent-AI competition in with contemporary battles over race and intelligence. Some people have a nearly religious belief that intelligence is an on/off switch, and that individuals or cultures who are outliers above and below are just lucky or a temporary accident of history. Those people, of course, are fools. But they have to be battled through as part of the research-funding process.

  • by maxume ( 22995 ) on Wednesday May 06, 2009 @05:06PM (#27851387)

    There are still like 4 billion people who may want computers and they are going to want them to be cheaper and use less power than today's machines.

  • by Anonymous Coward on Wednesday May 06, 2009 @05:07PM (#27851415)

    Bubbles are exponential. Until they burst.

  • by Animats ( 122034 ) on Wednesday May 06, 2009 @05:21PM (#27851655) Homepage

    This is going to take a while.

    Re-engineering biological systems takes generations to debug, and produces a huge number of dud individuals during the development process. That's fine for tomato R&D, but generating a big supply of failed post-humans is going to be unpopular. Just extending the human lifespan is likely to take generations to debug: it takes a century to find out whether something worked.

    AIs and robots don't have that problem.

    What I suspect is going to happen is that we're going to get good AIs and robots, but they won't be cheaper than people. Suppose that an AI smarter than humans can be built, but it's the size of a server farm. In that case, the form the "singularity" may take is not augmented humans, but augmented corporations. The basic problem with companies is that no one person has the whole picture. But a machine could. If this happens, the machines will be in charge, simply because the machines can communicate and organize better.

  • by geekoid ( 135745 ) <dadinportland AT yahoo DOT com> on Wednesday May 06, 2009 @05:31PM (#27851811) Homepage Journal

    He abuses the hockey stick phenomenon.

    He also overlooks many, many practical matters.

    The man hasn't done jack in over 20 years.
    "Futurist" is another word for "has-been."

    "it's that transistors will be old news in 20 years"
    No, they won't. Do you even know what a transistor is?
    Also gone in 20 years: resistors and capacitors! weeee

  • life is almost exactly like it was 40 years ago.

    That's because humans are still humans, not because technology hasn't evolved at a rapid pace. Sure, cars still drive you from A to B, television still shows you the daily news, and newspapers haven't really changed in a while; but on the other side, I can buy for 100 bucks a device that can store two years of non-stop, 24/7 music, more music than I will likely ever listen to in my entire lifetime or be able to buy legally. For as little as ten bucks I can buy a fingernail-sized storage device that can store all the software ever released for the NES, C64, Atari ST and Amiga combined. With the right phone you can live-stream video to the Internet today; combine that with a big HD and you can start recording your complete life, 24/7. On Google Earth I can see my house, and soon I'll be able to virtually drive by it. Millions of people waste gazillions of hours in a virtual world like WoW or Second Life. And another million people have written an online encyclopedia.

    Not impressive enough? Well, there are certainly things that haven't changed much. Programming computers still feels rather low-tech. Lisp is 50 years old, and yet programming languages still haven't really surpassed it in a significant way; GUIs haven't changed all that much either. And there's no direct brain input in sight; we still have to read and watch information to consume it. But that doesn't stop the progress in other areas from being pretty gigantic.

    people don't know their neighbors and can't let their kids wander the neighborhood.

    That's the result of the real world becoming more and more replaced by a virtual one. When you have a mobile and can phone all your friends anytime you want, there just isn't much need to talk to your neighbor anymore.

  • by Logic and Reason ( 952833 ) on Wednesday May 06, 2009 @05:33PM (#27851839)

    We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.

    Hmm, that sounds awfully familiar. Now where have I heard such claims before?

    ...machines will be capable, within twenty years, of doing any work a man can do.

    -Herbert Simon, 1965

    Within a generation... the problem of creating 'artificial intelligence' will substantially be solved.

    -Marvin Minsky, 1967

    Would you be willing to bet, say, an ounce of gold on your prediction?

  • "..begins with 'wishful thinking'."

    Yes, but so do all of humanity's crappy ideas.
    That doesn't mean it can happen.

    For example: All the wishful thinking in the world won't make homeopathy work.

  • by smallfries ( 601545 ) on Wednesday May 06, 2009 @05:43PM (#27851979) Homepage

    Yes, but his argument is still flawed even if you refine it slightly. There are many problems with his assumption, but even one is enough to derail it:

    Assume that each previous advance multiplies the amount of result for a given effort. You only get accelerating returns when the growth in required effort is below a critical threshold. For certain previous advances, and certain successive problems this has been true.

    It does not imply that it always holds, or that it will continue to hold in the future, or even that it holds for any particular problem (a toy model of this threshold follows this comment). "OMG ponies!" doesn't refer to any amount of progress; it refers to a lack of understanding of what a given problem is, and how much effort is required. Perhaps Arthur C. Clarke phrased it better when he called it magic.
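
    A toy model of that threshold, purely a sketch of the argument: capability grows by a factor g per advance, while the effort required for the next advance grows by a factor h, and returns accelerate only while g exceeds h (Python):

        # Result-per-effort across successive advances. g > h: accelerating
        # returns; g < h: each advance buys less than the one before.
        def returns(g, h, steps=5):
            c, e = 1.0, 1.0
            out = []
            for _ in range(steps):
                out.append(round(c / e, 3))
                c, e = c * g, e * h
            return out

        print(returns(g=2.0, h=1.5))  # [1.0, 1.333, 1.778, 2.37, 3.16] accelerating
        print(returns(g=1.2, h=1.5))  # [1.0, 0.8, 0.64, 0.512, 0.41] diminishing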

  • by Weezul ( 52464 ) on Wednesday May 06, 2009 @05:49PM (#27852059)

    I think we'd know by now if another technology were going to supplant the transistor within 10 years. Indeed, our progress may slow as we approach this limit; i.e., Moore's law will slow down, and 2018 is too soon. Evolution frequently just stops within a domain, like how marsupials just can't evolve flippers. But that doesn't mean evolution stops overall.

    We have massive room for progress in numerous disciplines:

    1) language & compiler design -- You can buy 10x performance improvements by rewriting your OS & libraries in structured or object-oriented self-modifying code; Henry Massalin's Synthesis kernel proved this [wikipedia.org]. You can also rewrite all the other heavy apps using this hypothetical language. (A small sketch of the runtime-specialization trick follows this list.)

    2) algorithms -- You can always just train more scientists and mathematicians to write more & better parallel algorithms. You may also fold these advancements back into compiler design for high level language compilers, like say Haskell.

    3) subsidies redirection -- You can redirect government subsidies towards helping young but solid technologies catch up, underwriting half the cost of optical fabs, for example. How much money gets wasted on farm subsidies now?

    4) smarter people -- You can try making smarter people through genetic engineering, pharmacology, and even research into education.

    5) augmented people -- You can definitely augment people to improve specific tasks. If you augment children, you might change even more, like their will to do science.

    6) clustered people -- You can make neurologically linked "people clusters" who think together towards some common goal, enabling you to solve harder math & science problems.
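
    For item 1, a loose Python analogue of the Synthesis-kernel idea cited above, generating code specialized to known parameters at runtime; the real kernel did this with machine code, so this shows only the spirit of the trick, not its implementation:

        # Runtime specialization: 'compile' a constant into a fresh function once
        # instead of re-reading it on every call.
        def make_scaler(factor):
            namespace = {}
            exec(f"def scale(x): return x * {factor}", namespace)
            return namespace["scale"]

        scale3 = make_scaler(3)
        print(scale3(14))  # 42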

  • by Anonymous Coward on Wednesday May 06, 2009 @06:03PM (#27852207)

    But only, ironically, because you've been programmed to. Machines need not be so programmed. (Or more precisely, we'll be able to work that out of them while they're still much dumber than us.)

  • by GreatAntibob ( 1549139 ) on Wednesday May 06, 2009 @06:17PM (#27852369)
    Sure, and that's precisely the point - there's no testing.

    I can shoot off all sorts of hypotheticals about why it won't work, just like other folks can shoot off other hypotheticals about why it will work. That's the point I was making, albeit poorly. Until there's more hard science, extrapolating trends is a non-starter.

    50 years ago, personal comm devices weren't even on the horizon and the next "big" thing was space colonization. We haven't colonized space, but we have more information-processing ability than ever before. I'd rather set up the infrastructure to take advantage of the technology that *does* develop than the ones we *think* or would *prefer* to happen.
  • Ask-A-Nerd, NOT (Score:3, Insightful)

    by Tablizer ( 95088 ) on Wednesday May 06, 2009 @06:19PM (#27852383) Journal

    'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'

    This is why you *don't* let nerds make political decisions. We can't resist making new gizmos, even if they eat humanity. It's like letting B. Clinton pick interns.
               

  • by ElectricTurtle ( 1171201 ) on Wednesday May 06, 2009 @06:25PM (#27852431)
    I would argue that if reincarnation is real, it underscores, not undermines, the possibility of transferring consciousness. If the natural/supernatural world does it already, then doing it artificially may again just be a matter of process.
  • by Thiez ( 1281866 ) on Wednesday May 06, 2009 @06:32PM (#27852517)

    > If we were able to bring back a Neanderthal and he grew up in the lab interacting with scientists and a surrogate mother who would, of course, still be a human being, we'd probably appear more god-like than as simple father and mother figures. We have mysterious magic machines whose workings would be beyond him, move in mysterious ways.

    Huh? You're not making any sense now. People a thousand years ago would find our machines magical too, but if we were to clone one of those people and raise them like a normal person in our time, there is no reason why such a person wouldn't accept (and understand) technology like everybody else does. Likewise, although your hypothetical Neanderthal may have below-average intelligence, there is no reason to believe he would worship our technology any more than a person with Down syndrome would. If we assume he'd merely have below-average intelligence without being retarded, the cloned Neanderthal would probably own an iPod and enjoy it very much, even though he could never understand how it works (just like most humans).

    How you view technology has to do with your culture, not with the time period your DNA comes from.

  • by khallow ( 566160 ) on Wednesday May 06, 2009 @06:36PM (#27852557)

    The waking state is so inefficient from a reproductive and safety perspective that it's mind-boggling.

    Consider this question. How long would you live in the wild, if you never woke up?

  • by 4D6963 ( 933028 ) on Wednesday May 06, 2009 @06:53PM (#27852753)

    Indeed, the sad thing is (well, yet another of those sad things), you can't hear about the Singularity without hearing about Kurzweil, you can't hear about Strong AI (which may or may not be possible, what do we know?) without hearing of the Singularity, and you can't discuss AI without strong AI popping up.

    So at the centre of this entire field of research you have that guy and his crazy ideas hogging all the attention, and I'm afraid that he's only going to bring discredit to the discipline, just like any other discipline that has crackpots as figureheads, and that's no good.

  • by Xaedalus ( 1192463 ) <Xaedalys.yahoo@com> on Wednesday May 06, 2009 @07:17PM (#27853021)
    Kurzweil's theory and predictions are predicated on the idea that we have no soul, that we are essentially very complicated biological machines with the illusion of sentience. If he is correct, then you are correct: it will be technologically feasible someday to upload ourselves. If on the other hand we DO have a soul, then all his predictions go out the window and a whole NEW slew of problems arises. Such as: how exactly did we end up with an indestructible self-aware essence that defies the laws of thermodynamics? And... what exactly created it? The way I look at it, the entire history of mankind can be boiled down to the dualistic philosophical question: do we have a soul or not? If we do not have souls, then the universe is a harsh, dark mistress, there is no God, and all we see is all there really is. If, however, we do have souls, then boy do we have problems, because we open up the door to the distinct possibility of a deity, or deities, and to the idea that our actions do matter because there is an afterlife. And (this is really scary) there might really be entities like Cthulhu out there in the void. That's IF we have souls, though (defining a soul as an indestructible self-aware essence that defies the laws of thermodynamics). Given that, I can see why people would prefer to believe that we are machines and that we should work on uploading ourselves as intelligent programs. There's nothing in the dark we'd have to fear save ourselves then.
  • by Chris Burke ( 6130 ) on Wednesday May 06, 2009 @07:57PM (#27853481) Homepage

    The difference between the religious rapture and the techno-rapture is that the means of making the latter happen lie within our grasp... We have the technology, we have the knowledge; what we lack is the wisdom.

    No, they aren't in our grasp; they aren't even close to being in our grasp. They're no more in our grasp than transmutation of lead into gold was within the grasp of alchemists -- we can describe conceptually what we would like to happen (we mix chemicals, lead turns to gold; we download our minds into a machine, get rid of our bodies), but we can't say how it actually would work. Forget the technological problems involved: even if we could solve every technological hurdle instantly, we still couldn't do it, because we can't even say what it is we need; we don't even know what makes a mind a mind. Forget wisdom, we aren't even close to having the knowledge.

    The poster who compares it with 1950's futurist utopianism is exactly right. We could have had the future depicted in 2001, we could have an end to world hunger, an end to disease, and if not an end to death then a comfortably long delay in its arrival. The problem is that we're still very human at heart and humans are not that far removed from the trees. We are selfish, grasping, petty animals and those few acts of sublime virtue from the best of us simply serve to make the rest of us look all the worse.

    We could end world hunger, because we produce enough food to feed everyone, and in that case the issues are merely political. There's no mystery, no hypothetical unnamed technological advance needed. Just the ability to get the food over here to the hungry person over there.

    We have conquered many, many diseases, and have what anyone from more than a century ago would call a comfortably long delay in death's arrival. But on the other hand, this is mostly in pushing up the average, not extending the maximum. Whatever it is that is necessary to get humans to reliably live to 120 or more, we simply don't know yet.

    We could have some aspects of the world of 2001, like a manned mission to Jupiter's moons if we really wanted to, but not others, like HAL. Why? Because despite many, many people working on the problem we still have no idea how to make HAL. It's not a matter of lacking the technology, we lack the conceptual understanding of what we're trying to accomplish. And throwing more people at the task wouldn't necessarily solve that. There's lots of interesting work in the Strong AI field, and maybe we'll make the necessary unknown breakthrough. Maybe we won't.

    So yeah, it's exactly like 1950s futurist utopianism in that it is highly speculative, and makes wild guesses about what unknown and unknowable advances will be made, some of which will end up coming to pass, others will end up being complete wishful thinking, and others will end up somewhere in between.

    Look, I get Kurzweil's basic idea. Major paradigm-shifting advances, things the people beforehand couldn't have even conceived of, keep coming faster and faster. If this trend continues... aaaayyyyy!

    That's all well and good, but the thing about these advances the people beforehand couldn't have even conceived of is that you don't get to pick which ones are feasible and will happen. That's kinda the nature of the inconceivable. Whatever the future brings, it could be completely different than what you think, and it could end up that what you wish for the future is impossible, but other things beyond your imagination come to pass.

    Look at the alchemists again. It turns out, thanks to advances they could not have conceived of, that transmutation of lead into gold is possible, just so ridiculously infeasible that you'd never actually do it. But would that hypothetical, unknowable future have justified an ancient Greek alchemist saying that transmutation was "in his grasp"? Not even. And on the other hand, alchemists were also looking for the Elixir of Life, and we're still waiting on that one.

  • by Chris Burke ( 6130 ) on Wednesday May 06, 2009 @08:28PM (#27853787) Homepage

    Fundamentally though, my vision was correct.

    Yes, but 20 years ago a computer network was not a hypothetical then-impossible idea. Before the first computer network existed, people understood what technological barriers they would have to overcome to create one, and they already knew how to split a task into multiple parts on separate processing units. It was an engineering problem. It was the engineering problem that your professor was stuck on. Call me when the major obstacle to any of these Futurist predictions is the amount of effort required, not that we fundamentally have no idea how to accomplish the task.

    When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

    Well I'm not one to say something is impossible, and I am one to listen to an elderly scientist stating that something is possible when they have a scientific reason to think that particular thing is possible. On the other hand, I am also one to scoff dismissively when a Futurist says that something we currently don't have any clue how to do will surely happen because things are happening faster and faster. That's not a scientific reason. Some previously impossible things are now possible. That does not mean that Arbitrary Impossible Thing X will become possible.

  • by VoidEngineer ( 633446 ) on Wednesday May 06, 2009 @09:15PM (#27854219)
    Personally, I think that software consciousness will turn out to be quite easy in hindsight,

    Agreed. It's going to be an "everything-and-the-kitchen-sink" kind of problem. Put enough of the right systems together, and it will emerge more or less on its own.

    The problem isn't going to be creating an artificial intelligence. The problem is going to be making it an autonomous agent that can be socially integrated into society. Think how long it takes to raise a kid: teaching the kid language, potty training, kindergarten, social skills, job skills, etc. You need to do all of that training with an AI, but it won't necessarily have a body it can move around in and use to interact with other people. The first AIs are going to be alien to our experience, unless they're purpose-built in android-type shells.

    I suspect that in 50 years, we'll look back and say 'oh, yeah... the first AIs were waking up 30 years ago, but it took us another 10 years to recognize them for what they were'.
