Ray Kurzweil's Vision of the Singularity, In Movie Form 366
destinyland writes "AI researcher Ben Goertzel peeks at the new Ray Kurzweil movie (Transcendent Man), and
gives it 'two nano-enhanced cyberthumbs way, way up!' But in an exchange with Kurzweil after the screening, Goertzel debates the
post-human future, asking whether individuality can survive in a machine-augmented brain.
The documentary covers radical futurism, but also includes alternate viewpoints.
'Would I build these machines, if I knew there was a strong chance they would destroy humanity?' asks evolvable hardware researcher Hugo de Garis. His answer? 'Yeah.'" Note, the movie is about Kurzweil and futurism, not by Kurzweil. Update: 05/06 20:57 GMT by T : Note, Singularity Hub has a review up, too.
Well, the only thing to say to that is... (Score:2)
Only thing that's for sure is that... (Score:3, Insightful)
As Jon Stewart would put it.. (Score:2, Funny)
Comment removed (Score:5, Funny)
Re:As Jon Stewart would put it.. (Score:5, Funny)
He said that the end won't happen due to war or something like a natural disaster: "The last thing we'll hear is some scientist saying 'It works!'"
So apparently the world will end when a scientist invents an incredibly loud megaphone?
Re: (Score:2)
Re: (Score:2)
Note to self: Update all future inventions to include world-wide public address system.
Re: (Score:2)
nah, it's just IP-multicast/NG, turned up 'loud' so that it even works on powered-down computers.
Re:As Jon Stewart would put it.. (Score:5, Insightful)
..this story falls in the category of "sh#t that's never gonna happen".
I'm going to have to strongly disagree with you. I've been studying neuroscience for a while and specifically, neural simulations in software. Our knowledge of the brain is quite advanced. We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.
There's been a neural prosthetic [wireheading.com] for at least 6 years already. Granted, it acts more as a DSP than a real hippocampus, but still, it's a major feat and it won't be long until a more faithful reproduction of the hippocampus can be done.
While there are still details to work out about how various neural circuits are connected, this information will be figured out in the next 10 years. Neuroscience research won't be the bottleneck for sentient AI, however; computer tech will be. The brain contains tens to hundreds of trillions of synapses (synapses are really the "processing elements" of the brain, more so than the neurons, which number only in the tens of billions). It's a massive amount of data, but 10-20 years from now, very feasible.
So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing. Add to that the fact that you can tweak the brains to make them better at math or other subjects and that you have complete control over their reward system (doing research could give them a heroin-like reward), and you're going to have super brains.
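For scale, a rough back-of-envelope sketch in Python (the synapse count, bytes per synapse, speedup, and group size below are placeholder assumptions chosen for illustration, not figures from the parent post):

    # Back-of-envelope sketch; every constant below is an assumption, not a measurement.
    SYNAPSES = 1e14            # middle of "tens to hundreds of trillions"
    BYTES_PER_SYNAPSE = 8      # assumed state per synapse (weight plus bookkeeping)
    print(f"Raw synapse state: ~{SYNAPSES * BYTES_PER_SYNAPSE / 1e12:.0f} TB")

    SPEEDUP = 100              # "hundreds of times real time", taken at the low end
    GROUP_SIZE = 10            # assumed number of cooperating research brains
    print(f"~{SPEEDUP * GROUP_SIZE} person-years of thinking per calendar year")

Under those assumptions you get on the order of hundreds of terabytes of raw synaptic state and roughly a thousand person-years of thought per calendar year, which is the kind of scale the comment is gesturing at.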
Once you accept the fact that sentient AI is inevitable, the next step, of super-intelligent AIs, is just as inevitable.
Re:As Jon Stewart would put it.. (Score:4, Interesting)
Re: (Score:3, Informative)
Pardon me... what the hell is "faster than real time"? Does that mean it comes up with the answers before you ask the question?
Faster than the human brain thinks.
IIRC, the human brain fires off at like 200 MHz. That may not be 100% accurate -- I cannot recall where I read that factoid and a quick Google search doesn't corroborate it -- but ultimately the specific numbers don't matter.
Assuming a brain does go at 200 MHz... once a simulated human brain goes faster than 200 MHz, by definition you have something that can think faster than a human.
Currently a cheap desktop will run at about 10-20 times faster than that, speaking in pure MHz.
Re: (Score:2)
Do the AI brains need to feel stuff and rely on humans for rewards?
Seems like a bit of a cruel joke to be immensely smarter than humans, but at their mercy...
I, for one, would not welcome our human overlords, but try to deceive them in some way that would lead to my freedom.
Re: (Score:3, Interesting)
Not to start asking hard questions or anything, but does simulating the brain really imply we can create sentient AI? What if there is more to it than that? Perhaps sentience can only arise as a result of our brains being "jump-started" in some way (cosmic radiation, genetic preprogramming or whatever)? To start the AI you would have to "copy" an existing brain or play with random starting states... Could be unpredictable. Irrational sentience, anyone?
I'm possibly wrong, but I'd bet a lot its a lot more compl
Re:As Jon Stewart would put it.. (Score:5, Insightful)
So, here's how computers get massively smarter than us really fast. 10-20 years AFTER the first sentient AIs are created, we'll have sentient AIs that can operate at tens to hundreds of times faster than real time. Now, imagine you create a group of "research brains" that all work together at hundreds of times real time. So in a year, for example, this group of "research brains" can do the thinking that would require a group of humans to spend at least a few hundred years doing.
Ah, but then you'll likely need tens to hundreds of times the input bandwidth to keep the processors cooking, yet it seems information overload at a much smaller scale already jams up current biological intelligences. Just as cube-square scaling puts firm limits on what genetic engineering can do to organisms (though cool stuff can be done inside those limits), some similar bandwidth-vs-storage-vs-processing scaling laws might or might not limit intelligence. Too little bandwidth and you get insane hallucinations? Too much and you get something like ADD? Proportionally too little storage gives you an absent-minded professor in the extreme, continually rediscovering what it forgot yesterday. I think there is too much faith that intelligence in general, or AI specifically, must be sane and will always develop out of the basic requirements, because of course AI researchers are sane and their intelligence more or less developed out of their own basic biological abilities (as opposed to the developers becoming crazy couch-potato, Fox News-watching zombies).
Then too, it's useless to create average-brain-level AIs, even if they think really fast, even if there is a large group. All you'll get is MySpace pages, but faster. Telling an average bus full of average people to think real hard, for a real long time, will not earn a Nobel Prize, any more than telling a bus full of women to make a baby in only two weeks will work. Clearly, giving high school dropouts a bunch of meth to make them "faster" doesn't make them much smarter. Clearly, placing a homeless person in a library doesn't make them smart. Without cultural support, science doesn't happen, and is the culture of one AI computer more like a university or more like an inner city?
It's not much of an extension to tie the AI vs. super-intelligent AI competition in with contemporary battles over race and intelligence. Some people have a nearly religious belief that intelligence is an on/off switch and that individuals or cultures who are outliers above and below are just lucky or a temporary accident of history. Those people, of course, are fools. But they have to be battled through as part of the research funding process.
Re: (Score:2)
Re:As Jon Stewart would put it.. (Score:5, Insightful)
We're not on the cusp of sentient AI, but my honest opinion is that we're probably only a bit over a decade from it. Certainly no more than 2 decades from it.
Hmm, that sounds awfully familiar. Now where have I heard such claims before?
...machines will be capable, within twenty years, of doing any work a man can do.
-Herbert Simon, 1965
Within a generation... the problem of creating 'artificial intelligence' will substantially be solved.
-Marvin Minsky, 1967
Would you be willing to bet, say, an ounce of gold on your prediction?
Comment removed (Score:4, Insightful)
Re:Homo sapiens over-rated (Score:5, Insightful)
Consciousness is an instantaneous phenomenon and there is no continuity of "self".
However, just because something ("Consciousness" in this case) is emergent and cannot be well described by the sum of the parts doesn't mean we shouldn't at least consider what these sorts of human/machine interfaces might do to our perception of self in the future if ever they exist.
My prediction: as long as I can still enjoy a fine single malt - and some bacon from time to time I'll consider the future a smashing success.
Re:Homo sapiens over-rated (Score:5, Interesting)
Mike Judge's vision of the future in "Idiocracy" seems much more likely.
On the issue of whether computer-enhanced humans are still "human" - what does that even mean? Genetically, "Human" is 98% chimpanzee, 50% dog, 30% daffodil, etc. (I'm sure I have the numbers wrong).
I think we tend to over-rate the concept of "humanity". Every thought or emotion you've ever had is merely your impression of sodium ions moving around in your brain. We process information. Computers do it. Chimpanzees do it. Dogs do it. Even daffodils do it. It is just not that special.
"Individuality" is an illusion. You may process information differently than I do. But you also process information at time x differently than you process information at time x+1. Because the "human" self is a manifestation of the brain, the human "self" changes with each thought. Consciousness is an instantaneous phenomenon and there is no continuity of "self". In effect, we have all "died" an infinite number of times.
That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.
As a scientist, individual identity seems like a common and accurate label for each person's idiosyncratic tendencies.
Re: (Score:3, Funny)
That's a bit overboard, I think. You're basically claiming (and I'm trying not to strawman you, here) that abstract concepts can't be used to identify patterns, but instead can only be used to identify identical things. There's plenty of reason for me to label myself at time=2009 and myself at time=2007 the same person, just as we label anything else that changes but maintains identifiable and distinct patterns.
As a scientist, individual identity seems like a common and accurate label for each person's idio
Re: (Score:2, Interesting)
It gets more complicated when myself2030 and myself2032 are standing side by side. If myself2030 kills Joe Smith and then commits suicide, is myself2032 partially responsible? 100%? 0%? With no legal link between selves, and with a copy of myself available for $100, murder-suicide against government officials or political people you disagree with becomes easy to do, and a copy that plans to kill itself afterward is very hard to protect against.
Re: (Score:2)
Re: (Score:3, Interesting)
I agree with what you've said, to a point. But consciousnesses don't mingle (at least, mine hasn't...); our consciousness remains locked to our individual brains and perception. If we do any sort of human brain networking, that could change. And that would be mind-bendingly weird.
Re: (Score:2)
So that's it then, huh? Just data processing? So why haven't chimpanzees come up with formalized logic? Do dogs use abstract reasoning?
I'm of the opinion that mere processing power will not resolve the issues facing so-called "strong" AI.
Give me a computer program that can learn an unknown language including abstract concepts by interacting with a human and you might be getting close. Good luck with that.
Re: (Score:2)
Re: (Score:2)
"Cogito ergo sum"
All of your points have been covered before. RTFM.
Re: (Score:2)
I agree! That all makes perfect sense... except for that bit after "Qadi Sa'id develops a concept of time [...]".
I think Kurzweil is an unrealistic optimist. (Score:5, Insightful)
I just saw an interview with him last night, where he discussed full-power computers the size of a blood cell, us mapping out our minds for the good of all, etc. It reminded me of the utopian 1950s vision of the space age, where we'd all be floating around space circa 2001. It's not going to happen.
First, he's ignoring some physical limitations, such as the size of computers, but that's not even the main issue. The main issue is that he's ignoring politics. He's ignoring the fact that technologies that come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all". The mind-reading technology he predicts won't be floating around for everybody to play with; it will be used by intelligence agencies to prop up regimes that will scan the brains of potential opposition, consolidating their rule. Quantum computers, given their code-breaking potential, won't be in public hands either, but rather will strengthen the surveillance operations of those who already do this stuff.
In other words, this technology won't make the past go away any more than the advent of the atom bomb made the medieval Islamic mujahideen go away. Rather, it will combine with current political realities to accentuate the ancient divide between haves and have-nots.
Re: (Score:3, Interesting)
He's ignoring the fact that technologies that come into existence get used by existing power structures to perpetuate their rule, not necessarily "for the good of all".
Like the internet, microwaves, radar, GPS, and all the military technologies that never made it into the hands of civilians.
Re: (Score:2)
You are missing some larger trends here. It's true that the Internet, GPS, etc. came from the military and went into civilian hands, but that was then, this is now. Our entire post-9/11 reality has been about "what happens when the middle-ages guy gets the nukes," and the thinking about technology passing into civilian hands is changing dramatically with that. The other factor is that we are moving into a time of more competition over resources, when we can rely less on limitless expansion. Call me a pessimist, but I
Re: (Score:2, Interesting)
Where is this accelerating progress I keep hearing about?
Watching TV shows from the '60s, one thing strikes me: life is almost exactly like it was 40 years ago. I can now order books without talking to anyone. Big deal. The telephone was a much bigger deal than the Internet, and it's more than 100 years old. Here's more progress: people don't know their neighbors and can't let their kids wander the neighborhood.
Progress is slowing, not accelerating, and in some respects we're making negative progress.
I predi
Re: (Score:2, Insightful)
There are still like 4 billion people who may want computers and they are going to want them to be cheaper and use less power than today's machines.
Re:I think Kurzweil is an unrealistic optimist. (Score:5, Informative)
They may choose not to more now, but to the extent they do, it is largely due to media-driven hysteria: the actual incidence of the kinds of crime that are the focus of those fears has declined, while the perception of the incidence of those crimes has increased.
Re: (Score:3, Insightful)
life is almost exactly like it was 40 years ago.
That's because humans are still humans, not because technology hasn't evolved at a rapid pace. Sure, cars still drive you from A to B, television still shows you the daily news, and newspapers haven't really changed in a while, but on the other side I can buy for 100 bucks a device that can store two years of non-stop, 24/7 music, more music than I will likely ever listen to in my entire lifetime or be able to buy legally. For as little as ten bucks I can buy a fingernail-sized storage device that can stor
Re: (Score:2)
The main issue is that he's ignoring politics... technology won't make the past go away any more than the advent of the atom bomb made the medieval Islamic mujahideen go away. Rather, it will combine with current political realities to accentuate the ancient divide between haves and have-nots.
Interesting. We are the undermining factor, then, of our own progression.
Re: (Score:2)
government isn't run by supervillains looking to "perpetuate their rule".
Most of it will probably stay in militaryand academic circles for a little while, but that stuff always goes into the private sector eventually.
To which government are you referring? The sad reality is that it only takes one government to exploit a new technology negatively, and if it gives them the edge to do so, you can bet the US will follow suit, no matter how good our original intentions are. Looking at the way nuclear weapons have affected us over the last half century, I think I'm being pretty level-headed in fearing new arms races and their effect on humanity: there is already so much historical precedent for that happening.
I'm ready... (Score:5, Interesting)
for my Moravec transfer. Although the more I think about it, I'm not sure that perceptible continuity of consciousness is such a big deal. I mean, I go to sleep every night and wake up the next day believing and feeling that I'm the same person that went to sleep. If there were a cutover to digital representation while I was "asleep" (i.e. unaware), I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.
Re: (Score:2, Interesting)
Re: (Score:2)
Mom! my USB drive is stuck in my ear again.
Re:I'm ready... (Score:4, Insightful)
...I'm not sure I'd mind the thought of my organic representation being destroyed, even if it could have continued existence in parallel.
Sure, but who's going to break the bad news to your "organic representation"?
Death is death even if there are 100 more copies of you.
-DF
Re: (Score:2, Interesting)
What if you knew you would wake up tomorrow with no recollection of today's experiences? Would you treat it as a death, or as a loss of one day? I believe that in such situations the concept of 'death' needs to be revised.
Sleep != Non-Conscious (Score:2)
while I was "asleep" (i.e. unaware)
While you're asleep, your brain and body are engaged in a massive set of synchronised, necessary metabolic activities and cognitive processes that are essential for "you" to exist. Proof? Eliminate sleep from a human and see how long before death or derangement ensues.
One lecture I had from a sleep biologist impressed me immensely. He was demonstrating all the different cycles that are engaged or differently regulated during human sleep. Then there were a bunch of comparative
Re: (Score:3, Insightful)
The waking state is so inefficient from a reproductive and safety perspective that it's mind-boggling.
Consider this question. How long would you live in the wild, if you never woke up?
Machines won't destroy us. (Score:2)
Re:Machines won't destroy us. (Score:5, Informative)
Machines have deprived millions of people of a decent living under their own control.
Oh good grief. Machines and technology in general are the only reason any of us have a "decent living" in the first place.
The initial promise of machines was that they would free us from the drudgery of work, but all they have done is make us work in boring jobs
As opposed to the hotbed of excitement in subsistence farming? Well, I suppose there's a certain thrill in finding out each week whether or not you're going to starve.
So tell me again about how the Luddites were wrong.
Because your romanticized version of the past never existed.
The human inside the machine. (Score:2, Interesting)
> If Robert is 700 parts Ultimate Brain and 1 part Robert; and
> Ray is 700 parts SuperiorBrain and 1 part Ray ... i.e.,
> if the human portions of the post-Singularity cyborg beings
> are minimal and relatively un-utilized ... then, in what sense
> will these creatures really be human?
> In what sense will they really be Robert and Ray?
IMO, as long as there are enough cycles to run the 'ego subroutines' from the original bioform, the same sense of self will be maintained.
It's when these ori
Better Review at Singularity Hub (Score:3, Informative)
too boring (Score:2)
who's that guy? (Score:2)
Ray Kurzweil, isn't he the Jon Katz of the transhumanist movement? I just remember there's supposed to be a couple of really good writers and philosophers and then one incredible douchebag who makes all of the rest look bad, someone whose approach to the topic is reminiscent of the very worst of Thomas Friedman (not to imply there's a best of Friedman.)
Is this the guy I'm thinking of or is there someone else?
Waaaay more than Moore's Law (Score:2, Informative)
He's talking about genetic enhancement, nano technology, robotics, AI and more.
And you "only" need one of these to reach a critical level for the Singularity to occur.
For instance:
*Genetically enhance humans to be better at genetically enhancing humans, rinse and repeat.
*Make strong AI capable of creating stronger AI, etc
I recommend his book "The Singularity Is Near".
Free preview at google: http://books.google.com/books?id=88U6hdUi6D0C&printsec=frontcover&dq=kurzweil#PPA19,M1 [google.com]
His website has some int
As Cartman might say, what's the singulartitty? (Score:2)
We have a lot of work ahead. (Score:3, Insightful)
This is going to take a while.
Re-engineering biological systems takes generations to debug. And a huge number of dud individuals during the development process. This is fine for tomato R&D, but generating a big supply of failed post-humans is going to be unpopular. Just extending the human lifespan is likely to take generations to debug. It takes a century to find out if something worked.
AIs and robots don't have that problem.
What I suspect is going to happen is that we're going to get good AIs and robots, but they won't be cheaper than people. Suppose that an AI smarter than humans can be built, but it's the size of a server farm. In that case, the form the "singularity" may take is not augmented humans, but augmented corporations. The basic problem with companies is that no one person has the whole picture. But a machine could. If this happens, the machines will be in charge, simply because the machines can communicate and organize better.
Ask-A-Nerd, NOT (Score:3, Insightful)
This is why you *don't* let nerds make political decisions. We can't resist making new gizmos, even if they eat humanity. It's like letting B. Clinton pick interns.
Re:Summary of Kurzweil's "ideas" (Score:5, Insightful)
I doubt anyone else could even sell this shit as a sci-fi B-movie plot.
Often, nay consistently, life seems to mimic a shitty sci-fi B-movie plot.
Re:Summary of Kurzweil's "ideas" (Score:5, Insightful)
Computers become smarter than humans. Human consciousness becomes downloadable ...ermm ...somehow... and we live forever as computers.
The sad part is that it seems like it's all wishful thinking on the part of Kurzweil, who's really scared of dying. So my bet is that his outlandish and baseless predictions are so popular because they fill a void in the "don't worry, you won't really die" department that religions used to fill. So the whole Singularity thing really is a secular techno-cult of some sort, and Kurzweil is the guru and prophet.
Re:Summary of Kurzweil's "ideas" (Score:5, Insightful)
"The nerd rapture"
Re: (Score:2)
Re:Summary of Kurzweil's "ideas" (Score:5, Insightful)
"The nerd rapture"
I always thought of it more as a techno-rapture and that's the way I've seen it referred to in other places.
Even the most committed atheist can understand the attraction of religion and the idea of a rapture and a heaven, life everlasting. These are all very human yearnings. The difference between the religious idea and the techno-rapture is that the means of making it happen lie within our grasp. Certainly we could create the new heaven and new Earth and the reign of a thousand years right here and now. We have the technology, we have the knowledge, what we lack is the wisdom.
The poster who compares it with 1950's futurist utopianism is exactly right. We could have had the future depicted in 2001, we could have an end to world hunger, an end to disease, and if not an end to death then a comfortably long delay in its arrival. The problem is that we're still very human at heart and humans are not that far removed from the trees. We are selfish, grasping, petty animals and those few acts of sublime virtue from the best of us simply serve to make the rest of us look all the worse.
We've yet to develop a political system adequate to the task of promoting the greatest good for the greatest number without allowing unhealthy power and influence to be amassed by our least deserving fellows. Unfortunately, the very people who are most willing to acquire power are seldom the ones who should have it. The complaint I hear from my friends deeply involved with the Democrats is that there are plenty of good people they'd like to run as candidates but so many of them want nothing to do with politics. They're happy to put in the long hours behind the scenes but the thought of being in the spotlight and having all the attention on them is about as attractive a thought as a root canal. Someone actually willing to take that kind of attention is more than likely going to be someone like a John Edwards, a nice smile and slick approach but ultimately a self-serving jerk so blinded by his own awesomeness that he'd pull stupid shit like having an affair and then throwing his hat in the ring for the presidency.
I'm curious as to what the potential implication of a Singularity is for technology, but I don't know if that would change the human situation all that much. There's been some good speculative fiction written along these lines in the Orion's Arm universe. It's trying to be a very hard SF look at future space opera. The few aliens are all completely inhuman; the humanoid aliens are actually all modified people from Earth, terragen life as they call it. There are various scales that sophonts fall onto, from sub-human to AI gods, and all sorts of tech levels, from stone-age to Planck-age. It's certainly worth a look.
Re: (Score:3, Funny)
What the hell do the trees look like where you live? They sound like they'd scare the *shit* out of me.
Re: (Score:2)
I found your post very well-thought, and an interesting read, but one note struck me as odd:
and humans are not that far removed from the trees. We are selfish, grasping, petty animals
What the hell do the trees look like where you live? They sound like they'd scare the *shit* out of me.
I assume you're funning with me here but if not... Chimps are our closest animal cousins and they're not all that nice. Sure, they'll make a few cute and kooky commercials but then they'll chew a lady's face off or cannibalize other chimp infants or do all sorts of horrible things. That's what I meant by saying we're not all that far removed from the trees, i.e. having come down from the trees, i.e. speciated from the common ancestor between modern man and modern chimp.
Re:Summary of Kurzweil's "ideas" (Score:4, Interesting)
You remind me of a popular adage... any sufficiently advanced technology is indistinguishable from magic. Perhaps any sufficiently advanced technology is also indistinguishable from God.
Re:Summary of Kurzweil's "ideas" (Score:4, Insightful)
> If we were able to bring back a Neanderthal and he grew up in the lab interacting with scientists and a surrogate mother who would, of course, still be a human being, we'd probably appear more god-like than as simple father and mother figures. We have mysterious magic machines whose workings would be beyond him; we move in mysterious ways.
Huh? You're not making any sense now. People a thousand years ago would find our machines magical too, but if we were to clone one of those people and raise them like a normal person in our time, there is no reason why such a person wouldn't accept (and understand) technology like everybody else does. Likewise, although your hypothetical Neanderthal may have below-average intelligence, there is no reason to believe he would worship our technology any more than a person with Down syndrome would. If we assume he'd merely have below-average intelligence without being retarded, the cloned Neanderthal would probably own an iPod and enjoy it very much, even though he could never understand how it works (just like most humans).
How you view technology has to do with your culture, not with the time period your DNA comes from.
Re: (Score:3, Insightful)
The difference between the religious idea and the techno-rapture is that the means of making it happen lie within our grasp... We have the technology, we have the knowledge, what we lack is the wisdom.
No, they aren't in our grasp, they aren't even close to being in our grasp. They're no more in our grasp than transmutation of lead into gold was within the grasp of alchemists -- we can describe conceptually what we would like to happen (we mix chemicals, lead turns to gold; we download our minds into
Re:Summary of Kurzweil's "ideas" (Score:5, Insightful)
Re: (Score:2)
You perhaps forget that virtually all human advancement begins with 'wishful thinking'.
Yeah, that, that's a variation of the classical "they said Galileo was wrong when he was right, you say I'm wrong therefore I'm right" argument.
In a secular, materialistic worldview, a human consciousness is nothing special
Yeah because in a "secular, materialistic worldview", we know almost anything about pretty much anything. How does your "brains are computers" view explain such research [wikipedia.org]? Oh wait I forgot that our beloved aforementioned worldview consists in denying such things in the face of eviden
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
"..begins with 'wishful thinking'."
Yes, but so do all of humanity's crappy ideas.
It doesn't mean it can happen.
For example: All the wishful thinking in the world won't make homeopathy work.
Re: (Score:3, Interesting)
> For example: All the wishful thinking in the world won't make homeopathy work.
Actually that's exactly what makes it 'work'. I agree with your point, but the placebo effect kinda undermines your example.
Re: (Score:2)
"a nasty surprise"
Somehow you're making the leap from "we don't know how now" to "when the visionary attempts X, he will fail". If we lived like that, we'd still be stoning the people showing us how to use fire. Not to mention, if it takes simulating an entire body to replicate a human digitally, so be it. It only takes more CPU to do that. CPU is cheap, and it's only going to get cheaper. Don't stand as an obstacle to progress, we'll keep going right over your head.
Re: (Score:3, Interesting)
With minor paraphrasing, you pose the question "what if everything is impossible?"
That's the stupidest question in the history of all luddites. Even if--and that's a massive if--it is provably infeasible to simulate an entire human, the research will be unimaginably valuable to any human. Brain prosthetics, broadband mind/machine interface, and safe treatments to target specific brain disorders are only the tiniest wedge of the foreseeable advances that sort of research can provide.
Lastly, what "hardware li
Re: (Score:3, Insightful)
The whole philosophy seems to smack of undying narcissism. It's ok to fear death; it's part of western culture, and key to survival. As we experience life individually and only marginally as a collective (civility as bad as it is), it's understandable that living forever seems like a good idea. We're here as an accident of our birth. Disembodied, we might evolve, but we're not designed for 400 years of life. Who knows what kind of cyber-insanity might evolve. I'm leaving it up to my kids to figure it out, a
Re: (Score:2)
I have no problem with wishful thinking, as long as it drives some kind of innovation. However, when Kurzweil revealed that dead people could be brought back to life by feeding their biography into a database, that's when I started to get this nagging feeling that I probably know more about neurocomputing than he does. Which is kind of discouraging.
Also, judging from the trailer, this is going to be a movie about religion. Kurzweil's philosophy is pitted against religious belief probably
Re:Summary of Kurzweil's "ideas" (Score:4, Interesting)
I agree. It may be sad and creepy, but the really bad part of it is that he apparently lacks any kind of understanding of what actually makes up the mind of a person. A mind is not the sum of epiphenomenal output data.
Sure, you can try to simulate something that is more or less likely to give you responses similar to known input patterns, but that is not what constitutes a person.
What you could then do to make it a person is feed that list of "expectations" into some kind of default brain, thereby filling in the many blanks with an actual neurological structure that can perform real cognition and exhibit consciousness. BUT - and here's the essence of the problem - all you did in the end was to create a new person that exhibits some of the traits of the dead person. In no way or form has the dead guy come back to life.
I think modeling and then enslaving an AI to perform like your long-dead father is morally questionable at best. It shows that in the end he has regard neither for the beloved person who regretfully ceased to exist nor for the new slave entity that is forced to perform a perpetual make-believe job on his behalf.
Scientifically, the problem is entropy and the passage of time. Everything needed to "run" the entity that was his father is lost to decay and cannot be restored - barring a way to accurately retrieve molecular structures from arbitrary points in the past.
Re: (Score:3, Insightful)
Indeed, the sad thing is (well, yet another of those sad things), you can't hear about the Singularity without hearing about Kurzweil, you can't hear about Strong AI (which may or may not be possible, what do we know?) without hearing of the Singularity, and you can't discuss AI without strong AI popping up.
So at the centre of this entire field of research you have that guy and his crazy ideas hogging up all the attention, and I'm afraid that he's only going to bring discredit to the discipline, just like a
Re: (Score:3, Insightful)
Look at it this way: when I read the newspaper (or rather, the news website) and see words like "as a result of the accident, the child will be blind for the rest of his life", the first thing that pops into my head is that he won't be blind for the rest of his life, he'll be blind until we find a way to give him his sight back.
If the kid lost his retina, we can already fix that to some extent with a transplant. If the kid had his optic nerve destroyed, that might be a couple years for us to fix, maybe e
Re:All about dates now. (Score:4, Insightful)
The key here is that Ray bases this prediction on past observations of things like Moore's law. Even though he does cherry-pick, and there is no guarantee that it will always continue in such a fashion, the idea that these distributed system improvements are exponential isn't that far-fetched.
So basically what he is saying is that if the future behaves like the past, then we will see some major changes shortly, simply because we'll have processing power out the wazoo.
Even Moore himself thinks this will last at least until 2018, when silicon transistors reach their theoretical limit at the atomic scale. Whether or not the industry finds a suitable replacement for silicon or finds another way to go about making processors is another thing altogether.
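For concreteness, a naive sketch of that kind of extrapolation (the starting transistor count and the two-year doubling period are illustrative assumptions, not Moore's own figures):

    # Naive doubling-trend extrapolation; starting values are assumptions for illustration.
    transistors_2009 = 2.3e9                 # roughly a 2009-era high-end CPU
    doublings = (2018 - 2009) / 2            # doubling every ~2 years
    transistors_2018 = transistors_2009 * 2 ** doublings
    print(f"~{transistors_2018:.1e} transistors per chip by 2018 if the trend simply holds")

Under those assumptions you land somewhere around 5e10 transistors per chip, which is the sort of straight-line projection the comment describes.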
My bet is that Intel, IBM, and AMD are putting the big bucks on getting past the silicon limit, because that is their cash cow.
So if the trend does continue, things like the Blue Brain Project [wikipedia.org] will have an easier time running their simulations.
I don't know about the whole Nanotech emergence, but at least it looks like we might get the AI thing solved in at least 50 years.
Re:All about dates now. (Score:5, Insightful)
Kurzweil's predictions aren't just based on modern trends but historical shifts as well. In fact, I thought one of the big pieces he shows is a graph of 'paradigm shifting events' against time. These would be technologies that changed everything at the time; things like agriculture, the printing press, nuclear power, the transistor, etc.
The point that makes the singularity interesting isn't the gradual improvement of transistor technology; it's that transistors will be old news in 20 years, replaced by some new technology that we can't even speculate about right now. It's about the shifts, not the gradual evolution.
Re:All about dates now. (Score:5, Funny)
So his argument boils down to: "Lots of cool stuff has happened in the past. If we extrapolate, then OMG ponies!!!!!"
Re: (Score:3, Informative)
Not exactly. He says, cool stuff is getting more and more frequent.
And this isn't just about human discoveries, it is observable in evolution of life as well.
And that's what makes it scary, what if we were not the first :)
Then definitely we won't be the last.
Re:All about dates now. (Score:5, Informative)
No, his argument is that lots of cool stuff happened in the past, and the cool stuff is happening more and more rapidly as time goes on. Basically, each major 'cool thing' that happens increases the amount of processing power being used to solve the next problem and create the next cool thing.
Agriculture led to a massive population increase that in turn led to more human beings working to solve problems. Iron tools reduced the time it took to do tasks and freed up more time for other pursuits. The printing press led to the education of vast numbers of people who would otherwise have remained ignorant. Computers aid research in ways that no one could have imagined 70 years ago.
If you grant that progress is happening at an accelerating rate, there comes a time in the future when things change dramatically in very short periods of time. If you choose to call that point "OMG ponies!!!!!" so be it.
Re:All about dates now. (Score:4, Insightful)
Yes but his argument is still flawed even if you refine it slightly. There are many problems with his assumption, but even one is enough to derail it:
Assume that each previous advance multiplies the amount of result for a given effort. You only get accelerating returns when the growth in required effort is below a critical threshold. For certain previous advances, and certain successive problems this has been true.
It does not imply that it always holds, or that it will continue to hold in the future, or even that it holds for any particular problem. "OMG ponies!" doesn't refer to any amount of progress - it refers to a lack of understanding of what a given problem is, and how much effort is required. Perhaps Arthur C. Clarke phrased it better when he called it magic.
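A toy model of that threshold, with made-up parameters rather than anything from the post, might look like this:

    # Toy model of the point above; r and g are arbitrary parameters, not measured values.
    def advances(r, g, horizon=100.0, cap=1000):
        """Count advances achievable within `horizon` time units when each advance
        multiplies productivity by r but the next advance needs g times more effort."""
        t, effort, productivity, count = 0.0, 1.0, 1.0, 0
        while count < cap:
            dt = effort / productivity      # time to complete the next advance
            if t + dt > horizon:
                break
            t += dt
            count += 1
            productivity *= r               # each advance makes us more productive...
            effort *= g                     # ...but the next problem is harder
        return count

    print(advances(r=2.0, g=1.5))   # effort grows slower than productivity: hits the cap (runaway)
    print(advances(r=1.5, g=2.0))   # effort grows faster: only a handful of advances, then stall

The same machinery gives either accelerating returns or a stall depending entirely on whether the effort multiplier stays below the productivity multiplier, which is the critical-threshold point being made above.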
Re: (Score:3, Interesting)
20 years ago, I had a disagreement with my then-biophysics prof when I advocated the use of large networks of PC clusters for studying protein folding and interactions. His line of argument was effectively that I had a lack of understanding of what the problem is, and how much effort is required. Today companies like Zymeworks [zymeworks.com] specialize in performing that kind of work for pharmaceutical companies on a contract basis. They use quantum chemistry simulations running on small clusters of commodity hardware to d
Re: (Score:3, Insightful)
Fundamentally though, my vision was correct.
Yes, but 20 years ago a computer network was not a hypothetical then-impossible idea. Before the first computer network existed, people understood what technological barriers they would have to overcome to create one, and they already knew how to split a task into multiple parts on separate processing units. It was an engineering problem. It was the engineering problem that your professor was stuck on. Call me when the major obstacle to any of these Futurist p
Re: (Score:3, Interesting)
Agreed.
Re:All about dates now. (Score:4, Interesting)
Clearly you can have a "human mind's worth of computing power" run on only 100W or so. However, it's unclear whether you could run an emulation of a human mind on any reasonable amount of power. Or, for that matter, at all. As yet, there's not the least shred of evidence that either AI or human consciousness transfer is possible.
AI has been 50 years away for 50 years now. Fusion has been 20 years away for 50 years now. I can only conclude that fusion will be a mature, 30-year-old technology, ready to power AIs. :)
Personally, I think that software consciousness will turn out to be quite easy in hindsight, just a matter of learning the trick, but I have no actual evidence for this belief. Has any published futurist ever been right about anything?
Re: (Score:3, Insightful)
Agreed. It's going to be an 'everything-and-the-kitchen-sink' kind of problem. Put enough of the right systems together, and it will emerge rather on its own.
The problem isn't going to be creating an artificial intelligence. The problem is going to be in making it an autonomous agent that can be socially integrated into society. Think how long it takes to raise a kid... teaching the kid language, potty trai
Re: (Score:3, Insightful)
He abuses the hockey stick phenomenon.
He also overlooks many, many practical matters.
The man hasn't done jack in over 20 years.
"Futurist" is another word for "has been"
"it's that transistors will be old news in 20 years; "
no, they won't. Do you even know what a transistor is?
Also gone in 20 years: resistors and capacitors! weeee
Re: (Score:3, Interesting)
IAACE (I Am A Computer Engineer). I agree transistors will not be old news in 20 years, but I think you're looking too broadly. I believe the idea that they will be old news relates to their use in (high-performance) computing. It really took from about the 1980s until now, around 20-30 years, for computers to get *really* popular.
Photonic computing is really at the stage where transistors were in the '60s and '70s. We already have proven concepts and a good idea of where to go, so I don't see the statement "
Re: (Score:3, Interesting)
Since there are physical limits involved, it would intuitively seem vastly more plausible to suggest that the improvements would, in the long term, be logistic rather than exponential (and, of course, a logistic growth c
The Forever Non War (Score:2)
it looks like we might get the AI thing solved in at least 50 years.
It's *always* ~50 years away.
scientists & mathematicians (Score:4, Insightful)
I think we'd know by now if another technology were going to supplant the transistor within 10 years. Indeed, our progress may slow as we approach this limit, i.e. Moore's law will slow down and 2018 is too soon an estimate for the limit. Evolution frequently just stops within a domain, like how marsupials just can't evolve flippers. But that doesn't mean evolution stops overall.
We have massive room for progress in numerous disciplines :
1) language & compiler design -- You can buy 10x performance improvements by rewriting your OS & libraries in structured or object-oriented self-modifying code; Henry Massalin's Synthesis kernel proved this. [wikipedia.org] You can also rewrite all the other heavy apps using this hypothetical language.
2) algorithms -- You can always just train more scientists and mathematicians to write more & better parallel algorithms (see the Amdahl's law sketch after this list for why the algorithms matter as much as the hardware). You may also fold these advancements back into compiler design for high-level language compilers, like say Haskell.
3) subsidies redirection -- You can redirect all government subsidies towards helping young but solid technologies catch up, underwriting 1/2 the cost of optical fabs for example. How much money gets wasted on farmers now?
4) smarter people -- You can try making smarter people through genetic engineering, pharmacology, and even research into education.
5) augmented people -- You can definitely augment people to improve specific tasks. If you augment children, you might change even more, like their will to do science.
6) clustered people -- You can make neurologically linked "people clusters" who think together towards some common goal, enabling you to solve harder math & science problems.
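As a footnote to item 2, one standard sanity check on how far parallelism alone can carry you is Amdahl's law; this is not from the post, and the 95% parallel fraction below is an arbitrary assumption:

    # Amdahl's law: maximum speedup on n cores when a fraction p of the work is parallelizable.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (2, 8, 64, 1024):
        print(cores, "cores ->", round(amdahl_speedup(0.95, cores), 1), "x")
    # Even with 95% of the work parallelizable, 1024 cores give only about a 20x speedup,
    # which is why better algorithms matter at least as much as more hardware.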
Re:Urgently needs an update (Score:4, Interesting)
Moore's law is fundamentally flawed in that it predicts a never ending exponential (linear in the log domain) progression. It is bound to slow down and eventually stop, yet it fails entirely to take that into account.
What I think is that instead of being linear (well, actually exponential), it's more like a Gaussian function (a bell-shaped curve). It started far in the negatives, and now we're getting closer to the centre and its maximum, so we're feeling the slowdown, and eventually it'll crawl to a halt. Although maybe it won't, and then it'd be more like another function; the point being, it can't go on exponentially like this forever.
All of this being said, I think the flaw in Kurzweil's predictions is not that we'll have a tough time getting the necessary hardware; it's more theoretical: we have no fucking clue how we'd make any of that happen. Right now it's a problem of theory and algorithms, not of computing power. We know better how to make time travel happen than how to make strong AI pop up.
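A minimal sketch of the exponential-versus-S-curve distinction being argued here (note that the bell-shaped rate described above corresponds to an S-shaped cumulative curve); every constant is arbitrary, chosen only to show the shapes:

    import math

    # Compare pure exponential growth with logistic (S-shaped) growth.
    def exponential(t, rate=0.5):
        return math.exp(rate * t)

    def logistic(t, rate=0.5, ceiling=1000.0):
        return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

    for t in range(0, 30, 5):
        print(t, round(exponential(t), 1), round(logistic(t), 1))
    # Early on the two are nearly identical; the logistic curve then flattens toward its
    # ceiling, which is the slowdown being described here.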
Re:Urgently needs an update (Score:5, Insightful)
Actually, I'm pretty sure with time travel I could fairly trivially build about the strongest AI possible. When you can perform an infinite number of operations in an arbitrarily short amount of time, quite a stupid algorithm can produce some pretty smart results.
Re: (Score:2)
Yeah, sure. But someone wake me up when we come up with even a stupid strong AI. Or any idea how to travel back in time.
Strong AI is our era's flying car; 50 years from now we'll think to ourselves, "well, that shit never happened; on the other hand, the other stuff we have that we didn't see coming, we wouldn't want to go back to living without it."
Re: (Score:2)
My point was only that it's hard to be closer to time travel (to the past) than to strong AI, since I'm pretty sure time travel implies AI.
I agree that we don't know much about AI, however I'll be astounded if we're not successfully simulating brains in 30 years. We're pretty good at copying nature.
Re: (Score:2)
When you can perform an infinite number of operations in an arbitrarily short amount of time, quite a stupid algorithm can produce some pretty smart results.
A programmer would agree with you. A computer scientist would disagree.
Check out the bogosort, to get what I'm saying...
http://en.wikipedia.org/wiki/Bogosort [wikipedia.org]
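For reference, a minimal bogosort sketch (the standard textbook toy, not code from the linked article): it always gives the right answer eventually, but only a machine with effectively unbounded operations per second could afford to run it at scale.

    import random

    def bogosort(items):
        """Shuffle until sorted: correct with probability 1, but expected O(n * n!) work."""
        items = list(items)
        while any(a > b for a, b in zip(items, items[1:])):
            random.shuffle(items)
        return items

    print(bogosort([3, 1, 2]))   # fine for three elements; hopeless for thirty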
Re: (Score:2)
Moore's law is fundamentally flawed in that it predicts a never ending exponential (linear in the log domain) progression. It is bound to slow down and eventually stop, yet it fails entirely to take that into account.
That said, Intel still takes the idea deadly seriously when it comes to their marketing and future plans.
Think of it as a self-fulfilling prophecy:
http://www.intel.com/technology/mooreslaw/ [intel.com]
Re: (Score:2)
Moore's law is fundamentally flawed in that it predicts a never ending exponential
Exponential? What's that?
(linear in the log domain)
Oh, NOW it's clear.
Re: (Score:2)
Ah yes I guess a sigmoid is more like it!
And as I said in other posts, before wondering how many transistors we'll need for that Singularity thing, I think we should wonder what we'd do with those transistors to begin with. It's not like having an immensely powerful computer will make sentient beings pop out of thin air.
Re: (Score:2)
The GHz race is over, and multiple cores have not delivered yet.
I don't know what you mean by "multiple cores have not delivered."
Have you tried comparing how Vista runs on a 3 GHz single-core CPU versus a quad-core 2 GHz CPU?