Vinge and the Singularity
mindpixel writes: "Dr. Vinge is the Hugo Award-winning author of the 1992 novel "A Fire Upon the Deep" and the 1981 novella "True Names." This New York Times piece (registration required) does a good job of profiling him and his ideas about the coming "technological singularity," where machines suddenly exceed human intelligence and the future becomes completely unpredictable." Nice story. And if you haven't read True Names, get hold of a copy; plenty of used ones out there.
Deep Thought (Score:1)
DEEP THOUGHT : What is this great task for which I, Deep Thought, the second greatest computer in the Universe of Time and Space have been called into existence?
FOOK: Well, your task, O Computer is...
LUNKWILL: No, wait a minute, this isn't right. We distinctly designed this computer to be the greatest one ever, and we're not making do with second best.
LUNKWILL: Deep Thought, are you not, as we designed you to be, the greatest, most powerful computer in all time?
Anyway, most of you know the rest. If not, time to listen to the radio series again: H2G2 [bbc.co.uk]
Re:The Singularity and Computational Efficiency (Score:1)
The key hurdle, in my mind, is a direct computer interface to the brain. Once we have that, our current clumsy programming tools become obsolete - and we will be able to see, by direct comparison of AI code with our own minds, what needs to be done.
There is nothing like having the right tools.
--
brief review of A Fire Upon the Deep (Score:2)
Danny.
Golem XIV by Stanislaw Lem (Score:1)
Golem XIV [www.lem.pl]
However, when you think about it a little, the idea of a disembodied intelligence existing in a computer is silly. Think what happens to human consciousness when deprived of all sensory input.
Can an AI robot cross the street? (Score:2)
This is an excellent point.
I'd like to see the AI guys build a robot that can cross Broadway at Times Square, against the light, without getting squashed.
Re:"General" Human Intelligence not Necessary (Score:1)
Slightly OT (Score:1)
Re:The Singularity and Computational Efficiency (Score:2)
Well, I don't see the computational efficiency of humans (or future AIs) as being a problem.
It takes human-level intelligence to correlate interesting information together (design of proposed chemical plant, mapping of local water table). But it doesn't take human-level intelligence to actually run the numbers and discover that there's a problem (arsenic levels in drinking water over EPA guidelines).
Future AIs will be able to do the same things we do now. Except that the AI will be directly wired to unbelievably fast parallel supercomputers. (Dare I say Beowulf Cluster?)
These AIs will be able to simulate complex weather systems as easily as you can calculate a mortgage table in Gnumeric.
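For scale, here is the mortgage-table side of that comparison (the loan figures below are made up for illustration): the standard annuity formula plus a loop, which is exactly the kind of rote arithmetic that is already trivial for machines.

```python
# Amortization schedule via the standard annuity formula.
# All loan figures are illustrative, not from the original post.

def amortization_table(principal, annual_rate, years):
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of payments
    payment = principal * r / (1 - (1 + r) ** -n)
    rows, balance = [], principal
    for month in range(1, n + 1):
        interest = balance * r
        principal_part = payment - interest
        balance -= principal_part
        rows.append((month, payment, interest, principal_part, balance))
    return rows

table = amortization_table(100_000, 0.06, 30)
print(f"monthly payment: {table[0][1]:.2f}")  # ~599.55
print(f"final balance:   {table[-1][4]:.6f}")  # ~0
```

A desktop machine runs all 360 rows in microseconds; the point of the comment is that the hard part is never this arithmetic.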
Corporations not a recent near-singularity (Score:1)
The church
The monarchy and aristocracy
The state
At least in my country's history (Denmark), the immortality of these entities has had a profound effect on the political and personal lives of the citizens. This is particularly the case for the church. One of the main reasons that the Danish king abolished Catholicism in favour of Protestantism was that the church had amassed immense power and wealth through (mostly deathbed) donations of money and (more importantly) land. The land belonging to the crown and the aristocracy was slowly eroded away, as it was split up and inherited by the younger sons - who in some cases donated it to the church in order to improve their standing in the hereafter.
At some point this led to the royalty and aristocrats joining forces and neutering the church. This may happen to corporations too, if they get too powerful. The current anti-trust laws are an indication that the political leadership of ANY country will never concede power to another entity.
Re:Unpredictable future (Score:1)
Excuse me? I can imagine the workings of my own brain quite well, even though I can't (yet) understand them. There is no reason that we are incapable of understanding the workings of the human brain, and therefore I think it rather likely that we will understand the workings of the human brain eventually (assuming that humankind lasts long enough).
Re:Predictability and Unpredictability (Score:1)
Yes. That set of rules would be exactly the program that is running on the smart computer. Probably no simpler set of rules would completely define its behavior.
I believe that you are confusing 'deterministic' with 'predictable', and thinking that determinism makes prediction easy.
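To make that distinction concrete (a toy example of mine, not the poster's): the logistic map below is completely deterministic, yet two starting points differing by one part in a billion soon disagree wildly, so the only practical way to "predict" it is to run the program itself.

```python
# The logistic map x -> 4x(1-x) is deterministic but chaotic:
# two starts differing by one part in a billion soon disagree wildly.

def logistic_orbit(x, steps):
    orbit = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2, 100)
b = logistic_orbit(0.2 + 1e-9, 100)
divergence = max(abs(p - q) for p, q in zip(a, b))
print(f"max divergence after 100 steps: {divergence:.3f}")
```

Every step is a one-line rule, yet long-range forecasting without simulating is hopeless: determinism and predictability come apart.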
Why does a machine need to be conscious? (Score:2)
Something as simple as a self-replicating nano-bot (whatever that is) that consumes oxygen for energy could end up being the only non-plant form of life on the planet if it replicated out of control and drove oxygen levels below those needed to sustain animal life.
Currently machines do replicate and improve themselves, with the help of humans. Over time, the amount of help they need is continually decreasing. I do not think that machines will need to be as intelligent as humans to decrease the amount of human assistance required for replication to near 0.
-josh
Re:Smartness is Overrated (Score:2)
(I mean, that's if they had any reason to really care about your (or my) opinion. Which they probably don't, except perhaps as just another tiny part of the masses.)
And the point isn't that supersmart machines would necessarily want to run the world, it's that it's hard to guess what they would want. Or why they should care if what they want happens to be at odds with what we might want. Why would what we want be at all relevant to them?
Huh? (Score:2)
Where machines suddenly exceed human intelligence and the future becomes completely unpredictable.
It's funny to see someone predicting the future and at the end of their prediction ruling out the possibility of future predictions.
My prediction: That this prediction will end up like the majority of predictions -- wrong.
Re:Why emotion? (Score:2)
Emotions are much more than just chemical reactions. Chemical reactions are just how the human brain happens to implement emotions. Emotions have function and behavioral consequences (e.g. you lust for a female, so you sneak up behind her, restrain her, and hump her -- oops, I mean -- you talk to her and find out her astrological sign and phone #) and that behavior has emerged through (and been shaped by) the evolutionary process. Emotions do things useful for continued survival of the genes that program the chemical processes that implement the emotions, it's not just some weird byproduct.
An AI that is created through an evolution-like process (and there is a very reasonable chance that this is how the first AI will be made) will benefit from the behavior-altering characteristics of emotions, so they will probably emerge. Sure, they won't be implemented as chemical processes (well, I guess that depends on how future computers work ;-) but they'll be there.
---
Re:The Singularity and Computational Efficiency (Score:2)
Mathematics as we know it has only been around for a couple thousand years (and was pretty darned simple until just a few hundred years ago), but humans have been around for hundreds of thousands of years. This means that the ability to do arithmetic quickly simply isn't something humans need in order to survive, so evolutionary forces have not optimized our hardware for it.
If you want AIs that are fast at arithmetic, evolve them in a virtual environment where arithmetic ability is an important selection criterion.
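A minimal sketch of that selection scheme (everything here, from the fitness function to the mutation size and generation count, is invented for illustration): hill-climb the coefficients of f(x, y) = a*x + b*y under selection pressure toward exact addition, keeping the best candidate each generation.

```python
import random

# Toy "evolution" of arithmetic ability: fitness is how closely
# f(x, y) = a*x + b*y reproduces x + y on a fixed set of test pairs.
# All constants are illustrative, not a real GA configuration.

random.seed(0)
pairs = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]

def error(a, b):
    """Total deviation from true addition over the test pairs."""
    return sum(abs(a * x + b * y - (x + y)) for x, y in pairs)

best = (0.0, 0.0)                 # start with a hopeless non-adder
best_err = init_err = error(*best)

for _ in range(500):              # generations of mutate-and-select
    a = best[0] + random.gauss(0, 0.1)
    b = best[1] + random.gauss(0, 0.1)
    if error(a, b) < best_err:
        best, best_err = (a, b), error(a, b)

print(f"evolved a={best[0]:.3f}, b={best[1]:.3f}, error={best_err:.4f}")
```

With arithmetic accuracy as the only selection criterion, the population drifts toward (a, b) = (1, 1), i.e. toward actually adding.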
---
Re:"A Fire..." and Anachronistic Commentary (Score:3)
I don't think many people back then had any idea that it would suddenly become "normal" for people to execute untrusted data with full privileges. The concept is still mind-boggling even today, let alone in 1992.
OTOH, it's more of a social issue than a technological one. I guess it doesn't take much vision to realize: People are stupid.
---
Re:Flawed assumptions? (Score:2)
Consider, e.g., a large company that implemented an internal copy of the net. Now it has its network servers attached, but there's this problem of locating the information that is being sought. So it implements XML-based data descriptions, and an indexing search engine. And, as computers get more powerful, it uses a distributed-net approach to do data-mining, with a neural net seeking the data, and people telling it whether it found what they wanted, or to look again. As time goes by, the computer staff tunes this to optimize storage, up-time, etc. The staff trains it to present them the information they need. It learns to recognize which kinds of jobs need the same information at the same time, which need it after a time delay, etc. And then it starts predicting what information will be asked for so that it can improve its retrieval time.
Of the entire network, only the people are separately intelligent, but the network is a lot more intelligent than any of its components, including the people. The computers may never become separately intelligent. But the network sure would.
Still, I expect that eventually the prediction process would become sufficiently complete that it would also predict what the response to the data should be. So it could predict what data the next person should need. So it could predict what answer the next person should give. So...
So if anybody called in sick, or went on vacation, the network would just heal around them. And eventually...
Caution: Now approaching the (technological) singularity.
Re:The Singularity and Computational Efficiency (Score:2)
Have you ever heard of sound cards? Video cards? Specialized graphics chips?
There's nothing that keeps computers from adding specialized signal processing hardware onto their general purpose capability. This is proven, because we already do it. And so does the brain.
Perhaps we will need to invent a specialized chip to handle synaesthesia for our intelligent computers. Is that really to be considered an impossible hurdle? To me that seems silly. Just because we don't know how to do it, and how it should be connected yet, doesn't mean that we won't next year. Or the year after that.
Caution: Now approaching the (technological) singularity.
Re:Smartness is Overrated (Score:2)
A certain amount of intelligence is probably necessary, but the main ingredient seems to be a monomaniacal fixation. This, of course, leads to a certain number of acts that actually hinder the cause that one is ostensibly attempting to forward, but if the result is increased control, then to the lunatic in charge this will actually be evaluated as a success.
Don't trust what they tell you, watch what they do.
Actions speak louder than words. (Don't I wish. In fact, many pay attention to the words, and ignore the actions.)
Caution: Now approaching the (technological) singularity.
Re:Official Flame Thread (Score:2)
Caution: Now approaching the (technological) singularity.
Re:Vinge's Singularity is AI Doc Numero Uno! (Score:3)
If you will recall, last year was full of people denouncing Mozilla as a failure. It took a bit longer than they expected. But I no longer use anything else when I'm on Windows. (True, on Linux I more frequently use Konqueror, but I use Mozilla whenever I'm on the Gnome side of things.)
Possibly people's ideas of how a project should work have been overly influenced by movies and popular stories. (Though in Asimov's Foundation series, the bare framework of the Seldon plan required the entire lifetime devotion of the principal architect, as well as extensive commitment from dozens of others, so not all popular fiction is of the "quick fix" school.)
Relativity took many years to be developed to the point of presentation, then it took decades of testing, and it's still being worked on. Special Relativity is now reasonably soundly grounded, but General Relativity still needs work. But people don't call it a failure. Why not? The A-Bomb was as much of a brute-force effort as Deep Blue was. Both were successful demonstrations, and in their success they highlighted the weakness of the underlying theories.
But when it comes to AI, people keep moving the markers, so that whatever you do isn't really what they mean. I wait for the day when the hard version of the Turing test is passed. I firmly expect that at that point AI will be redefined so that this isn't sufficient to demonstrate intelligence. Already in matters of sheer logic computer programs can surpass any except the most talented mathematicians. (And perhaps them; I don't track this kind of thing.) It's true, most of these programs require a bit more resources than are today available on most home computers. But that's fair. Neural net programs can solve certain kinds of problems much more adeptly than people can. And they learn on their own what is an acceptable solution (via "training" and "reinforcement", etc.). And expert systems can capture areas of knowledge that are otherwise only accessible to experts in the field. (For some reason, experts are often a bit reluctant to cooperate.)
Now it's true, that these disparate functions need to be combined. It's true that the world is quite complex, and the only way to understand it may be to live in it.
The real problem with AI, is that nobody has a satisfactory definition of the 'I' part. Artificial is clear, but nobody can agree on a testable definition of Intelligence. The one real benefit is that it may get rid of those silly multiple choice IQ tests, and Standardized Achievement Tests. It would be easy for an AI to learn how to get the highest score possible (though it would require a bit of training, but then that's what they've turned grade-schools into -- training grounds for multiple choice tests).
Caution: Now approaching the (technological) singularity.
Re:Two things (Score:3)
In certain decades it is "fashionable" to be optimistic. In others, to be pessimistic. (The reasons have much to do with the age spread of the population and of the writer, with whether the author feels that things are getting better or worse NOW, etc.) During the late 50's up through the mid 70's optimism dominated. Then there was a reaction (Vietnam war, etc.) and the trend turned to pessimism (this started in Britain for some reason... I don't know why, I wasn't there).
But there are always contrary voices. When Asimov and the well-engineered machines that favored humanity were dominant, Saberhagen introduced the Berserkers (intelligent robot war machines designed to reproduce, evolve, and kill all life).
I can't remember which are current, but novels with robot servants (sometimes almost invisible) aren't that uncommon even now. They just aren't featured characters anymore. They've become common, expected.
OTOH, another of Vinge's postulates is coming to pass, whether through fashion or necessity, the proportion of fantasy to science fiction is increasing. Fairly rapidly. Fantasy used to be uncommon (although it was common before WWII). In the 50's and 60's it was usually disguised as science fiction. It started emerging again in the 70's. And now it is the predominant form. But a large part of this may be fashion. OTOH, Vinge predicted that as the future became more incomprehensible, the proportion of fantasy to science fiction would increase. So. Not proof, but evidence.
Caution: Now approaching the (technological) singularity.
Talk - an early form of instant messaging? (Score:2)
Quote from the article:
Is it just me, or did anyone else pause for a second after reading that sentence? As far as I remember, most of the operating systems that had access to the Internet had some form of a "talk" program. This includes all UNIX-like operating systems that I tried, such as Ultrix, SunOS, Solaris, HP-UX, A/UX, AIX and now Linux, but also some IBM 3090 mainframes (although these were batch-processing machines, there was also a way to talk to other users).
The term "instant messaging" was coined much later: only a few years ago, when Windows started to invade all desktops and AOL started promoting its AIM. Seeing "talk" defined as "an early form of instant messaging" just looks... strange to me.
Re:We've already been through a singularity (Score:2)
Corporations are an artifact of our legal systems and have steadily grown in power and efficacy since they were first conceived several hundred years ago. At this point they are self-sustaining and self-reproducing, even pursuing their own agendas that have only a tangential relationship to individual human agendas.
I think it is interesting to note, however, that corporations are not, by almost any measure, smarter than individual humans; quite the opposite (consider well-known sayings about the I.Q. of a mob or design by committee). The issue isn't whether our creations become more intelligent than us, but whether they become more potent than us.
Corporations have become more potent than individual humans because 1) they can amass far larger fortunes (in terms of manpower, money, land, or almost any other measure) than an individual, and 2) they are, essentially, immortal (and, to a large extent, unkillable: while the laws may, technically, be empowered to disband a corporation, in practice this is nearly impossible). Corporations are essentially god-like: omnipotent (if not omniscient) and immortal, invulnerable to almost any harm, complete with their own mysterious motives and goals.
So, if we accept that the singularity has already occurred, we might ask why we aren't more aware of its aftereffects. The answer, of course, is that the corporations don't want us to be aware, and are doing everything in their considerable power to obscure the effects of the singularity. Life goes on as normal, as far as lowly humans are concerned, because it would be terribly inconvenient for the corporations if it didn't (modulo pollution, environmental destruction and a moderate amount of human suffering and exploitation).
Re:Vinge embodies the worst of science fiction (Score:2)
The reason that no one is commenting on Vinge's characters or stories is that they are not relevant to the topic at hand! The issue at hand is whether or not Vinge is a blithering nut-job for going on about this singularity crap that seems to be so popular with a number of science fiction writers cum technology commentators. I am heartened to see that there is a fair amount of skepticism in the comments concerning the idea of the singularity and Vinge's general nuttiness (and, even, self-contradiction) on the subject. It's good to know that the CS and IT trenches are filled, for the most part, with sane, level-headed folk, unlike the ranks of supposed luminaries like Joy, Kurzweil, and Vinge.
There may well be folks in this forum who think that Vinge is a great writer: they're wrong, but more power to 'em anyway. I've read both A Fire Upon the Deep and A Deepness in the Sky and found them moderately enjoyable, but nothing to rave about. I wouldn't say that Vinge is in the ranks of the worst science fiction I've ever read, but he's not far removed from the median (I won't say if he's above or below).
<OFFTOPIC>
If you are looking for good literature in SF, you should have a look at Gene Wolfe (the New Sun and Long Sun series), Kim Stanley Robinson (Red/Green/Blue Mars and Icehenge), Octavia Butler, Richard Grant (Rumors of Spring, Views from the Oldest House and Through the Heart; more recently, Tex and Molly in the Afterlife, In the Land of Winter and Kaspian Lost), or, maybe, Stephen R. Donaldson. I used to be quite fond of C. J. Cherryh, but have found her recent stuff too formulaic. There is good SF out there, but, as with almost anything else, the ratio of good-to-crap follows Sturgeon's law.
</OFFTOPIC>
The Singularity and Computational Efficiency (Score:5)
However, in doing this extrapolation, one is making a few assumptions. Most notable is that one can teach a computer how to
What do I mean by computational efficiency? Roughly speaking, the relative performance of one algorithm to another. For instance, in talking about the singularity (as Vinge puts it), one often neglects to notice the fact that human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop! Logical puzzles often similarly vex humans (witness the analytic portions of the GRE!), where they also perform incredibly poorly. Significantly, human beings are very computationally inefficient at most tasks involving higher brain functions. We might process sound and visual input very well and very quickly, but most higher brain functions are very poor performers indeed.
One application of a similar train of logic is that human beings are the only animals known to be capable of performing arithmetic. Therefore, if one had a computer comparable to the human brain, one could do arithmetic. Heck, by this logic, we're only 50 years away from using computers to do integer addition!
The main point here is that, with regards to developing a "thinking" machine, WE MIGHT VERY WELL have the brute force computational resources available to us today. The hardware is not the limitation, so much as our ability to design the software with the complex adaptive ability of the human brain.
Just WHEN we will be able to develop that software, no one can really say, since it is really a fundamental flaw in our approaches, rather than in our devices. (It is similar to asking when physicists will be able to write down a self-consistent theory of everything. No one can say.) It could happen in a decade or two, or it could take significantly longer than 50 years. It all depends on how clever we are in attacking the problem.
Diaspora by Greg Egan (Score:1)
Let me plug the novel Diaspora by Greg Egan as an interesting look at what the singularity will mean to the future of humanity - the history of the rest of time reduced to handy pocket-novel size.
Re:Flawed assumptions? (Score:2)
Yes, technology will advance in the next X years, but to assume that a necessary part of that advancement is the creation of a machine that is more intelligent than a human is just plain ridiculous. Some would argue that a machine intelligence of that nature is absolutely impossible in the first place (not that I agree with them, but there are rational arguments that suggest this).
I'm basing my view on the state of AI and what we can expect in the future on the results of research I've seen and carried out at some of the top AI departments in the world, so I think I've got a fairly good grasp of the subject matter, and I am 100% happy to say that faster computers will not give us any form of machine intelligence.
Re:Flawed assumptions? (Score:3)
But very rarely in the ways you expect. Look at the predictions people were making for life in the year 2000 back in 1800, or 1900, or 1950, or even 1990. You'll see that a lot of it didn't happen. Some did, and some things that people hadn't even considered happened as well. But a lot of it just didn't take place.
Regardless of whether advancement takes place, the link that Vinge assumes between computer hardware performance and computer intelligence does not exist. If true machine intelligence comes about within the next thirty years, it will not be as a direct result of improved hardware performance. There aren't any systems out there that aren't intelligent now, but would be if we could overclock their processors to 150GHz.
Flawed assumptions? (Score:5)
Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon.
Dr Vinge reminds me somewhat of that most mocked of AI doomsayers, Kevin [kevinwarwick.com] Warwick [kevinwarwick.org.uk].
Replies (Score:1)
You said: "Why do people think SIs [super intelligences] will be unpredictable?"
Because they are so much smarter and more intelligent than us that we have no way of knowing how they will think and thus act. Just as a moron can't write a book about a rocket scientist, an AI researcher can't write a book about a SIAI (well, at least not a rational one). In some cases, AI will act in ways that seem reasonable to us, and other times it will require a lot of thought on our part to understand their better reasoning. Still, as of right now, we can't say for sure how an AI will act, or even, once they are around, know how they will act.
You said: "I don't think SIAI will be dangerous. They're way smarter than us and won't be evil."
Many of us in the Singularitarian community used to make this assumption, but eventually you realize that most of the reasons we consider that an AI would act like this are anthropomorphic. AIs don't have those same built-in features that we do. They wouldn't even have a sense of self when first created (or maybe ever). Check out http://www.singinst.org/ for papers on Friendly AI for potential solutions and further explanations of this problem.
You said: "We'll never develop a theory of general intelligence. We're not even going to have strong AI, let alone ones that can improve their own code."
You have a lot to learn. All I can say to these kinds of comments is that you need to educate yourself better on topics of cognitive science. You know, learn about physicalism (roughly, brain == mind) as opposed to dualism, learn about how information is represented in the mind, etc. This view is mostly the result of simply not having been presented the proper information.
You said: "I don't think the Singularity will happen. I mean, look at how much stuff has been predicted and never came true."
The Singularity is a matter of life and death. In case you're wondering, the Meaning of Life right now is to reach the Singularity. Until then, anything you do is pretty much worthless unless it gets us to the Singularity faster. If we don't reach the Singularity, you are going to die sooner or later. The Singularity means you can live as long as you like, do what you want (this is a complex issue; see my site for a paper on how to protect non-violation of volition post-Singularity), and live happily ever after. I can't stress this enough: reaching the Singularity is a matter of life and death. If we don't make it, we all die. Maybe not today, but soon. And this is not just the fate of individuals, but of humanity and all life. If the Singularity is not reached, all life will cease to exist eventually. When looking at the Singularity in this light, you almost have to wonder why you're not already helping to make it arrive sooner.
Re:Talk - an early form of instant messaging? (Score:1)
Now that we have defined that equivalence, are there any IM patents that need busting?
uh... (Score:2)
Man, that sure sounds strange to my ears. I wonder what stuff the press will be explaining in a few more years...
Registration-free link (Score:1)
Re:The Singularity and Computational Efficiency (Score:2)
#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak
Re:Knowledge Crash (Score:2)
Like you say, an interesting theory. However, it seems to hinge on the idea that educating someone carries a fixed cost per unit of knowledge (whatever that may be). Or at least that the cost of education per k.u. is not falling as fast as the rise in the number of k.u.'s required to operate in society.
This ignores the fact that it is not always necessary to have an instructor or prepared curriculum in order to learn something.
For example, when I first got a Windows box, I could have spent $150 on a course at the community college to learn how to double-click on an icon, but chose to save my money and teach myself.
In fact, when it comes to education in general, once you teach someone how to engage in critical thinking, and give them access to a worldwide knowledge database (which the Internet is turning into), the motivated student can gain unlimited knowledge at virtually no cost other than connectivity.
Myself as an example: I have learned far more in my past 6 years of Internet access at a cost of <$1.4K in dial-up fees than I did in my previous 6 years of university education at a cost of >$30K in tuition fees.
Trickster Coyote
I think, therefore I am. I think...
ray? (Score:1)
Re:ray? (Score:1)
you would have seen this. [kurzweilai.net]
Re:ray? (Score:1)
Re:Flawed assumptions? (Score:2)
Technology in genetics, networking, materials science and electrical engineering is progressing at a frightening rate. Soon, we'll be able to construct useful, microscopic machines; implanted computers; and who knows what else.
The world becomes stranger faster, every year.
--
Aaron Sherman (ajs@ajs.com)
Across Realtime and the singularity (Score:3)
The idea is that technological progress is asymptotic, and will eventually reach the point where one day of technological progress is equal to all that of human history, and then, well... there's the next day. He doesn't cover exactly what it is, because by definition we don't know yet. But it's catastrophic in the novel. A good read (actually the first part, which basically just introduces the "Bobble," is a good read alone).
He sort of refined the idea into something maintainable in Fire Upon the Deep by introducing the concept of the Slow Zone which acts as a kind of buffer for technology. If things in the Beyond get too hairy, the Slow Zone always remains unaffected, and civilization can crawl back up out of the "backwaters" (e.g. our area of the galaxy).
He's a good author, and I love his take on things like cryptography and culture (A Deepness in the Sky), religion and USENET (A Fire Upon the Deep), and Virtual Reality and cracker/hacker culture (True Names).
--
Aaron Sherman (ajs@ajs.com)
Re:The Singularity and Computational Efficiency (Score:1)
It may well be that we'll never be able to design such software.
However, we could evolve it. Genetic algorithms and other "evolutionary" programming approaches [faqs.org] seem to me the most promising path.
Tom Swiss | the infamous tms | http://www.infamous.net/
Re:Flawed assumptions? (Score:1)
Re:The Singularity and Computational Efficiency (Score:1)
Human intelligence? (Score:4)
scold-mode: off
We've already been through a singularity (Score:2)
The human race has already been through a singularity. Its aftermath is known as "civilization", and the enabling technology was agriculture, which first made it possible for humans to gather in large permanent settlements.
There are a few living humans who have personally lived through this singularity... stone-age peoples in the Amazon and Papua New Guinea abruptly confronted by it. For the rest of the human race it was creeping and gradual, but it still fits the definition of a singularity: the "after" is unknowable and incomprehensible to those who live in the "before".
Re:Singularity, SETI and the Fermi Paradox (Score:2)
There are other possibilities as well for SETI's lack of success. Our solar system and our planet may be fairly unusual in some ways:
Probably bacteria-like life is extremely common, but advanced intelligent life might in fact be somewhat rarer than was once thought.
virtual reality progress = ghost planet (Score:3)
Another strong possibility (for SETI's lack of success) is that intelligent races prefer virtual reality to real reality, in much the same way that the human race prefers to sit inside watching TV instead of going outside for a walk in the woods and grasslands where we evolved.
When we have better-than-Final-Fantasy rendering in real time, most of the human race will probably choose to spend most of the day living and interacting there, in virtual-reality cyberspace... in much the same way that many of us today spend most of our days in an office environment, living and creating economic value in ways incomprehensible to our hunter and farmer ancestors.
When this happens, the planet may seem empty in many ways... in much the same way that suburban streets in America seem empty to a third-world visitor used to bustling and noisy street life.
This phase (the human race moves into and settles cyberspace, becomes less visible in the physical world) is not the same as the Singularity. For one thing, it is not at all dependent on future advances in artificial intelligence... we just need ordinary number-crunching computers a few orders of magnitude faster than today.
If the AI naysayers are right, and machines never get smart enough, then the Singularity will never happen... but the "ghost planet" scenario will inevitably happen in our lifetime... either as a result of progress, or as the unhappy result of plague or nuclear war.
Re:get a copy...if you can (Score:1)
True Names - the novel by Vernor Vinge [gatech.edu] (Bluejay Books)
progoth.resnet.gatech.edu/truename/truename.htm
Rapidly accelerating tech != singularity (Score:2)
The function y = 2^x has no asymptote - it becomes ever higher, ever steeper, but for each value of x there is a finite value of y.
Let x = time and y = technological level (if such a concept is reducible to a single number), and this may be a model of our progress under ideal conditions, free from setbacks like plagues and nuclear wars.
I have yet to hear a good reason why this model is not a better one than the singularity idea, other than wishful thinking and that the singularity makes a better story. But let's not confuse SF with the real world.
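The distinction being argued can be made concrete: plain exponential growth is finite at every finite time, while hyperbolic growth (dy/dt = y^2) really does blow up at a finite time t = 1/y(0), which is the shape a literal "singularity" would need. A rough numerical sketch (simple Euler integration, illustrative parameters):

```python
# Contrast exponential growth (no finite-time blow-up) with hyperbolic
# growth dy/dt = y^2, which reaches infinity at finite time t = 1/y0.

def exponential(x):
    """y = 2**x: huge for large x, but finite for every finite x."""
    return 2.0 ** x

def hyperbolic(y0, t, steps=1_000_000):
    """Euler-integrate dy/dt = y**2 from y(0) = y0 forward to time t."""
    y, dt = y0, t / steps
    for _ in range(steps):
        y += y * y * dt
    return y

print(exponential(100))       # astronomically large, yet finite
print(hyperbolic(1.0, 0.99))  # exploding as t approaches the blow-up at t = 1
```

So the disagreement is really about which curve technology follows: 2^x never "arrives" at a singularity, no matter how steep it looks.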
We don't even have working human-like AI yet (Score:2)
The converse is: can the calculator understand algebra or calculus? Nope. Do we currently have machines that aren't as smart as humans but can understand/simulate human mentation? Nope. (I certainly don't think Cyc the database qualifies, or that Darwinian algorithms have intelligence.) Can 'very briefly' equal never? Yep.
I don't know when or why a certain part of the geek contingent lost the ability to tell the difference between fiction and reality, but this transhumanist/Kurzweil/extropian stuff won't work until we have a working model of consciousness that can be verified experimentally.
Vinge and his ilk are this generation's Tim Leary, lots of optimism and futurism but feet planted strictly in the sky.
Re:Golem XIV by Stanislaw Lem (Score:1)
Re:Deepness in the Sky - Focus (Score:2)
We had better hope that AI (and hence the Singularity) is indeed possible, because if it isn't, Focus is almost certainly possible, and with it tyranny on a scale we can barely imagine.
Singularity is "Rapture for Nerds" (Score:1)
Ken MacLeod noted this in a Salon article [salon.com].
Re:Respect Copyright (Score:1)
True Names Re-issue keeps getting delayed (Score:2)
Re:The Singularity and Computational Efficiency (Score:1)
Thanks for the post, and while I (naturally) agree with your conclusion that AI is a software rather than a hardware problem, your comment
only describes the calculations we carry out consciously. This doesn't really apply to the autistic lightning calculators - or even to us when we're doing calculus to, say, catch a ball or drive a car. Trying to think about what you're doing under those circumstances tends to make the task quite a bit harder. (Is consciousness over-rated? :)
Is there anyone out there who knows more maths than me who's willing to tell me what my brain can do that a neural net of sufficient size can't?
All I have to say to Vinge is... (Score:1)
I like his books, but his predictions about the future are about as likely as those from the '50s stating that we would all have our own flying vehicles by now.
Re:Huh? (Score:1)
Great teacher (Score:1)
Re:Deepness in the Sky - Focus (Score:2)
Scares the shit out of me.
The non-Singularity (Score:2)
Vinge has made it fairly clear that he doesn't think that Deepness is where society is going--he seems fairly confident that we'll reach the Singularity.
~=Keelor
Re:Flawed assumptions? (Score:2)
Well, that doesn't say much. Because either A) you're not very bright or B) you live a very safe and healthy life, so you expect it to be looong.
But seriously, don't you think there's a huge step from building an artificial neurological brain to making it actually work? We may imitate some internal processes in the neurons, but the brain has a huge and complex architecture suited to human activity and the human body. I believe it can be done, roughly, but if it's going to be in MY lifetime there'll have to be HUGE advances soon.
I don't believe these AIs will be comparable to humans that soon though. Much of human thinking is not logical at all. If we were to only live perfectly logical lives, I think I'd vote myself out of "humanity". Because much of our joy and fun is not logical at all.
Then again, it all really depends on what you mean by intelligence too. That's just another can of worms, making such statements completely arbitrary.
- Steeltoe
Re:Vinge's Singularity is AI Doc Numero Uno! (Score:1)
And if you're really serious about this, remember that a lot of clever people have tried this before you, and utterly failed.
Good luck anyway!
Re:Knowledge Crash (Score:2)
Allen also wrote a book called The Modular Man, about a man who downloads his personality into a robotic vacuum cleaner; it's excellent and deals with many of the same concepts Dr. Vinge is talking about.
Knowledge Crash (Score:4)
The idea is, basically, that every year it costs more to educate someone. In order to be able to expand our collective knowledge, or even to utilize the machines and operate the systems of the present, it will cost a certain amount of money in the education process.
In addition, we can quantify the amount of output a single human creates in his or her lifetime. For instance - if she works for thirty years at a power plant or something, we can determine the value that she has contributed to society.
As systems become more complex, more education is required. The education costs more money. At some point, if this continues unchecked, we will be faced with a situation where the cost of education exceeds the value brought as a result of that education.
That's called the Knowledge Crash. (Or it was in the books.)
While I'm not convinced that this is true, it's certainly an interesting theory. It seems to me that, on average this can't happen, as one of the points of creating more and more complicated (generic) systems is to facilitate simpler and simpler controls, and thus dumber and dumber operators. While the creators of those systems may have 'crashed knowledge,' it seems that the whole point of that would be to hurl some value at the workers.
But then you have to consider that, inherent in the value of a designer, the ease of use is part of the entire value analysis versus education, and then that'll crash...
Re:Flawed assumptions? (Score:2)
While skepticism is a fine sentiment, I can't help noticing that you are making more assumptions than Vinge is. Sure, we will be able to simulate or imitate the brain roughly -- but I think it is a stretch to demand that consciousness come only from detailed imitation. It may be that roughly is enough. The brain is also a finite machine - we will soon be able to build electronics that exceed its capacity.
Re:Flawed assumptions? (Score:2)
For example, storage. It seems that we can build hard disks in excess of human brain capacity. But static storage is incomparable to the dynamic kind of storage the brain has. So - wrong measure.
Another example: FLOPS. The human brain is a massively parallel computer; microchips are not. Now, it is claimed that you can simulate a parallel computer with a single-chip one. Admittedly, the difference between possibility and practicability is huge. But if the brain is a massively parallel computer, then a sufficiently fast chip will get to the level where it has comparable compute power - just run a brain simulation on this computer. If the brain is not, then again - wrong measure.
We can just go on, finding the right measure. I think, all things considered, that measure will be exceeded, and by that time we will have a conscious computer.
At some level, you either have faith, or you don't.
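The serial-simulation point above can be illustrated in a few lines: a single CPU reproduces one synchronous step of a "parallel" network by double-buffering the state, so every unit sees the same pre-step values. The threshold-unit model here is just an illustrative stand-in for whatever the brain's units actually compute:

```python
# Minimal sketch: a serial machine emulating one synchronous step of a
# massively parallel network. Writing into a separate buffer means each
# unit "sees" the same pre-step state -- the same result a truly
# parallel update would give, just computed more slowly.

def parallel_step(state, weights, threshold=0.5):
    """One synchronous update of all units, computed serially."""
    new_state = [0] * len(state)
    for i, row in enumerate(weights):
        total = sum(w * s for w, s in zip(row, state))
        new_state[i] = 1 if total > threshold else 0
    return new_state
```

This is why "parallel vs. serial" is an efficiency question, not a possibility question: a fast enough serial chip can always time-slice the parallel computation.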
Singularity, SETI and the Fermi Paradox (Score:1)
Fermi showed that, given reasonable assumptions, we ought to expect "ET" to be ubiquitous. Since extraterrestrials are not all about us, this suggests either technologic civilizations are exquisitely rare or that they rapidly lose behaviors like migration and radio communication. By rapidly I mean within two to three hundred years.
The Singularity is the kind of event that would do that. If technologic civilizations always progress to a Singularity they may well lose interest in minor details like reproduction and out migration. Among other things they would operate on very different time scales from pre-Singular civilizations.
See also http://www.faughnan.com/setifail.html [faughnan.com].
john
--
John Faughnan
Re:Singularity, SETI and the Fermi Paradox (Score:1)
A few comments:
1. Fermi's calculations assume a civilization with light speed technology that expands from planet to planet every few hundred to thousand years. No highly advanced technology is required for such a civilization to colonize the galaxy within tens of thousands of years -- just exponential growth. See for a Rumanian example.
2. The point of my argument is that it's not likely that post-Singular civilizations are driven by the same things that drive biological organisms (growth, expansion, etc). For one thing their time scales are different from biological organisms; it's not hard to imagine that they exist in a time-space that's thousands of times faster than ours.
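For what it's worth, the exponential-growth arithmetic in point 1 checks out on the back of an envelope; the star count and hop time below are rough assumptions, and this ignores travel-distance limits entirely, as the pure exponential model does:

```python
import math

# Back-of-envelope: with pure exponential growth, the number of
# doublings needed to cover every star is log2(star count).
stars = 4e11       # rough star count of the Milky Way (assumption)
hop_years = 500    # years per colonization "doubling" (assumption)

doublings = math.log2(stars)   # about 38.5 doublings
years = doublings * hop_years
print(round(years))            # roughly 19,000 years
```

So even very slow hops give galactic colonization on a timescale that is an eyeblink compared to the age of the galaxy, which is what makes the Fermi question bite.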
Here's my argument in summary:
--
John Faughnan
Singularity, SETI and the Fermi Paradox - Kurzweil (Score:1)
See Kurzweil's article: and search on SETI.
I may have thought of it earlier (a year ago or so), but I didn't think I was the only one who thought of it.
see www.faughnan.com/setifail.html
--
John Faughnan
Misconception about Vinge's Singularity (Score:1)
Vinge does not require the advancement of computers to a point at which they are regarded as intelligent. This is only one of several possibilities mentioned in his paper [caltech.edu].
Other possibilities include:
Vinge is one of the Spearheads of traditional SF (Score:1)
Your recollection, lacking exactly the substantiation you mention, is worthless. You can find plenty of detailed comments using Google.
Deepness in the Sky (Score:2)
'Deepness' [amazon.com] is the Prequel to 'Fire upon the Deep' and even better. Read it first.
While there is more discussion about non-human intelligence in 'Fire', the actual impact of Vinge's idea is greater in 'Deepness', where his excellent world-building skill is used to create the best traditional SF I know.
Both 'Deepness' and 'Fire' also feature some really neat alien races.
Where AI went to die (Score:2)
Below the sign, two rows of empty cubicles hold obsolete computers and out of date manuals. In a nearby lounge, old issues of Wired from the early '90s lie on tables. Dusty boxes are stacked against one wall. Few people are about. Nearby office doors hold the names of researchers who had their fifteen minutes of fame around 1985.
This is where the dream died. The Knowledge Systems Lab was the headquarters of the expert systems faction of artificial intelligence. These were the people who claimed that with enough rewrite rules, strong AI could be achieved.
It didn't work. And that empty room, frozen in the past, is what remains.
Re:I don't buy it. (Score:2)
Snicker. I think Dr. Vinge is right...and I think it is scary. If you are familiar with electronics, think about how a diode avalanches.
If he is correct, AI could well "avalanche" past what evolution gave us in a very, very short period of time.
Humans learn at a given pace. We are nearly helpless at birth, yet can be a "MacGyver" in our twenties and thirties, able to make a helicopter gunship from nothing but baling wire and duct tape (on TV, anyway). That's a 20-30 year span, or nearly a quarter of our lives, to reach our maximum potential.
Who is to say an AI system could not, at some point, triple its cognitive abilities in a 100 ns time slice?
And to think I didn't take his class cuz some lamer told me he was a "hard ass" -- rats. That's what I get for listening to lamers. SDSU has so many wonderful Professors...Vinge, Baase, Carroll. Great University, great professors, great memories.
Treatment, not tyranny. End the drug war and free our American POWs.
Smartness is Overrated (Score:2)
Re:Smartness is Overrated (Score:2)
I respect their copyright... (Score:2)
Hmm...I wonder should I have an ethical dilemma reading commentary of people who have read the article in violation of copyright? I think not, since I have entered into no agreement with the NYT.
"General" Human Intelligence not Necessary (Score:3)
Because of stupid, but fast, computers, we are headed toward being able to hack our DNA (and/or proteins). This will certainly produce incremental gains in lifespan and health...perhaps it will produce dramatic ones.
Because of stupid, but fast, computers, we can simulate physical processes to enable us to engineer better widgets. Perhaps this will make routine space travel economical.
Because of stupid, but fast, computers, we are heading toward having the bulk of human knowledge instantly available to anyone with a net connection. How will this leverage technical progress?
Two things (Score:3)
Another thing has to do with this "let's fear AI" genre of SciFi in general. Why does no one challenge the assumption that when artificial creatures develop intelligence and a personality, that personality will inevitably be indifferent, power-hungry and cold? Isn't it just as easy to imagine that artificially intelligent creatures/machines will strike us as being neurotically cautious, or maybe friendly to the point of being creepy? Maybe they'll become obsessed with comedy or math or music. Or video games.
Realistically, I think the first machines which we take to be intelligent will be very good at means-to-ends reasoning, but will not be able to deliberate about ends (i.e. why one sort of outcome should be preferable to another). I would argue that even we humans can't really deliberate about ends. At some point we hit some hard-wired instincts. Why, for example, is it better that people are happy rather than suffering? The answer is just a knee-jerk reaction by us, not some sort of reasoned conclusion.
When we create AI we will have the luxury of hard-wiring these instincts into intelligent machines (without some parameters specifying basic goals, nothing could be intelligent, not even us). Humans and animals are basically built with a set of instincts designed to make them survive and fuck and make sure the offspring survive. There is no reason to think AI creatures would necessarily have these instructions as basic. I'm sure we could think of much more interesting ones. The consequence is that AI creatures might be more intelligent than we are, but in no way sinister.
Re:Knowledge Crash (Score:2)
As things get more complex, they get refined into modular pieces.
It takes a very small amount more training to drive a modern Ford Taurus as compared to a 1930's Packard.
This holds true even when fixing the car. Mechanics don't rebuild alternators anymore, they replace them.
Computer technicians don't use a soldering iron anymore. They replace the defective video card!
This pattern holds with software, as well. Remember when C, today's "low level" language, was considered very inefficient and bloat-ridden? How about Perl? (Now fast enough to decode a DVD movie on the fly with moderate hardware!)
The real danger here is not that we'll have a knowledge crash, but that we'll keep dumbing everybody down to the point where, to run anything, you push a red button. If the red button doesn't work, we have a REAL crash...
-Ben
Re:Flawed assumptions? (Score:2)
The problem from the perspective of a working neuroscientist is that we don't yet understand how the brain is intelligent. On the other hand, things are starting to fall into place. For example, we have a hint of why neural synchronization occurs in the brain, because we're beginning to realize that time synchrony is something many neurons are very good at detecting. We're also beginning to understand memory formation in the cortex. It seems to involve the creation of clusters of synapses, and those clusters get activated by time-synched signals. There's some evidence for analog computation, and there's some evidence for almost quantum computation. So we're beginning to understand how to build a brain. That seems to be the hump, so I'm fairly confident I'll live to see computers at least as intelligent as I am. And I'm 54.
Re:Across Realtime and the signularity (Score:2)
Vinge's Singularity is AI Doc Numero Uno! (Score:4)
--
Re: (Score:2)
Re:The Singularity and Computational Efficiency (Score:2)
I disagree. Humans and other animals may be poor (relatively) at doing paper-and-pencil mathematics, but they are quite good and fast with innate math. Huh? Well, tossing a basketball through a hoop requires unconscious calculation to make the muscles add the correct energy to the throw; it must be pushed in the correct direction to make up for player movement relative to the hoop, etc. A lion, similarly, must calculate an efficient pursuit trajectory when its prey bolts. A lion doesn't run to the prey; it predicts and compensates for the prey's movement to form an intercept course.
This happens all the time and unconsciously with ALL creatures with a brain. It does involve math and it is automatic. Not too bad.
Then there is the difference between a machine calculating a formula that HAD to derive from a human. No machine creates new formulas or mathematics. They ONLY calculate that which humans, in their creativity, slow as it may be, are able to devise. Quantum mathematics, relativity, calculus... humans are slow to calculate the answers but very good at coming up with the formulations and rules.
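The lion's intercept course mentioned above is, formally, just a quadratic: find the time t at which a pursuer moving at a fixed speed can meet a target moving in a straight line. A hypothetical sketch (names and the constant-velocity assumption are mine):

```python
import math

def intercept_time(px, py, vx, vy, speed):
    """Time for a pursuer at the origin moving at `speed` to meet a
    target at (px, py) with constant velocity (vx, vy); None if the
    target can never be caught. Solves |P + V*t| = speed*t for t."""
    a = vx * vx + vy * vy - speed * speed
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:                      # equal speeds: linear case
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    times = [t for t in ((-b - root) / (2 * a), (-b + root) / (2 * a)) if t > 0]
    return min(times) if times else None
```

The point of the comment stands either way: brains solve this continuously and unconsciously, while the closed-form version had to be derived by a human before any machine could run it.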
Re:Registration required? Why not! (Score:2)
Re:Flawed assumptions? (Score:2)
Which is completely irrelevant. The human nervous system evolved over the course of billions of years and works in a very specific and detailed fashion, most of which is still a mystery to us at the computational level. Without reproducing all that evolved design, we would not have anything like human intelligence.
We already have machines that exceed human capacity in every physical way, but we have not yet been able to create a robotic construction worker. Why should just throwing teraflops at the intelligence problem go any further towards solving it?
Tim
Vinge embodies the worst of science fiction (Score:2)
It's sad that there's not a better venue for scientific speculation per se. If there were, people with no ear for fiction, such as Vernor Vinge, Robert Forward and Isaac Asimov, would not feel themselves forced into quasi-fictional exercises that demean both themselves and the storyteller's art.
Tim
Re:Succinctly (Score:2)
Hmm. The trouble is that very, very few humans can actually make intuitive leaps. I can think of the guy (or gal) who figured out fire, Da Vinci, Edison, Einstein, a handful of others. Most of us just make tiny tweaks to other people's ideas.
Bizarrely, given sufficient processing power, it might be more efficient to produce a speculating machine (that can design a random device from the atomic level up, with no preconceptions of usage, then try and find a use for it), rather than try and identify humans who can actually come up with ideas that are genuinely new.
Succinctly (Score:4)
The most succinct Vinge quote [mbay.net] that I can think of is:
cool singularity links (Score:5)
http://www.kurzweilai.net/news/
And then there's the non-profit corporation, the Singularity Institute for Artificial Intelligence, which is determined to bring the Singularity about as soon as possible:
http://www.singinst.org/
There are a lot of good Vinge links on that page too, btw
Singinst seems to be the brainchild of this guy:
http://www.wired.com/news/technology/0,1282,43080
who has a lot of interesting docs here:
http://sysopmind.com/beyond.html
Don't miss the FAQ on the meaning of life; it's great reading.
Unpredictable future (Score:2)
I thought the future was already unpredictable.
About the intelligent machines, I think the error is falling into the "biology" trap. Our whole perception system is conditioned by the ideas of "survival", "advancement", "power", "conscience", among others. Those come from our setup as living entities, trapped in a limited-resources environment, having to compete for those resources. The fact that a machine is intelligent won't make it conscious, or interested in survival or power. There is no obvious relation. If you were to menace a machine more intelligent than you with cutting the power supply, it would perhaps be politely interested, but no more. That is, if the development of the machine is done through "traditional" procedures. I would be wary of genetic-algorithm-style development. That could create a thinking and competitive machine :o)
There are things that we cannot even imagine. One of them is the workings of our own brains. Another is how a thinking machine would act. Of course, some are more interesting to write a book about than others. But it isn't SF for me, more like fantasy.
--
Hmm yes (Score:2)
Now that I have seen my error, can I correct it by withdrawing my post? Can anyone tell me how?
(This is not intended as a troll)
Registration required? (Score:5)
But Dr. Vinge does not publish technical papers on those topics. He writes science fiction.
And in turning computer fact into published fiction, Dr. Vinge (pronounced VIN-jee) has developed a readership so convinced of his prescience that businesses seek his help in envisioning and navigating the decades to come.
"Vernor can live, as few can, in the future," said Lawrence Wilkinson, co-founder of Global Business Network, which specializes in corporate planning. "He can imagine extensions and elaborations on reality that aren't provable, of course, but that are consistent with what we know."
Dr. Vinge's 1992 novel, "A Fire Upon the Deep" (Tor Books), which won the prestigious Hugo Award for science fiction, is a grand "space opera" set 40,000 years in a future filled with unfathomable distances, the destruction of entire planetary systems and doglike aliens. A reviewer in The Washington Post called it "a wide-screen science fiction epic of the type few writers attempt any more, probably because nobody until Vinge has ever done it well."
But computers, not aliens, were at the center of the work that put Dr. Vinge on the science fiction map -- "True Names," a 30,000-word novella that offered a vision of a networked world. It was published in 1981, long before most people had heard of the Internet and a year before William Gibson's story "Burning Chrome" coined the term that has come to describe such a world: cyberspace.
For years, even as its renown has grown, "True Names" has been out of print and hard to find. Now it is being reissued by Tor Books in "True Names and the Opening of the Cyberspace Frontier," a collection of stories and essays by computer scientists that is due out in December.
"True Names" is the tale of Mr. Slippery, a computer vandal who is caught by the government and pressed into service to stop a threat greater than himself. The story portrays a world rife with pseudonymous characters and other elements of online life that now seem almost ho-hum. In retrospect, it was prophetic.
"The import of `True Names,' " wrote Marvin Minsky, a pioneer in artificial intelligence, in an afterword to an early edition of the work, "is that it is about how we cope with things we don't understand."
And computers are at the center of Dr. Vinge's vision of the challenges that the coming decades will bring. A linchpin of his thinking is what he calls the "technological singularity," a point at which the intelligence of machines takes a huge leap, and they come to possess capabilities that exceed those of humans. As a result, ultra-intelligent machines become capable of upgrading themselves, humans cease to be the primary players, and the future becomes unknowable.
Dr. Vinge sees the singularity as probable if not inevitable, most likely arriving between 2020 and 2040.
Indeed, any conversation with Dr. Vinge, 56, inevitably turns to the singularity. It is a preoccupation he recognizes with self-effacing humor as "my usual shtick."
Although he has written extensively about the singularity as a scientific concept, he is humble about laying intellectual claim to it. In fact, with titles like "Approximation by Faber Polynomials for a Class of Jordan Domains" and "Teaching FORTH on a VAX," Dr. Vinge's academic papers bear little resemblance to the topics he chooses for his fiction.
"The ideas about the singularity and the future of computation are things that basically occurred to me on the basis of my experience of what I know about computers," he said.
"And although that is at a professional level, it's not because of some great research insight I had or even a not-so-great research insight I had. It's because I've been watching these things and I like to think about where things could go."
Dr. Vinge readily concedes that his worldview has been shaped by science fiction, which he has been reading and writing since childhood. His dream, he said, was to be a scientist, and "the science fiction was just part of the dreaming."
Trained as a mathematician, Dr. Vinge said he did not begin "playing with real computers" until the early 1970's, after he had started teaching at San Diego State. His teaching gradually shifted to computer science, focusing on computer networks and distributed systems. He received tenure in 1977.
"Teaching networks and operating systems was a constant source of story inspiration," Dr. Vinge said. The idea for "True Names" came from an exchange he had one day in the late 1970's while using an early form of instant messaging called Talk.
"Suddenly I was accosted by another user via the Talk program," he recalled. "We chatted briefly, each trying to figure out the other's true name. Finally I gave up and told the other person I had to go -- that I was actually a personality simulator, and if I kept talking, my artificial nature would become obvious. Afterwards I realized that I had just lived a science fiction story."
Computers and artificial intelligence are, of course, at the center of much science fiction, including the current Steven Spielberg film, "A.I." In the Spielberg vision, a robotic boy achieves a different sort of singularity: parity with humans not just in intelligence but in emotion, too. "To me, the big leap of faith is to make that little boy," Dr. Vinge said. "We don't have evidence of progress toward that. If it ever happens, there will be a runaway effect, and getting to something a whole lot better than human will happen really fast."
How fast? "Maybe 36 hours," Dr. Vinge replied.
Dr. Vinge's own work has yet to make it to the screen, although "True Names" has been under option for five years. "It's been a long story of my trying to convince studio executives to really consider the work seriously because it seemed so far out," said David Baxter, a Hollywood writer and producer who is writing the screenplay with Mark Pesce, co-creator of Virtual Reality Modeling Language, or VRML. "But as time has passed, the world has started to match what was in the book."
In the meantime Dr. Vinge has been providing scenarios in the corporate world as well. He is one of several science fiction writers who have worked with Global Business Network in anticipating future situations and plotting strategies for several major companies.
Mr. Wilkinson, the co-founder of Global Business Network, said that Dr. Vinge's work with the group provided "an unbelievably fertile perspective from which to look back at and reunderstand the present."
"It's that ability to conceptualize whole new ways of framing issues, whole new contexts that could emerge," Mr. Wilkinson said. "In the process he has contributed to the turnarounds of at least two well-known technology companies."
Dr. Vinge, shy and reserved, is hardly a self-promoter. He scrupulously assigns credit to others whenever he can. And although he insists that much of his work is highly derivative, his fans do not necessarily share that view.
"The thing that distinguishes Vernor is he's a scientist and all of his stuff makes sense," Mr. Baxter said. "It's all grounded in the here and now."
Dr. Vinge is now a professor emeritus at San Diego State, having retired to devote his time to his writing and consulting. Over lunch at a restaurant not far from the university, he described a story he was working on.
"Well, there's a recovering Alzheimer's patient," Dr. Vinge began, before being interrupted and asked how one could be a recovering Alzheimer's patient.
His eyes brightened. "You can't," he said, and a sly smile crossed his face. "Yet."
Re:The Singularity and Computational Efficiency (Score:2)
I'm not sure I agree that AI is a software problem, because I don't see where regular human intelligence is a software problem. There is no software that comes with a newborn. A newborn is a complex system that comes out of the womb ready to learn. It's already thinking. You could argue that it has an OS - instincts, genetic instructions - but really, what if there were a hardware copy of a baby, only made with silicon (or whatever)? If it was constructed properly, it should pop out of the vat ready to learn.
I guess I'm arguing that intelligence is a function of pathway complexity and self-referentiality (real word?).
Maybe if we build it right - complex enough circuitry/pathways and enough self-referential ability, so it can modify itself and its external environment, e.g. alter its own version of programmable logic controllers and move a Coke bottle with a robotic arm (yes, I did say "programmable", but I didn't say "fully pre-programmed") - maybe, like a newborn, if we build it right, and simulate a little evolution along the way, the intelligence will come.
I think the challenge is not coding intelligence, which sounds impossible to me, but building something that facilitates intelligence developing on its own - again, like a newborn. Not software, but hardware that has the "complex adaptive ability of the human brain".
Granted the first one would be no more intelligent than a rabid gerbil, but that's a good start.
Well, (Score:2)
Re:Flawed assumptions? (Score:2)
Consider that for perhaps millions of years we had fire and spears as our main tools. Then agriculture, then metallurgy, then language, communication, etc. Each epoch is marked by revolutions in technological sophistication, and each epoch shift occurs more and more rapidly, at an accelerating pace. Consider the advances of the last 100 years to see my point.
In fact, the last great technological revolution has been the global information network that we are currently using to discuss the topic. Born less than 30 years ago, it has already saturated the planet, becoming nearly ubiquitous to the segment of the population at the front of the wave.