Vinge and the Singularity

mindpixel writes: "Dr. Vinge is the Hugo award winning author of the 1992 novel "A Fire Upon the Deep" and the 1981 novella "True Names." This New York Times piece (registration required) does a good job of profiling him and his ideas about the coming "technological singularity," where machines suddenly exceed human intelligence and the future becomes completely unpredictable." Nice story. And if you haven't read True Names, get hold of a copy; plenty of used ones out there.
  • I suspect that once this "technological singularity" is reached in computing it will go something like this:

    DEEP THOUGHT : What is this great task for which I, Deep Thought, the second greatest computer in the Universe of Time and Space have been called into existence?

    FOOK: Well, your task, O Computer is...

    LUNKWILL: No, wait a minute, this isn't right. We distinctly designed this computer to be the greatest one ever, and we're not making do with second best.

    LUNKWILL: Deep Thought, are you not, as we designed you to be, the greatest, most powerful computer in all time?

    Anyway, most of you know the rest. If not, it's time to listen to the radio series again: H2G2 [bbc.co.uk]
  • Or copy it from existing code.

    The key hurdle, in my mind, is a direct computer interface to the brain. Once we have that, our current clumsy programming tools become obsolete - and we will be able to see by direct comparison of AI code with our own minds what needs to be done.

    There is nothing like having the right tools.
    --

  • I wrote a very brief review of A Fire Upon the Deep [dannyreviews.com]. (My older reviews were a lot shorter than my more recent ones [dannyreviews.com].)

    Danny.

  • In this book Lem presents the lectures that a super-intelligent computer gives to people about the nature of man and machine. Take a look here:

    Golem XIV [www.lem.pl]

    However, when you think about it a little, the idea of a disembodied intelligence existing in a computer is silly. Think what happens to human consciousness when deprived of all sensory input.

    ...richie

  • This happens all the time and unconsciously with ALL creatures with a brain. It does involve math and it is automatic. Not too bad.

    This is an excellent point.

    I'd like to see the AI guys build a robot that can cross Broadway at Times Square, against the light, without getting squashed.

    ...richie

  • Actually... as to the stupidity of a computer, I would have to say PEBKAC...
  • Years ago (early 90's) I started a GURPS space campaign and spent weeks setting up the scenario, etc. A week before the game was going to begin, a friend of mine (one of the people that was going to play in the game) handed me "A Fire Upon the Deep" to read. As I started reading, it sounded vaguely familiar. Turns out my scenario for the GURPS game was remarkably close to the plot of the book. QUICK! To the re-write cave, Robin!
  • Well, I don't see the computational efficiency of humans (or future AIs) as being a problem.

    It takes human-level intelligence to correlate interesting information together (design of proposed chemical plant, mapping of local water table). But it doesn't take human-level intelligence to actually run the numbers and discover that there's a problem (arsenic levels in drinking water over EPA guidelines).

    Future AIs will be able to do the same things we do now. Except that the AI will be directly wired to unbelievably fast parallel supercomputers. (Dare I say Beowulf Cluster?)

    These AIs will be able to simulate complex weather systems as easily as you can calculate a mortgage table in Gnumeric.
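    Since "run the numbers" is carrying the weight in that comparison, here is how little intelligence the mortgage half actually takes (a minimal Python sketch; the loan figures are made up for illustration):

        # Standard fixed-rate amortization: monthly payment on a
        # hypothetical $200,000 loan over 30 years at 7%.
        principal, annual_rate, years = 200_000, 0.07, 30
        r, n = annual_rate / 12, years * 12
        payment = principal * r / (1 - (1 + r) ** -n)
        print(round(payment, 2))   # ~1330.60 per month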

  • You seem to be forgetting three rather old social and legal structures that in most respects have the same attributes that you ascribe to corporations:

    The church

    The monarchy and aristocracy

    The state


    At least in my country's history (Denmark) the immortality of these entities has had a profound effect on the political and personal lives of the citizens. This is particularly the case for the church. One of the main reasons that the Danish king abolished Catholicism in favour of Protestantism was that the church had amassed immense power and wealth through (mostly deathbed) donations of money and (more importantly) land. The land belonging to the crown and the aristocracy was slowly eroded away, as it was split up and inherited by the younger sons - who in some cases donated it to the church in order to improve their standing in the hereafter :)


    At some point this led to the royalty and aristocrats joining forces, and neutering the church. This may happen to corporations too, if they get too powerful. The current anti-trust laws are an indication that the political leadership of ANY country will never concede power to another entity.

  • There are things that we cannot even imagine. One of them is the workings of our own brains.

    Excuse me? I can imagine the workings of my own brain quite well, even though I can't (yet) understand them. There is no reason that we are incapable of understanding the workings of the human brain, and therefore I think it rather likely that we will understand the workings of the human brain eventually (assuming that humankind lasts long enough).

  • If computers do get smarter than humans, wouldn't we be able to follow a set of rules to predict an outcome?

    Yes. That set of rules would be exactly the program that is running on the smart computer. Probably no simpler set of rules would completely define its behavior.

    I believe that you are confusing 'deterministic' with 'predictable' and thinking that determinism makes prediction easy.
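    A toy illustration of that distinction (mine, not the parent's): the logistic map is completely deterministic, yet two starting states that agree to ten decimal places diverge within a few dozen steps, so knowing the rules does not make prediction cheap.

        # Logistic map: x' = r * x * (1 - x). Deterministic, but tiny
        # differences in the initial state grow exponentially.
        r = 4.0
        a, b = 0.3, 0.3 + 1e-10    # agree to 10 decimal places
        for step in range(60):
            a = r * a * (1 - a)
            b = r * b * (1 - b)
        print(abs(a - b))          # typically order 0.1: fully diverged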

  • Everyone seems to be all wrapped up in 'consciousness' and 'emotion'. Machines must certainly have these things to take our place, right? Nope. All they need is the capability to reproduce themselves and to do it more frequently or more efficiently than the biological systems that came before.

    Something as simple as a self-replicating nano-bot (whatever that is) that consumes oxygen for energy could end up being the only non-plant form of life on the planet if it replicated out of control and drove oxygen levels below that needed to sustain animal life.

    Currently machines do replicate and improve themselves, with the help of humans. Over time the amount of help they need is continually decreasing. I do not think that machines will need to be as intelligent as humans to decrease the amount of human assistance required for replication to near 0.

    -josh
  • The people who run the world -- to whatever extent anyone runs it -- would no doubt be pleased that you think that the people that you've been led to believe run the world aren't smart.

    (I mean, that's if they had any reason to really care about your (or my) opinion. Which they probably don't, except perhaps as just another tiny part of the masses.)

    And the point isn't that supersmart machines would necessarily want to run the world, it's that it's hard to guess what they would want. Or why they should care if what they want happens to be at odds with what we might want. Why would what we want be at all relevant to them?

  • by dennism ( 13667 )

    Where machines suddenly exceed human intelligence and the future becomes completely unpredictable.

    It's funny to see someone predicting the future and at the end of their prediction ruling out the possibility of future predictions.

    My prediction: That this prediction will end up like the majority of predictions -- wrong.

  • Emotion, for the most part, is a chemical reaction to events, that's all.

    Emotions are much more than just chemical reactions. Chemical reactions are just how the human brain happens to implement emotions. Emotions have function and behavioral consequences (e.g. you lust for a female, so you sneak up behind her, restrain her, and hump her -- oops, I mean -- you talk to her and find out her astrological sign and phone #) and that behavior has emerged through (and been shaped by) the evolutionary process. Emotions do things useful for continued survival of the genes that program the chemical processes that implement the emotions; they are not just some weird byproduct.

    An AI that is created through an evolution-like process (and there is a very reasonable chance that this is how the first AI will be made) will benefit from the behavior-altering characteristics of emotions, so they will probably emerge. Sure, they won't be implemented as chemical processes (well, I guess that depends on how future computers work ;-) but they'll be there.


    ---
  • human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop!

    Mathematics as we know it has only been around for a couple of thousand years (and was pretty darned simple until just a few hundred years ago), but humans have been around for hundreds of thousands of years. This means that the ability to do arithmetic quickly simply isn't something that humans need in order to survive, so evolutionary forces have not optimized our hardware for it.

    If you want AIs that are fast at arithmetic, evolve them in a virtual environment where arithmetic ability is an important selection criterion.
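    Here is roughly what that would look like in miniature (my own toy sketch, not Vinge's; the "organisms" are just candidate numbers and fitness is arithmetic accuracy):

        import random

        # Toy selection loop: individuals are single numbers, fitness is
        # distance from the true value of 113 * 97. Selection plus random
        # mutation does the "arithmetic" with no explicit calculation rule.
        target = 113 * 97                # = 10961
        pop = [random.uniform(0, 20000) for _ in range(50)]
        for generation in range(200):
            pop.sort(key=lambda x: abs(x - target))    # most accurate first
            pop = [x + random.gauss(0, 50) for x in pop[:10] for _ in range(5)]
        pop.sort(key=lambda x: abs(x - target))
        print(round(pop[0]))             # ends up at or very near 10961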


    ---
  • by Sloppy ( 14984 ) on Thursday August 02, 2001 @08:01AM (#2177248) Homepage Journal

    The concept of an email virus would not be some grand prophetic vision in 1992.

    I don't think many people back then had any idea that it would suddenly become "normal" for people to execute untrusted data with full privileges. The concept is still mind-boggling even today, let alone in 1992.

    OTOH, it's more of a social issue than a technological one. I guess it doesn't take much vision to realize: People are stupid.


    ---
  • If you would read the original paper, you would note that Vinge postulates several differing ways in which superhuman intelligence could be achieved. Some of them are similar to the net with added computational clusters. For these to be successful, it seems to me there are no technical problems that currently need to be solved. The problems are more organizational and economic.

    Consider, e.g., a large company that implemented an internal copy of the net. Now it has its network servers attached, but there's this problem of locating the information that is being sought. So it implements XML-based data descriptions, and an indexing search engine. And, as computers get more powerful, it uses a distributed-net approach to do data-mining, with a neural net seeking the data, and people telling it whether it found what they wanted, or to look again. As time goes by, the computer staff tunes this to optimize storage, up-time, etc. The staff trains it to present them the information they need. It learns to recognize which kinds of jobs need the same information at the same time, which need it after a time delay, etc. And then it starts predicting what information will be asked for so that it can improve its retrieval time. ...
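    The core of that retrieve-and-learn loop fits in a few lines (a deliberately tiny sketch; the document names and the +2 boost are arbitrary assumptions, and a real system would use proper ranking and learning):

        from collections import Counter

        # Score documents by query-term overlap, then boost the terms of
        # whatever the user says was actually the right answer.
        docs = {"d1": "water table arsenic report",
                "d2": "quarterly sales figures",
                "d3": "chemical plant site survey"}
        weights = Counter()                       # boosts learned from feedback

        def search(query):
            terms = query.split()
            return max(docs, key=lambda d: sum(1 + weights[t]
                                               for t in terms
                                               if t in docs[d].split()))

        print(search("chemical plant water"))     # "d3" wins on raw overlap
        for term in docs["d1"].split():           # user flags d1 as the real hit
            weights[term] += 2
        print(search("chemical plant water"))     # learned boost now favors "d1"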

    Of the entire network, only the people are separately intelligent, but the network is a lot more intelligent than any of its components, including the people. The computers may never become separately intelligent. But the network sure would.

    Still, I expect that eventually the prediction process would become sufficiently complete that it would also predict what the response to the data should be. So it could predict what data the next person should need. So it could predict what answer the next person should give. So ...
    So if anybody called in sick, or went on vacation, the network would just heal around them. And eventually...

    Caution: Now approaching the (technological) singularity.
  • Umnh...
    Have you ever heard of sound cards? Video cards? Specialized graphics chips?
    There's nothing that keeps computers from adding specialized signal processing hardware onto their general purpose capability. This is proven, because we already do it. And so does the brain.

    Perhaps we will need to invent a specialized chip to handle synaesthesia for our intelligent computers. Is that really to be considered an impossible hurdle? To me that seems silly. Just because we don't know how to do it, or how it should be connected yet, doesn't mean that we won't next year. Or the year after that.

    Caution: Now approaching the (technological) singularity.
  • Control tends to drift into the hands of those that lust after it.

    A certain amount of intelligence is probably necessary, but the main ingredient seems to be a monomaniacal fixation. This, of course, leads to a certain number of acts that actually hinder the cause one is ostensibly attempting to forward, but if the result is increased control, then to the lunatic in charge this will actually be evaluated as a success.

    Don't trust what they tell you, watch what they do.

    Actions speak louder than words. (Don't I wish. In fact, many pay attention to the words, and ignore the actions.)


    Caution: Now approaching the (technological) singularity.
  • You should clearly read the non-fiction paper. (Sorry, I don't remember the URL, but it's available via http://www.singinst.org ). The fiction was intentionally fiction, and clearly contains a large fantasy element. In his non-fiction paper he explains what he means, and discusses various different paths that could lead to the singularity, and what could keep us out of it. You might disagree, but at least you would know what he was talking about, and why he believed it.

    Caution: Now approaching the (technological) singularity.
  • I don't know why you are claiming failure. Actually there has been quite a large amount of success. The problems have certainly been many, and there appear to be many remaining, but so?
    If you will recall, last year was full of people denouncing Mozilla as a failure. It took a bit longer than they expected. But I no longer use anything else when I'm on Windows. (True, on Linux I more frequently use Konqueror, but I use Mozilla whenever I'm on the Gnome side of things.)

    Possibly people's ideas of how a project should work have been overly influenced by movies and popular stories. (Though in Asimov's Foundation series, the bare framework of the Seldon plan required the entire lifetime devotion of the principal architect, as well as extensive commitment from dozens of others, so not all popular fiction is of the "quick fix" school.)

    Relativity took many years to be developed to the point of presentation, then it took decades of testing, and it's still being worked on. Special Relativity is now reasonably soundly grounded, but General Relativity still needs work. But people don't call it a failure. Why not? The A-Bomb was as much of a brute-force effort as Deep Blue was. Both were successful demonstrations, and in their success they highlighted the weakness of the underlying theories.

    But when it comes to AI, people keep moving the markers, so that whatever you do isn't really what they mean. I wait for the day when the hard version of the Turing test is passed. I firmly expect that at that point AI will be redefined so that this isn't sufficient to demonstrate intelligence. Already in matters of sheer logic computer programs can surpass any except the most talented mathematicians. (And perhaps them; I don't track this kind of thing.) It's true, most of these programs require a bit more resources than are today available on most home computers. But that's fair. Neural net programs can solve certain kinds of problems much more adeptly than people can. And they learn on their own what is an acceptable solution (via "training" and "reinforcement", etc.). And expert systems capture areas of knowledge that are otherwise only accessible to experts in the field. (For some reason, experts are often a bit reluctant to cooperate.)

    Now it's true that these disparate functions need to be combined. It's true that the world is quite complex, and the only way to understand it may be to live in it. ... So what about web-based intelligent agents? (I don't know of any advanced ones... and that might require more computation than would be practical.) A web-connected computer could live on the web in a way that would be only indirectly related to how a person would experience it. Would their ability to learn the environment, and to figure out (i.e., calculate) how to navigate to their desired destination, be considered intelligence? I doubt it. People wouldn't see it. And they wouldn't want to believe it (except for some small number of boosters who would cheer even simple responses as proof of intelligence).

    The real problem with AI, is that nobody has a satisfactory definition of the 'I' part. Artificial is clear, but nobody can agree on a testable definition of Intelligence. The one real benefit is that it may get rid of those silly multiple choice IQ tests, and Standardized Achievement Tests. It would be easy for an AI to learn how to get the highest score possible (though it would require a bit of training, but then that's what they've turned grade-schools into -- training grounds for multiple choice tests).

    Caution: Now approaching the (technological) singularity.
  • by HiThere ( 15173 ) <charleshixsn@earthlinkLION.net minus cat> on Thursday August 02, 2001 @08:49AM (#2177254)
    This says more about when/what you choose to read than about what has been written.

    In certain decades it is "fashionable" to be optimistic. In others, to be pessimistic. (The reasons have much to do with the age spread of the population, with the age of the writer, with whether the author feels that things are getting better or worse NOW, etc.) During the late 50's up through the mid 70's optimism dominated. Then there was a reaction (Vietnam war, etc.) and the trend turned to pessimism (this started in Britain for some reason... I don't know why, I wasn't there).

    But there are always contrary voices. When Asimov, and the well-engineered machines that favored humanity, were dominant, Saberhagen introduced the Berserkers (intelligent robot war machines designed to reproduce, evolve, and kill all life).

    I can't remember which are current, but novels with robot servants (sometimes almost invisible) aren't that uncommon even now. They just aren't featured characters anymore. They've become common, expected.

    OTOH, another of Vinge's postulates is coming to pass, whether through fashion or necessity, the proportion of fantasy to science fiction is increasing. Fairly rapidly. Fantasy used to be uncommon (although it was common before WWII). In the 50's and 60's it was usually disguised as science fiction. It started emerging again in the 70's. And now it is the predominant form. But a large part of this may be fashion. OTOH, Vinge predicted that as the future became more incomprehensible, the proportion of fantasy to science fiction would increase. So. Not proof, but evidence.

    Caution: Now approaching the (technological) singularity.
  • Quote from the article:

    The idea for "True Names" came from an exchange he had one day in the late 1970's while using an early form of instant messaging called Talk.

    Is it just me, or did anyone else pause for a second after reading that sentence? As far as I remember, most of the operating systems that had access to the Internet had some form of a "talk" program. This includes all UNIX-like operating systems that I tried, such as Ultrix, SunOS, Solaris, HP-UX, A/UX, AIX and now Linux, but also some IBM 3090 mainframes (although these were batch-processing machines, there was also a way to talk to other users).

    The term "instant messaging" was coined much later: only a few years ago, when Windows started to invade all desktops and AOL started promoting its AIM. Seeing "talk" defined as "an early form of instant messaging" just looks... strange to me.

  • Actually, there is another (semi-)recent event that more closely resembles Vinge's singularity, where our own artifacts have overtaken us by one means or another: the rise of corporations.

    Corporations are an artifact of our legal systems and have steadily grown in power and efficacy since they were first conceived several hundred years ago. At this point they are self-sustaining and self-reproducing, even pursuing their own agendas that have only a tangential relationship to individual human agendas.

    I think it is interesting to note, however, that corporations are not, by almost any measure, smarter than individual humans; quite the opposite (consider well-known sayings about the I.Q. of a mob or design by committee). The issue isn't whether our creations become more intelligent than us, but whether they become more potent than us.

    Corporations have become more potent than individual humans because 1) they can amass far larger fortunes (in terms of manpower, money, land, or almost any other measure) than an individual, and 2) they are, essentially, immortal (and, to a large extent, unkillable: while the laws may technically be empowered to disband a corporation, in practice this is nearly impossible). Corporations are essentially god-like: omnipotent (if not omniscient) and immortal, invulnerable to almost any harm, complete with their own mysterious motives and goals.

    So, if we accept that the singularity has already occurred, we might ask why we aren't more aware of its aftereffects. The answer, of course, is that the corporations don't want us to be aware, and are doing everything in their considerable power to obscure the effects of the singularity. Life goes on as normal, as far as lowly humans are concerned, because it would be terribly inconvenient for the corporations if it didn't (modulo pollution, environmental destruction and a moderate amount of human suffering and exploitation).

  • I read through this entire thread at mod level one, and sure enough, there is not a single comment about his stories or his characters, because those barely exist.

    The reason that no one is commenting on Vinge's characters or stories is that they are not relevant to the topic at hand! The issue at hand is whether or not Vinge is a blithering nut-job for going on about this singularity crap that seems to be so popular with a number of science fiction writers cum technology commentators. I am heartened to see that there is a fair amount of skepticism in the comments concerning the idea of the singularity and Vinge's general nuttiness (and, even, self-contradiction) on the subject. It's good to know that the CS and IT trenches are filled, for the most part, with sane, level-headed folk, unlike the ranks of supposed luminaries like Joy, Kurzweil, and Vinge.

    There may well be folks in this forum who think that Vinge is a great writer: they're wrong, but more power to 'em anyway. I've read both A Fire Upon the Deep and A Deepness in the Sky and found them moderately enjoyable, but nothing to rave about. I wouldn't say that Vinge is among the worst science fiction I've ever read, but he's not far removed from the median (I won't say if he's above or below).

    <OFFTOPIC>
    If you are looking for good literature in SF, you should have a look at Gene Wolfe (the New Sun and Long Sun series), Kim Stanley Robinson (Red/Green/Blue Mars and Icehenge), Octavia Butler, Richard Grant (Rumors of Spring, Views from the Oldest House and Through the Heart; more recently, Tex and Molly in the Afterlife, In the Land of Winter and Kaspian Lost), or, maybe, Stephen R. Donaldson. I used to be quite fond of C. J. Cherryh, but have found her recent stuff too formulaic. There is good SF out there, but, as with almost anything else, the ratio of good-to-crap follows Sturgeon's law.
    </OFFTOPIC>

  • by RobertFisher ( 21116 ) on Thursday August 02, 2001 @03:49AM (#2177258) Journal
    Vinge is not the only one to notice that the rate of growth in computer devices, if extrapolated for a few decades, will eventually exceed the capacity of the human brain, both in terms of storage capability and in terms of processing speed. Indeed, this very notion forms the basis of many of Joy's and Kurzweil's recent discussions.

    However, in doing this extrapolation, one is making a few assumptions. Most notable is that one could teach a computer how to "think" using some (probably very complex) set of algorithms with computational efficiency comparable to the human brain's, if one indeed had a computer with similar processing and storage ability to the human brain. That logic is quite flawed, due to the assumption of comparable computational efficiency.

    What do I mean by computational efficiency? Roughly speaking, the relative performance of one algorithm to another. For instance, in talking about the singularity (as Vinge puts it), one often neglects to notice the fact that human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop! Logical puzzles often similarly vex humans (witness the analytic portions of the GRE!), where they also perform incredibly poorly. Significantly, human beings are very computationally inefficient at most tasks involving higher brain functions. We might process sound and visual input very well and very quickly, but most higher brain functions are very poor performers indeed.

    One application of a similar train of logic is that human beings are the only animals known to be capable of performing arithmetic. Therefore, if one had a computer comparable to the human brain, one could do arithmetic. Heck, by this logic, we're only 50 years away from using computers to do integer addition!

    The main point here is that, with regards to developing a "thinking" machine, WE MIGHT VERY WELL have the brute force computational resources available to us today. The hardware is not the limitation, so much as our ability to design the software with the complex adaptive ability of the human brain.

    Just WHEN we will be able to develop that software, no one can really say, since it is really a fundamental flaw in our approaches, rather than in our devices. (It is similar to asking when physicists will be able to write down a self-consistent theory of everything. No one can say.) It could happen in a decade or two, or it could take significantly longer than 50 years. It all depends on how clever we are in attacking the problem.

  • Let me plug the novel Diaspora by Greg Egan as an interesting look at what the singularity will mean to the future of humanity - the history of the rest of time reduced to handy pocket-novel size.
  • Very interesting, but it still doesn't address the question of whether artificial intelligence that approaches human intelligence, let alone surpasses it, is possible. A lot of the ideas a century ago about what the future would contain in 100 years were wrong. In fact the same is true for much shorter periods of time.

    Yes, technology will advance in the next X years, but to assume that a necessary part of that advancement is the creation of a machine that is more intelligent than a human is just plain ridiculous. Some would argue that a machine intelligence of that nature is absolutely impossible in the first place (not that I agree with them, but there are rational arguments that suggest this).

    I'm basing my view on the state of AI and what we can expect in the future on the results of research I've seen and carried out at some of the top AI departments in the world, so I think I've got a fairly good grasp of the subject matter, and I am 100% happy to say that faster computers will not give us any form of machine intelligence.
  • by iapetus ( 24050 ) on Thursday August 02, 2001 @06:06AM (#2177261) Homepage
    The world becomes stranger faster, every year.

    But very rarely in the ways you expect. Look at the predictions people were making for life in the year 2000 back in 1800, or 1900, or 1950, or even 1990. You'll see that a lot of it didn't happen. Some did, and some things that people hadn't even considered happened as well. But a lot of it just didn't take place.

    Regardless of whether advancement takes place, the link that Vinge assumes between computer hardware performance and computer intelligence does not exist. If true machine intelligence comes about within the next thirty years it will not be as a direct result of improved hardware performance. There aren't any systems out there that aren't intelligent, but could be if we could overclock their processors to 150GHz.

  • by iapetus ( 24050 ) on Thursday August 02, 2001 @03:10AM (#2177262) Homepage
    Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.

    Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon.

    Dr Vinge reminds me somewhat of that most mocked of AI doomsayers, Kevin [kevinwarwick.com] Warwick [kevinwarwick.org.uk].

  • Okay, I read a bunch of posts, but rather than reply to them one by one, I'm going to hit them all at once.

    You said: "Why do people think SIs [super intelligences] will be unpredictable?"

    Because they are so much smarter than us that we have no way of knowing how they will think and thus act. Just as a moron can't write a book about a rocket scientist, an AI researcher can't write a book about a SIAI (well, at least not a rational one). In some cases, AI will act in ways that seem reasonable to us, and other times it will require a lot of thought on our part to understand their better reasoning. Still, as of right now, we can't say for sure how an AI will act, nor, even once they are around, will we know how they will act.

    You said: "I don't think SIAI will be dangerous. They're way smarter than us and won't be evil."

    Many of us in the Singularitarian community used to make this assumption, but eventually you realize that most of the reasons we assume an AI would act like this are anthropomorphic. AIs don't have those same built-in features that we do. They wouldn't even have a sense of self when first created (or maybe ever). Check out http://www.singinst.org/ for papers on Friendly AI for potential solutions and further explanations of this problem.

    You said: "We'll never develop a theory of general intelligence. We're not even going to have strong AI, let alone ones that can improve their own code."

    You have a lot to learn. All I can say to these kinds of comments is that you need to educate yourself better on topics of cognitive science. You know, learn about physicalism (roughly, brain == mind) as opposed to dualism, learn about how information is represented in the mind, etc. This view is mostly the result of simply not having been presented the proper information.

    You said: "I don't think the Singularity will happen. I mean, look at how much stuff has been predicted and never came true."

    The Singularity is a matter of life and death. In case you're wondering, the Meaning of Life right now is to reach the Singularity. Until then, anything you do is pretty much worthless unless it gets us to the Singularity faster. If we don't reach the Singularity, you are going to die sooner or later. The Singularity means you can live as long as you like, do what you want (this is a complex issue; see my site for a paper on how to protect non-violation of volition post-Singularity), and live happily ever after. I can't stress this enough: reaching the Singularity is a matter of life and death. If we don't make it, we all die. Maybe not today, but soon. And this is not just the fate of individuals, but of humanity and all life. If the Singularity is not reached, all life will cease to exist eventually. When looking at the Singularity in this light, you almost have to wonder why you're not already helping to make it arrive sooner.
  • Ah, but tis true. Welcome to a new way of looking at the world.

    Now that we have that defined that equivalence, are there any IM patents that need busting?
  • by Polo ( 30659 )

    • ...while using an early form of instant messaging called Talk.


    Man, that sure sounds strange to my ears. I wonder what stuff the press will be explaining in a few more years...

  • Vinge does allude to this in the Singularity [caltech.edu] paper:
    But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.


    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • As systems become more complex, more education is required. The education costs more money. At some point, if this continues unchecked, we will be faced with a situation where the cost of education exceeds the value brought as a result of that education.

    Like you say, an interesting theory. However, it seems to hinge on the idea that educating someone carries a fixed cost per unit of knowledge (whatever that may be). Or at least that the cost of education per k.u. is not falling as fast as the rise in the number of k.u.'s required to operate in society.

    This ignores the fact that it is not always necessary to have an instructor or prepared curriculum in order to learn something.

    For example, when I first got a Windows box, I could have spent $150 on a course at the community college to learn how to double-click on an icon, but chose to save my money and teach myself.

    In fact when it comes to education in general, once you teach someone how to engage in critical thinking, and give them access to a worldwide knowledge database (which the Internet is turning into), the motivated student can gain unlimited knowledge at virtually no cost other than connectivity.

    Myself as an example: I have learned far more in my past 6 years of Internet access at a cost of <$1.4K in dial-up fees than I did in my previous 6 years of university education at a cost of >$30K in tuition fees.

    Trickster Coyote
    I think, therefore I am. I think...
  • did ray kurzweil [kurzweilai.net] just bogart this dude's idea for his new book? [kurzweilai.net]
  • hey jackass, why don't you look down the page some more?
    you would have seen this. [kurzweilai.net]
  • you're a prick!
  • The idea is that *all* technology is asymptotic. Yes, computer *speed* is a simple (non-asymptotic, so far) progression. AI seems to have gone nowhere (except where we redefine the term), but the IMPACT on our culture and our world has been on a curve, the function of which is only starting to become evident. Think about what a man from 1800 would say about our world (it would probably involve lots of screaming). Now think about how someone from 1900 would feel (not a WHOLE lot different, but the acceptance of, if not comfort with, electricity is there at least). Now, think about someone from 1950 (you actually HAVE little radios that you can talk to people through? you can travel to Japan HOW fast? old people get replacement WHATS?)

    Technology in genetics, networking, materials science and electrical engineering is progressing at a frightening rate. Soon, we'll be able to construct useful, microscopic machines; implanted computers; and who knows what else.

    The world becomes stranger faster, every year.

    --
    Aaron Sherman (ajs@ajs.com)
  • by ajs ( 35943 ) <ajs.ajs@com> on Thursday August 02, 2001 @04:06AM (#2177273) Homepage Journal
  • So, this idea is introduced in book 2 (or 3, depending on how you count) of "Across Realtime", a novel of his (it was originally 2 short novels and a novella, I think).

    The idea is that technology progression is asymptotic, and will eventually reach the point where one day of technological progress is equal to all that of human history, and then, well... there's the next day. He doesn't cover exactly what it is, because by definition, we don't know yet. But it's catastrophic in the novel. A good read (actually the first part, which basically just introduces the "Bauble", is a good read alone).

    He sort of refined the idea into something maintainable in Fire Upon the Deep by introducing the concept of the Slow Zone which acts as a kind of buffer for technology. If things in the Beyond get too hairy, the Slow Zone always remains unaffected, and civilization can crawl back up out of the "backwaters" (e.g. our area of the galaxy).

    He's a good author, and I love his take on things like cryptography, culture (A Deepness in the Sky), religion, USENET (Fire Upon the Deep), Virtual Reality and cr/hacker culture (True Names).

    --
    Aaron Sherman (ajs@ajs.com)
  • The hardware is not the limitation, so much as our ability to design the software with the complex adaptive ability of the human brain.

    It may well be that we'll never be able to design such software.

    However, we could evolve it. Using genetic algorithms and other "evolutionary" programming approaches [faqs.org] seems to me the most promising approach.

    Tom Swiss | the infamous tms | http://www.infamous.net/

  • Look, you just don't seem to understand. NO computer existing today can understand natural language. That is the "huge" technical problem. And by huge I really do mean huge.

  • What you describe is very much like the neural net approach. Neural network research is not currently stymied by lack of hardware power - you can add as much hardware as you want, but we don't know how to organise it so it learns in a really intelligent way. Yes they can do simple tasks, but not everything (and another problem is, they're hard to verify/test - black-box testing is the only option, so they're not suitable for critical systems).

  • by Hard_Code ( 49548 ) on Thursday August 02, 2001 @04:36AM (#2177277)
    I'm still waiting for humans to exceed human intelligence...we're all so obsessed about what the "robots" will do in the future when they get smarter than us. The present sucks already.

    scold-mode: off
  • The human race has already been through a singularity. Its aftermath is known as "civilization", and the enabling technology was agriculture, which first made it possible for humans to gather in large permanent settlements.

    There are a few living humans who have personally lived through this singularity... stone-age peoples in the Amazon and Papua New Guinea abruptly confronted by it. For the rest of the human race it was creeping and gradual, but it still fits the definition of a singularity: the "after" is unknowable and incomprehensible to those who live in the "before".

  • There are other possibilities as well for SETI's lack of success. Our solar system and our planet may be fairly unusual in some ways:

    • the Sun is constant (not a variable star)
    • the Sun is a single star (not a binary or multiple star)
    • the presence of Jupiter in its current location (large planet gravitationally deflects and sweeps away comets and small asteroids that cause catastrophic extinctions)
    • nearly circular orbits (many of the extrasolar planets that have been discovered are in highly eccentric orbits)
    • the presence of the Moon (large satellite that causes tides, which are important in the development of terrestrial life)
    • plate tectonics (present on Earth but not on Venus, may be crucial)
    • the positioning of the Earth in the "habitable zone" of the solar system (5-10% closer to or farther from the Sun, and advanced life wouldn't develop)
    • the positioning of the Sun in the "habitable zone" of the galaxy (too much farther out and metals are too sparse and element ratios are unsuitable, too much closer and you run the risk of mass extinctions from supernovas and the like)

    Probably bacteria-like life is extremely common, but advanced intelligent life might in fact be somewhat rarer than was once thought.

  • by Hydrophobe ( 63847 ) on Thursday August 02, 2001 @06:00AM (#2177280)

    Another strong possibility (for SETI's lack of success) is that intelligent races prefer virtual reality to real reality, in much the same way that the human race prefers to sit inside watching TV instead of going outside for a walk in the woods and grasslands where we evolved.

    When we have better-than-Final-Fantasy rendering in real time, most of the human race will probably choose to spend most of the day living and interacting there, in virtual-reality cyberspace... in much the same way that many of us today spend most of our day in an office environment, living and creating economic value in ways incomprehensible to our hunter and farmer ancestors.

    When this happens, the planet may seem empty in many ways... in much the same way that suburban streets in America seem empty to a third-world visitor used to bustling and noisy street life.

    This phase (human race moves into and settles cyberspace, become less visible in the physical world) is not the same as the Singularity. For one thing, it is not at all dependent on future advances in artificial intelligence... we just need ordinary number-crunching computers a few orders of magnitude faster than today.

    If the AI naysayers are right, and machines never get smart enough, then the Singularity will never happen... but the "ghost planet" scenario will inevitably happen in our lifetime... either as a result of progress, or as the unhappy result of plague or nuclear war.

  • Google [google.com] turned up this:

    True Names - the novel by Vernor Vinge [gatech.edu]
    Comment by the transcriber ... Bluejay Books) TRUE NAMES
    VERNOR VINGE Bluejay Books Inc. All the ...
    progoth.resnet.gatech.edu/truename/truename.htm - 101k - Cached [slashdot.org] - Similar pages [slashdot.org]

  • Progress could, for instance, be exponential.

    The function y = 2^x has no vertical asymptote - it becomes ever higher, ever steeper, but for each value of x there is a finite value of y.

    Let x = time, let y = technological level (if such a concept is reducible to a single number) and this may be a model of our progress under ideal conditions, free from setbacks like plagues & nuclear wars... and it is completely free of a singularity.

    I have yet to hear a good reason why this model is not a better one than the singularity idea, other than wishful thinking & that the singularity makes a better story. But let's not confuse SF with the real world.
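    To make the two candidate models explicit (standard notation, nothing here is from the article): exponential growth is finite at every time, while the usual "singularity" toy models are hyperbolic and blow up at a finite time T,

        y_{\mathrm{exp}}(t) = 2^{t} < \infty \quad \text{for every finite } t,
        \qquad
        y_{\mathrm{hyp}}(t) = \frac{1}{T - t} \to \infty \ \text{as } t \to T^{-}.

    The singularity argument needs the data to follow the second family, not the first; which family the historical record actually fits is exactly the empirical question in dispute.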

  • This is still in the realm of science fiction. Literally, there are many machines that are smarter than humans, oh like the humble calculator. Can you do math that quickly?

    The converse is: can the calculator understand algebra or calculus? Nope. Do we currently have machines that aren't as smart as humans but can understand/simulate human mentation? Nope. (I certainly don't think Cyc the database qualifies, or that Darwinian algorithms have intelligence.) Can 'very briefly' equal never? Yep.

    I don't know when or why a certain part of the geek contingent became unable to tell the difference between fiction and reality, but this transhumanist/Kurzweil/extropian stuff won't work until we have a working model of consciousness that can be verified experimentally.

    Vinge and his ilk are this generation's Tim Leary, lots of optimism and futurism but feet planted strictly in the sky.
  • Thanks for bringing up Lem. He is one of my favorite science fiction authors. I have plugged his work and ideas on Slashdot several times. (His Master's Voice, re: SETI online)
  • The scariest part of Deepness for me was his idea of Focus - a biotechnology for inducing hacker trance artificially and indefinitely in humans. Focus was the basis of the power of the bad guys in the novel - their automated systems had super-human reasoning abilities because they were based on networks of Focused humans and computers.

    We had better hope that AI (and hence the Singularity) is indeed possible, because if it isn't, Focus is almost certainly possible, and with it tyranny on a scale we can barely imagine.
  • Ken MacLeod noted this in a Salon article [salon.com].

  • Didn't you just violate their copyright yourself by publishing that notice? I'd think "all materials" means just that: all materials on the site, including the copyright notice. Now, where can I report you to the NYTimes?
  • If you look at True Names in Amazon, you'll see that it's going to be reissued Real Soon Now, with a bunch of introductory essays on various topics, as True Names and the Opening of the Cyberspace Frontier. Friends of mine wrote some of the essays, so I've been interested in getting a copy. Unfortunately, it's been going to come out Real Soon Now for about 5 years, and every 6-12 months the publication date slips another 6-12 months - This time for Sure! Some of the essays that were cutting-edge when they were written are going to start to look like old science-fiction by the time they actually get published...
  • Thanks for the post, and while I (naturally) agree with your conclusion that AI is a software rather than a hardware problem, your comment

    ...one often neglects to notice the fact that human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop!
    only describes the calculations we carry out consciously. This doesn't really apply to the autistic lightning calculators - or even to us when we're doing the calculus to, say, catch a ball or drive a car. Trying to think about what you're doing under those circumstances tends to make the task quite a bit harder...

    (Is consciousness over-rated? :)

    Is there anyone out there who knows more maths than me who's willing to tell me what my brain can do that a neural net of sufficient size can't?

  • WHERE IS MY FLYING CAR?!!

    I like his books, but his predictions about the future are about as likely as those in the 50's stating that we would all have our own flying vehicle by now.
  • The word 'paradoxon' has a nice ring to it, and looks as though its root word is 'paradox'. What exactly does it mean?
  • I took his microcomputer architecture class at SDSU back in '93. He was probably the best teacher I've had so far, being really clear and logical. Not to mention a hardcore assembly programmer, having us do labs using the multitasking mini-OS he had written in 68K assembly...
  • The scariest part for me was that Focus is a plot device to let the author talk about us. The Focused people were, as you mention, hackers, and they were slaves. The point (for me, at least) is not that some super-biotech could be created to convert humans into willing slaves -- it's that we hackers already willingly enslave ourselves. Our central philosophy puts our focus on doing the work first, and being paid for the work second. As long as our employer continues to give us interesting puzzles to solve, and interesting tools to solve those puzzles with, we will be his willing slave.

    Scares the shit out of me.

  • A great book, and a (somewhat humorous) look at what might happen if the Singularity cannot be reached, is A Deepness in the Sky. The common counter-argument to the future of incredibly intelligent AI is that we can't even write a word processor without bugs right now--well, Deepness takes that idea and runs with it. In that future, AI, FTL travel, and all those fun science fiction ideas were never realized. Instead, people have to deal with spending years going between stars, isolated civilizations rising and collapsing over and over again, and 10,000 years of legacy code. The hero of the book gets much of his power from the fact that he actually understands a decent amount of the legacy code.

    Vinge has made it fairly clear that he doesn't think that Deepness is where society is going--he seems fairly confident that we'll reach the Singularity.

    ~=Keelor

  • so I'm fairly confident I'll live to see computers at least as intelligent as I am. And I'm 54.

    Well, that doesn't say much. Because either A) you're not very bright or B) you live a very safe and healthy life, so you expect it to be looong. ;-)

    But seriously, don't you think there's a huge step from building an artificial neurological brain to making it actually work? We may imitate some internal processes in the neurons, but the brain has a huge and complex architecture suited for human activity and body. I believe it can be done, roughly, but if it's going to be in MY lifetime there'll have to be HUUGE advances soon.

    I don't believe these AIs will be comparable to humans that soon though. Much of human thinking is not logical at all. If we were to only live perfectly logical lives, I think I'd vote myself out of "humanity". Because much of our joy and fun is not logical at all.

    Then again, it all really depends on what you mean by intelligence too. That's just another can of worms, making such statements completely arbitrary.

    - Steeltoe
  • Maybe you would be better off writing science fiction. It's easier than creating a "superintelligence beyond any human IQ".

    And if you're really serious about this, remember that a lot of clever people have tried this before you, and utterly failed.

    Good luck anyway!

  • Roger MacBride Allen's Hunted Earth series volume one: The Ring of Charon. Volume 2 is The Shattered Sphere. It's been a while since I read them and I don't remember the knowledge crash stuff, but they are pretty good hard scifi IIRC.

    Allen also wrote a book called The Modular Man, about a man who downloads his personality into a robotic vacuum cleaner, that is excellent and deals with many of the same concepts Dr. Vinge is talking about.

  • by Satai ( 111172 ) on Thursday August 02, 2001 @03:24AM (#2177298)
    One of the more interesting ideas I've read about is the Knowledge Crash. I'm not entirely sure how feasible a theory it is, but it was proposed in a science fiction series I read a while back. (The first book was called something about Charon... Ring of Charon? Moon of Charon? Something like that.)

    The idea is, basically, that every year it costs more to educate someone. In order to be able to expand our collective knowledge, or even to utilize the machines and operate the systems of the present, it will cost a certain amount of money in the education process.

    In addition, we can quantify the amount of output a single human creates in his or her lifetime. For instance - if she works for thirty years at a power plant or something, we can determine the value that she has contributed to society.

    As systems become more complex, more education is required. The education costs more money. At some point, if this continues unchecked, we will be faced with a situation where the cost of education exceeds the value brought as a result of that education.

    That's called the Knowledge Crash. (Or it was in the books.)

    While I'm not convinced that this is true, it's certainly an interesting theory. It seems to me that, on average, this can't happen, as one of the points of creating more and more complicated (generic) systems is to facilitate simpler and simpler controls, and thus dumber and dumber operators. While the creators of those systems may have 'crashed knowledge,' it seems that the whole point of that would be to hurl some value at the workers.

    But then you have to consider that, inherent in the value of a designer, the ease of use is part of the entire value analysis versus education, and then that'll crash...
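    The claim is easy to turn into a toy model (the numbers below are mine and purely illustrative, not from the books): if education cost grows faster than lifetime output, the curves must cross, and you can compute the crash year.

        # Toy Knowledge Crash model: education cost grows 5%/yr while the
        # value of a lifetime's output grows 2%/yr (made-up rates).
        cost, value = 50_000.0, 1_000_000.0
        year = 0
        while cost < value:
            cost *= 1.05
            value *= 1.02
            year += 1
        print(year)   # ~104 years until cost overtakes output at these rates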
  • But seriously, don't you think there's a huge step from building an artificial neurological brain to making it actually work? We may imitate some internal processes in the neurons, but the brain has a huge and complex architecture suited for human activity and body. I believe it can be done, roughly, but if it's going to be in MY lifetime there'll have to be HUUGE advances soon.

    While skepticism is a fine sentiment, I can't help noticing that you are making more assumptions than Vinge is. Sure, we will only be able to simulate or imitate the brain roughly -- but I think it is a stretch to demand that consciousness come only from detailed imitation. It may be that roughly is enough. The brain is also a finite machine - we will soon be able to build electronics that exceed its capacity.

  • Your claim is that the measure is not a good criterion of reaching "intelligence". Which measure? Of what?

    For example, storage. It seems that we can build hard disks in excess of human brain capacity. But static storage is incomparable to the dynamic kind of storage the brain has. So - wrong measure.

    Another example: FLOPS. The human brain is a massively parallel computer; microchips are not. Now, it is claimed that you can simulate a parallel computer on a single-chip one. Admittedly, the difference between possibility and practicability is huge. But if the brain is a massively parallel computer, then a sufficiently fast chip will get to the level where it has comparable compute power: just run a brain simulation on this computer. If the brain is not, then again - wrong measure.

    We can just go on, finding the right measure. I think, all things considered, that measure will be exceeded, and by that time we will have a conscious computer.
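    For what the "sufficiently fast chip" arm of that argument needs, the usual back-of-envelope runs like this (every input is a rough, commonly quoted assumption, and whether this is the right measure is exactly the question):

        # Serial-equivalent throughput of the brain's parallelism,
        # using rough, commonly quoted assumptions.
        neurons  = 1e11   # ~10^11 neurons
        synapses = 1e4    # ~10^4 synapses per neuron
        rate_hz  = 1e2    # ~100 updates per second
        print(f"{neurons * synapses * rate_hz:.0e} synaptic events/sec")  # 1e+17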

    At some level, you either have faith, or you don't.

  • One of the best pieces of indirect evidence for the inevitability of the Singularity is the Fermi paradox, and, to a lesser extent, SETI's lack of success (to date).

    Fermi showed that, given reasonable assumptions, we ought to expect "ET" to be ubiquitous. Since extraterrestrials are not all about us, this suggests either technologic civilizations are exquisitely rare or that they rapidly lose behaviors like migration and radio communication. By rapidly I mean within two to three hundred years.

    The Singularity is the kind of event that would do that. If technologic civilizations always progress to a Singularity they may well lose interest in minor details like reproduction and out migration. Among other things they would operate on very different time scales from pre-Singular civilizations.

    See also http://www.faughnan.com/setifail.html [faughnan.com].

    john
    --
    John Faughnan

  • I appreciate the reply, but naturally I disagree.

    A few comments:

    1. Fermi's calculations assume a civilization with light speed technology that expands from planet to planet every few hundred to thousand years. No highly advanced technology is required for such a civilization to colonize the galaxy within tens of thousands of years -- just exponential growth. See for a Rumanian example.

    2. The point of my argument is that it's not likely that post-Singular civilizations are driven by the same things that drive biological organisms (growth, expansion, etc). For one thing their time scales are different from biological organisms; it's not hard to imagine that they exist in a time-space that's thousands of times faster than ours.

    Here's my argument in summary:

    1. Fermi's paradox implies that no technologic civilization ever survives.
    2. The Singularity is a good example of a disaster or transformation that may strike all technologic civilizations of all forms, probably before they are able to travel between stars.
    3. Since there are not likely to be any star-faring civilizations, and since the Singularity is a reasonable transformation event common to all technologic civilizations, it is probable that it is the Singularity that ends a civilization's interest in expansion and exploration.
    www.faughnan.com/setifail.html

    --
    John Faughnan
  • Since I scored a big fat [1] for this piece, I suppose I might as well reply to it. It turns out, not surprisingly, that someone else has thought of the implications of the Singularity for SETI.

    See Kurzweil's article and search on SETI.

    I may have thought of it earlier (a year ago or so), but I didn't think I was the only one who thought of it.

    see www.faughnan.com/setifail.html
    --
    John Faughnan

  • Vinge does not require the advancement of computers to the point at which they are regarded as intelligent. This is only one of several possibilities mentioned in his paper [caltech.edu].

    Other possibilities include:

    • "Waking up" of computer networks.
    • Humans using sophisticated HCI [wikipedia.com]. (e.g. Vinge's Focused, Stephenson's Drummers)
    • Genetically altered humans. (Card's Descolada?)
  • Your recollection, lacking exactly the substantiation you mention, is worthless. You can find plenty of detailed comments using Google.

  • 'Deepness' [amazon.com] is the prequel to 'Fire Upon the Deep', and even better. Read it first.

    While there is more discussion about non-human intelligence in 'Fire', the actual impact of Vinge's idea is greater in 'Deepness', where his excellent world-building skill is used to create the best traditional SF I know.

    Both 'Deepness' and 'Fire' also feature some really neat alien races.

  • The graveyard of artificial intelligence is on the second floor of the William Gates Computer Science Building at Stanford. Go through the double doors west of the elevators, and you'll reach a large room where faded gold letters spell out "Knowledge Systems Lab".

    Below the sign, two rows of empty cubicles hold obsolete computers and out-of-date manuals. In a nearby lounge, old issues of Wired from the early '90s lie on tables. Dusty boxes are stacked against one wall. Few people are about. Nearby office doors bear the names of researchers who had their fifteen minutes of fame around 1985.

    This is where the dream died. The Knowledge Systems Lab was the headquarters of the expert systems faction of artificial intelligence. These were the people who claimed that with enough rewrite rules, strong AI could be achieved.

    It didn't work. And that empty room, frozen in the past, is what remains.

  • > Artificial intelligence will never have emotions.

    Snicker. I think Dr. Vinge is right...and I think it is scary. If you are familiar with electronics, think about how a diode avalanches.

    If he is correct, AI could well "avalanche" past what evolution gave us in a very, very short period of time.

    Humans learn at a given pace. We are nearly helpless at birth, yet can be a "MacGyver" in our twenties and thirties, able to make a helicopter gunship from nothing but baling wire and duct tape (on TV, anyway). That's a 20-30 year span, nearly a quarter of our lives, to reach our maximum potential.

    Who is to say an AI system could not, at some point, triple its cognitive abilities in a 100 ns time slice?
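
    To put numbers on the avalanche: the tripling factor and the 100 ns slice are the hypotheticals above, so this is only an illustration of how fast such a loop would compound:

      SLICE_NS = 100     # hypothetical self-improvement time slice
      FACTOR = 3.0       # hypothetical gain per slice

      slices = 1_000 // SLICE_NS    # slices in one microsecond
      print(FACTOR ** slices)       # 3**10 = 59049x per microsecond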

    And to think I didn't take his class cuz some lamer told me he was a "hard ass" -- rats. That's what I get for listening to lamers. SDSU has so many wonderful Professors...Vinge, Baase, Carroll. Great University, great professors, great memories.



    Treatment, not tyranny. End the drug war and free our American POWs.
  • Oh, NOOO! When machines become supersmart, they'll run the world, because right now the smartest people are the ones who're running the world, and supersmart machines will be even -- oh, wait, never mind. In the words of Jurgen, cleverness is not on top, and never has been.
  • It continues to be unclear to me just exactly how smartness = world domination. Experience would seem to indicate that very high intelligence is in fact associated with a decreasing likelihood of achieving substantial political power. Abuse is all very fine, but I'd appreciate seeing a mechanism here, not handwaving.
  • ...by not reading the damned article at all. There are enough freely browsable sources of information that I do not need the online New York Times. Once in a while I send the NYT an email telling them so.

    Hmm... I wonder: should I have an ethical dilemma about reading commentary by people who have read the article in violation of copyright? I think not, since I have entered into no agreement with the NYT.

  • by clary ( 141424 ) on Thursday August 02, 2001 @04:55AM (#2177315)
    > Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon.
    We don't necessarily need to crack the strong-AI problem to push us into a singularity. Exponential progress in technological capability in general will do the trick once we hit the elbow of the curve, if it has one (which is a bit tricky to see from this side). The sketch at the end of this comment shows how quickly that compounds.

    Because of stupid, but fast, computers, we are headed toward being able to hack our DNA (and/or proteins). This will certainly produce incremental gains in lifespan and health...perhaps it will produce dramatic ones.

    Because of stupid, but fast, computers, we can simulate physical processes to enable us to engineer better widgets. Perhaps this will make routine space travel economical.

    Because of stupid, but fast, computers, we are heading toward having the bulk of human knowledge instantly available to anyone with a net connection. How will this leverage technical progress?
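
    A sketch of how "stupid but fast" compounds, assuming (and it is only an assumption) a Moore's-law-style doubling every 18 months:

      DOUBLING_MONTHS = 18  # assumed doubling period for raw capability

      for years in (10, 20, 30):
          factor = 2 ** (years * 12 / DOUBLING_MONTHS)
          print(f"{years} years: ~{factor:,.0f}x")
      # 10 years: ~102x; 20 years: ~10,321x; 30 years: ~1,048,576x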

  • by Dr. Spork ( 142693 ) on Thursday August 02, 2001 @03:52AM (#2177316)
    One thing I don't get is why something very intelligent would be inherently unpredictable. Should Christians think that, because the God they believe in is supposed to be supremely intelligent, His actions are totally unpredictable by us? Might He send the pious to hell and the wicked to heaven? I don't see much of a relationship between intelligence and predictability. The most unpredictable people I know are dumb.

    Another thing has to do with this "let's fear AI" genre of SciFi in general. Why does no one challenge the assumption that when artificial creatures develop intelligence and a personality, that personality will inevitably be indifferent, power-hungry and cold? Isn't it just as easy to imagine that artificially intelligent creatures/machines will strike us as being neurotically cautious, or maybe friendly to the point of being creepy? Maybe they'll become obsessed with comedy or math or music. Or video games.

    Realistically, I think the first machines we take to be intelligent will be very good at means-to-ends reasoning but unable to deliberate about ends (i.e., why one sort of outcome should be preferable to another). I would argue that even we humans can't really deliberate about ends. At some point we hit hard-wired instincts. Why, for example, is it better that people are happy rather than suffering? The answer is just a knee-jerk reaction on our part, not some sort of reasoned conclusion.

    When we create AI, we will have the luxury of hard-wiring these instincts into intelligent machines (without some parameters specifying basic goals, nothing could be intelligent, not even us). Humans and animals are basically built with a set of instincts designed to make them survive and fuck and make sure the offspring survive. There is no reason to think AI creatures would necessarily have these instructions as basic. I'm sure we could think of much more interesting ones. The consequence is that AI creatures might be more intelligent than we are, but in no way sinister.

  • You miss a very key point...

    As things get more complex, they get refined into modular pieces.

    It takes only a little more training to drive a modern Ford Taurus than a 1930s Packard.

    This holds true even when fixing the car. Mechanics don't rebuild alternators anymore, they replace them.

    Computer technicians don't use a soldering iron anymore. They replace the defective video card!

    This pattern holds with software, as well. Remember when C, today's "low level" language, was considered very inefficient and bloat-ridden? How about Perl? (Now fast enough to decode a DVD movie on the fly with moderate hardware!)

    The real danger here is not that we'll have a knowledge crash, but that we'll keep dumbing everybody down to the point where, to run anything, you push a red button. If the red button doesn't work, we have a REAL crash...

    -Ben

  • iapetus wrote: "Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon."

    The problem from the perspective of a working neuroscientist is that we don't yet understand how the brain is intelligent. On the other hand, things are starting to fall into place. For example, we have a hint of why neural synchronization occurs in the brain, because we're beginning to realize that time synchrony is something many neurons are very good at detecting. We're also beginning to understand memory formation in the cortex. It seems to involve the creation of clusters of synapses, and those clusters get activated by time-synched signals. There's some evidence for analog computation, and there's some evidence for almost quantum computation. So we're beginning to understand how to build a brain. That seems to be the hump, so I'm fairly confident I'll live to see computers at least as intelligent as I am. And I'm 54.

  • Across Realtime is one of my all-time favorite SF novels. In it he introduces 'bobbles' (stasis fields) and the 'technological singularity'. What's interesting is that rather than just inventing some new technology and going Oooh Ahhh over it, he lets the story follow how people quickly adopt the new technology and start playing with it. What's amazing is that he wrote this back in the early '80s, and yet nothing he wrote about computers seems dated - that's foresight!
  • by Mentifex ( 187202 ) on Thursday August 02, 2001 @02:48AM (#2177325) Homepage Journal
    Technological Singularity by Vernor Vinge -- available online at http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html [caltech.edu] -- is the scariest and yet most inspiring document I have ever read on artificial intelligence. AI is being implemented slowly but surely on SourceForge at http://mind.sourceforge.net [sourceforge.net], in JavaScript for Web migration and in Forth for robots, evolving towards full civil rights on a par with human beings and towards a superintelligence beyond any human IQ, as described so eerily and scarily by Vinge. It used to be that I did not like Vinge's science fiction, but right now I am thoroughly enjoying A Deepness in the Sky.
    --
  • I disagree. Humans and other animals may be poor (relatively) at paper-and-pencil mathematics, but they are quite good and fast at innate math. Huh? Well, tossing a basketball through a hoop requires unconscious calculation to make the muscles add the correct energy to the throw; the ball must be pushed in the correct direction to make up for player movement relative to the hoop, and so on. A lion, likewise, must calculate an efficient pursuit trajectory when its prey bolts. A lion doesn't run to where the prey is; it predicts and compensates for the prey's movement to form an intercept course.

    This happens all the time and unconsciously with ALL creatures with a brain. It does involve math and it is automatic. Not too bad.
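
    The unconscious ballistics can be written out explicitly. A sketch of the basketball case, with made-up distances and angle (standard projectile kinematics, nothing more):

      import math

      G = 9.81                  # m/s^2
      d = 4.6                   # assumed horizontal distance to hoop, m
      h = 3.05 - 2.0            # hoop height minus assumed release height, m
      theta = math.radians(52)  # assumed launch angle

      # Projectile path: y = x*tan(theta) - G*x^2 / (2*v^2*cos^2(theta)).
      # Setting y = h at x = d and solving for the launch speed v:
      v = math.sqrt(G * d**2 /
                    (2 * math.cos(theta)**2 * (d * math.tan(theta) - h)))
      print(f"required launch speed: {v:.2f} m/s")   # ~7.5 m/s

    The brain solves the equivalent problem, plus moving targets, without ever writing the equation down.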

    Then there is the difference between a machine calculating a formula and the human who HAD to derive it. No machine creates new formulas or mathematics. They ONLY calculate that which humans, in their creativity, slow as it may be, are able to devise. Quantum math, relativity, calculus... humans are slow to calculate the answers but very good at coming up with the formulations and rules.

  • What is so hard about respecting a minimal request by the publishers of the article? Just register a name with them and read it off their site. The more people do things like this (repost entire articles), the more likely the NY Times is to stop providing articles on its web site.
  • > The brain is also a finite machine - we will soon be able to build electronics that exceed that capacity.

    Which is completely irrelevant. The human nervous system evolved over the course of billions of years and works in a very specific and detailed fashion, most of which is still a mystery to us at the computational level. Without reproducing all that evolved design, we would not have anything like human intelligence.

    We already have machines that exceed human physical capacity in every way, but we have not yet been able to create a robotic construction worker. Why should just throwing teraflops at the intelligence problem go any further towards solving it?

    Tim

  • This thread reminds me why I don't read science fiction any more. I recall Vinge as a terrible writer, with hackneyed adventure plots, characters with the dimensionality of a cardboard cutout, and themes that reduce to "ain't science grand?" I read through this entire thread at mod level one, and sure enough, there is not a single comment about his stories or his characters, because those barely exist. Apparently no one here minds that little fact. Critical standards in science fiction are abysmally low; the fans routinely gargle boatloads of watery spooge as if it were fine champagne.

    It's sad that there's not a better venue for scientific speculation per se. If there were, people with no ear for fiction, such as Vernor Vinge, Robert Forward and Isaac Asimov, would not feel themselves forced into quasi-fictional exercises that demean both themselves and the storyteller's art.

    Tim

    • Automatons will not be able to easily jump beyond logic by themselves, people will always be needed to teach them how.

    Hmm. The trouble is that very, very few humans can actually make intuitive leaps. I can think of the guy (or gal) who figured out fire, Da Vinci, Edison, Einstein, a handful of others. Most of us just make tiny tweaks to other people's ideas.

    Bizarrely, given sufficient processing power, it might be more efficient to produce a speculating machine (one that can design a random device from the atomic level up, with no preconceptions about usage, and then try to find a use for it) than to try to identify humans who can actually come up with ideas that are genuinely new.

  • by Rogerborg ( 306625 ) on Thursday August 02, 2001 @02:58AM (#2177358) Homepage

    The most succinct Vinge quote [mbay.net] that I can think of is:

    • To the question, "Will there ever be a computer as smart as a human?" I think the correct answer is, "Well, yes. . . very briefly."
  • by jparp ( 316662 ) on Thursday August 02, 2001 @03:34AM (#2177365)
    Ray Kurzweil seems to be making Vinge's singularity his life's work:
    http://www.kurzweilai.net/news/

    And then there's the non-profit corporation, the Singularity Institute for Artificial Intelligence, which is determined to bring the Singularity about as soon as possible:
    http://www.singinst.org/
    There are a lot of good Vinge links on that page too, btw

    Singinst seems to be the brainchild of this guy:
    http://www.wired.com/news/technology/0,1282,43080,00.html
    who has a lot of interesting docs here:
    http://sysopmind.com/beyond.html

    Don't miss the FAQ on the meaning of life; it's great reading.
  • > where machines suddenly exceed human intelligence and the future becomes completely unpredictable

    I thought the future was already unpredictable.

    About the intelligent machines, I think the error is falling into the "biology" trap. Our whole perception system is conditioned by the ideas of "survival", "advancement", "power", and "consciousness", among others. Those come from our setup as living entities, trapped in a limited-resources environment, having to compete for those resources. The fact that a machine is intelligent won't make it conscious, or interested in survival or power; there is no obvious relation. If you were to threaten a machine more intelligent than you with cutting its power supply, it would perhaps be politely interested, but no more. That is, if the development of the machine is done through "traditional" procedures. I would be wary of genetic-algorithm-style development - that could create a thinking and competitive machine :o)

    There are things that we cannot even imagine. One of them is the workings of our own brains. Another is how a thinking machine would act. Of course, some are more interesting to write a book about than others. But this isn't SF to me - more like fantasy.

    --

  • I guess you are right.
    Now that I have seen my error, can I correct it by withdrawing my post? Can anyone tell me how?
    (This is not intended as a troll)
  • by Ubi_UK ( 451829 ) on Thursday August 02, 2001 @02:42AM (#2177377)
    SAN DIEGO -- VERNOR VINGE, a computer scientist at San Diego State University, was one of the first not only to understand the power of computer networks but also to paint elaborate scenarios about their effects on society. He has long argued that machine intelligence will someday soon outstrip human intelligence.

    But Dr. Vinge does not publish technical papers on those topics. He writes science fiction.

    And in turning computer fact into published fiction, Dr. Vinge (pronounced VIN-jee) has developed a readership so convinced of his prescience that businesses seek his help in envisioning and navigating the decades to come.

    "Vernor can live, as few can, in the future," said Lawrence Wilkinson, co-founder of Global Business Network, which specializes in corporate planning. "He can imagine extensions and elaborations on reality that aren't provable, of course, but that are consistent with what we know."

    Dr. Vinge's 1992 novel, "A Fire Upon the Deep" (Tor Books), which won the prestigious Hugo Award for science fiction, is a grand "space opera" set 40,000 years in a future filled with unfathomable distances, the destruction of entire planetary systems and doglike aliens. A reviewer in The Washington Post called it "a wide-screen science fiction epic of the type few writers attempt any more, probably because nobody until Vinge has ever done it well."

    But computers, not aliens, were at the center of the work that put Dr. Vinge on the science fiction map -- "True Names," a 30,000-word novella that offered a vision of a networked world. It was published in 1981, long before most people had heard of the Internet and a year before William Gibson's story "Burning Chrome" coined the term that has come to describe such a world: cyberspace.

    For years, even as its renown has grown, "True Names" has been out of print and hard to find. Now it is being reissued by Tor Books in "True Names and the Opening of the Cyberspace Frontier," a collection of stories and essays by computer scientists that is due out in December.

    "True Names" is the tale of Mr. Slippery, a computer vandal who is caught by the government and pressed into service to stop a threat greater than himself. The story portrays a world rife with pseudonymous characters and other elements of online life that now seem almost ho-hum. In retrospect, it was prophetic.

    "The import of `True Names,' " wrote Marvin Minsky, a pioneer in artificial intelligence, in an afterword to an early edition of the work, "is that it is about how we cope with things we don't understand."

    And computers are at the center of Dr. Vinge's vision of the challenges that the coming decades will bring. A linchpin of his thinking is what he calls the "technological singularity," a point at which the intelligence of machines takes a huge leap, and they come to possess capabilities that exceed those of humans. As a result, ultra-intelligent machines become capable of upgrading themselves, humans cease to be the primary players, and the future becomes unknowable.

    Dr. Vinge sees the singularity as probable if not inevitable, most likely arriving between 2020 and 2040.

    Indeed, any conversation with Dr. Vinge, 56, inevitably turns to the singularity. It is a preoccupation he recognizes with self-effacing humor as "my usual shtick."

    Although he has written extensively about the singularity as a scientific concept, he is humble about laying intellectual claim to it. In fact, with titles like "Approximation by Faber Polynomials for a Class of Jordan Domains" and "Teaching FORTH on a VAX," Dr. Vinge's academic papers bear little resemblance to the topics he chooses for his fiction.

    "The ideas about the singularity and the future of computation are things that basically occurred to me on the basis of my experience of what I know about computers," he said.

    "And although that is at a professional level, it's not because of some great research insight I had or even a not-so-great research insight I had. It's because I've been watching these things and I like to think about where things could go."

    Dr. Vinge readily concedes that his worldview has been shaped by science fiction, which he has been reading and writing since childhood. His dream, he said, was to be a scientist, and "the science fiction was just part of the dreaming."

    Trained as a mathematician, Dr. Vinge said he did not begin "playing with real computers" until the early 1970's, after he had started teaching at San Diego State. His teaching gradually shifted to computer science, focusing on computer networks and distributed systems. He received tenure in 1977.

    "Teaching networks and operating systems was a constant source of story inspiration," Dr. Vinge said. The idea for "True Names" came from an exchange he had one day in the late 1970's while using an early form of instant messaging called Talk.

    "Suddenly I was accosted by another user via the Talk program," he recalled. "We chatted briefly, each trying to figure out the other's true name. Finally I gave up and told the other person I had to go -- that I was actually a personality simulator, and if I kept talking, my artificial nature would become obvious. Afterwards I realized that I had just lived a science fiction story."

    Computers and artificial intelligence are, of course, at the center of much science fiction, including the current Steven Spielberg film, "A.I." In the Spielberg vision, a robotic boy achieves a different sort of singularity: parity with humans not just in intelligence but in emotion, too. "To me, the big leap of faith is to make that little boy," Dr. Vinge said. "We don't have evidence of progress toward that. If it ever happens, there will be a runaway effect, and getting to something a whole lot better than human will happen really fast."

    How fast? "Maybe 36 hours," Dr. Vinge replied.

    Dr. Vinge's own work has yet to make it to the screen, although "True Names" has been under option for five years. "It's been a long story of my trying to convince studio executives to really consider the work seriously because it seemed so far out," said David Baxter, a Hollywood writer and producer who is writing the screenplay with Mark Pesce, co-creator of Virtual Reality Modeling Language, or VRML. "But as time has passed, the world has started to match what was in the book."

    In the meantime Dr. Vinge has been providing scenarios in the corporate world as well. He is one of several science fiction writers who have worked with Global Business Network in anticipating future situations and plotting strategies for several major companies.

    Mr. Wilkinson, the co-founder of Global Business Network, said that Dr. Vinge's work with the group provided "an unbelievably fertile perspective from which to look back at and reunderstand the present."

    "It's that ability to conceptualize whole new ways of framing issues, whole new contexts that could emerge," Mr. Wilkinson said. "In the process he has contributed to the turnarounds of at least two well-known technology companies."

    Dr. Vinge, shy and reserved, is hardly a self-promoter. He scrupulously assigns credit to others whenever he can. And although he insists that much of his work is highly derivative, his fans do not necessarily share that view.

    "The thing that distinguishes Vernor is he's a scientist and all of his stuff makes sense," Mr. Baxter said. "It's all grounded in the here and now."

    Dr. Vinge is now a professor emeritus at San Diego State, having retired to devote his time to his writing and consulting. Over lunch at a restaurant not far from the university, he described a story he was working on.

    "Well, there's a recovering Alzheimer's patient," Dr. Vinge began, before being interrupted and asked how one could be a recovering Alzheimer's patient.

    His eyes brightened. "You can't," he said, and a sly smile crossed his face. "Yet."

  • I'm not sure I agree that AI is a software problem, because I don't see how regular human intelligence is a software problem. There is no software that comes with a newborn. A newborn is a complex system that comes out of the womb ready to learn. It's already thinking. You could argue that it has an OS - instincts, genetic instructions - but really, what if there were a hardware copy of a baby, only made of silicon (or whatever)? If it were constructed properly, it should pop out of the vat ready to learn.

    I guess I'm arguing that intelligence is a function of pathway complexity and self-referentiality (real word?).
    Maybe if we build it right - complex enough circuitry/pathways and enough self-referential ability, able to modify itself and its external environment, e.g., alter its own version of programmable logic controllers and move a Coke bottle with a robotic arm (yes, I did say "programmable", but I didn't say "fully pre-programmed") - maybe, like a newborn, if we build it right and simulate a little evolution along the way, the intelligence will come.
    I think the challenge is not coding intelligence, which sounds impossible to me, but building something that facilitates intelligence developing on its own - again, like a newborn. Not software, but hardware that has the "complex adaptive ability of the human brain".

    Granted the first one would be no more intelligent than a rabid gerbil, but that's a good start.
  • If the computers ever get angry at us, we can always just block out the sun by scorching the sky, thus rendering the computers powerless while the humans sit pretty.
  • Heh, the flawed assumptions are yours, I assure you. Dr. Vinge is considering technological sophistication from the Stone Age to the present in his theory, not computational power or any other specific measure.

    Consider that for perhaps millions of years we had fire and spears as our main tools. Then agriculture, then metallurgy, then language and communication, etc. Each epoch is marked by a revolution in technological sophistication, and each epoch shift occurs more and more rapidly, the intervals between them shrinking geometrically. Consider the advances of the last 100 years to see my point.

    In fact, the most recent great technological revolution is the global information network we are currently using to discuss the topic. Born less than 30 years ago, it has already saturated the planet, becoming nearly ubiquitous for the segment of the population at the front of the wave.
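
    The shrinking intervals can be eyeballed with rough (and certainly disputable) dates, in years before the present, for a subset of those epochs:

      epochs = {                 # all dates are rough assumptions
          "fire and spears": 500_000,
          "agriculture": 12_000,
          "metallurgy": 5_000,
          "industry": 250,
          "computing": 60,
          "global network": 30,
      }
      ages = list(epochs.values())
      for (name, age), nxt in zip(epochs.items(), ages[1:]):
          print(f"{name:>15}: ~{age - nxt:,} years until the next epoch")
      # Gaps: ~488,000 -> ~7,000 -> ~4,750 -> ~190 -> ~30 -- each epoch
      # arrives far faster than the one before.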
