
True Names

Fans of Vernor Vinge know that he's a computer scientist, now retired, and a science fiction writer. An interview we linked to a few months ago does a good job of discussing some of his ideas about the Singularity, the point in time when humans create a machine intelligence that is smarter than we are. Vinge's novella True Names was written in 1981, and it forecast many aspects of today's internet. True Names and The Opening of the Cyberspace Frontier is an anthology collecting the True Names novella and several shorter pieces by other technically-inclined folk. If you haven't read the original True Names, this book is worth it for that story alone.
True Names and The Opening of the Cyberspace Frontier
author: Vernor Vinge; ed. by James Frenkel
pages: 352
publisher: Tor
rating: 8/10
reviewer: michael
ISBN: 0-312-86207-5
summary: collection of articles by computer scientists predicting the future of the network

The history of this book is a little odd. It was supposed to be published several years ago, but was delayed for reasons unknown to me. As a result, only the introduction has been written recently; even the pieces that were intended to be extremely current are now rather painfully dated.

There's an old interview with Vinge in which the interviewer draws out a number of Vinge's ideas about the modern internet and the Singularity. Vinge seems to have had it in hand when writing his introduction to True Names, and you can probably get a good idea of what he tried to convey in the anthology by reading the interview. If it sounds at all interesting, read on.

Vinge's central point is that cyberspace is extremely controllable, if and only if everyone's true names are known. That's the point brought out in the novella True Names, and it's a point that the other writers featured in the anthology agree upon. It's an incredibly insightful idea, one well worth spending some time pondering.

Let's look at some of the larger pieces included in the book. Timothy May, who is perhaps best known for his ranting posts about crypto anarchy, has a lengthy and astonishingly well-written essay titled "True Nyms and Crypto Anarchy". The essay reads as if an editor with a firm hand extracted most of May's characteristic wild-eyed prose yet kept the insightful ideas behind it; if only all of his writing were like this essay. It's a great introduction to what May means by "crypto anarchy". May is one of the most optimistic writers in the book. He and the other writers believe that we are at a fork: either we'll move toward a surveillance state, or toward what May calls an anarcho-capitalist state, but the middle ground is unstable; we'll end up at one extreme or the other. May believes we're already firmly on the road toward anarcho-crypto-utopia.

John M. Ford, whom you may recognize as a science fiction writer, contributes a short story wondering what the machines will think of us.

Alex Wexelblat, a computer scientist, has a powerful essay looking at the internet as a tool for surveillance and control. Though it was written only a few years ago, many of its predictions are now fact.

Richard Stallman has his essay The Right to Read. Hopefully it will reach a larger audience on dead trees than in electronic form.

Leonard Foner has an essay covering the basics of the cryptography debate. It's geared to get newcomers up to speed and should do that adequately.

Chip Morningstar and F. Randall Farmer have a couple of essays about Habitat, a very early MUD sponsored by Lucasfilm. The essays have been published online; here's one of them, and there's been plenty written about Habitat if you look. Excellent reading: it brings out the challenges faced by any online community and simultaneously reminds you of the "good" old days (who here is paying by the hour for internet access today?).

And finally we come to True Names itself. It should be required reading in high school, IMHO. I won't discuss it much: either you've read it or you haven't, and if you haven't, I'd rather you learn about it by reading it. If you don't want to buy the book, there are unauthorized electronic versions of the text floating around, but one way or another, read it; it's worth your time.

I'm going to go back now to Vinge's introduction. It bears quoting:

"It seems to me that it's still an open question whether computers and networks will help or hurt freedom--but this is one place where the most extreme scenarios are also the most plausible. I think we could easily go in the direction Tim May indicates, perhaps ending up with a world very like the one in Neal Stephenson's Diamond Age. On the other hand, there are the "Four Horsemen" that Tim, Alan, and Lenny remark upon. All four Horsemen are good excuses for the incremental tightening of regulation and enforcement (some being more effective with one constituency than another), but I think the "Terrorist Horseman" is the one that could shift our whole society toward strict controls. Just a few really ghastly terrorist incidents would be enough to cause a sea change in public opinion. It's not hard to imagine the entire country run the way airports were run in the late twentieth century. But there are worse nightmares: Imagine a government that mandated control of some part of each communicating microchip. In that case, the computing power of the Internet could be used for much tighter control than George Orwell described." -- Vernor Vinge, August 1999

Today the "Terrorist Horseman" is in full charge, whipping us toward ever-tighter controls. And Vinge's prediction is embodied in the countless initiatives to install Digital Rights Management and government surveillance in every computing device. And that is why, in the end, I gave the anthology less than a 10/10 rating. Although I know it was written before the most recent events which proved it so accurate, it feels dated, as if we've already run at top speed down the road to a Net filled with surveillance, where the government and the MPAA know everyone's True Name, and yet Vinge is behind the times in predicting it now.


You can purchase True Names at Fatbrain. Want to see your own review here? Read the review guidelines first, then use Slashdot's webform.

True Names

  • by Anonymous Coward
    but I can't be seen buying/reading books with "cyberspace" in the title.
    • by Frank White ( 515786 ) on Monday January 14, 2002 @12:47PM (#2836953) Homepage Journal
      Science fiction was exploring "the Net" in print long before it became the subject of fiction in Hollywood movies and on television news. Indeed, the science fictional imagery of "cyberpunks" seems to have been a model for many online venues, as well as for a goodly portion of the online population. But very few SF writers accurately portrayed today's Internet--or even the virtual reality Net that seems to be looming on the horizon. Almost none showed the real, day-to-day concerns of inhabitants of their fictional Nets.

      Most readers will be familiar with the early cyberpunk novel Neuromancer, by William Gibson. In it, as in many movies, the Net is portrayed as a sort-of monster video game populated by self-made superheroes. On a more realistic level, but still making dazzling use of virtual reality in cyberspace, is Vernor Vinge's novella, "True Names." First published in 1981 and thus predating the Gibson work and other cyberpunkia, "True Names" focuses in part on the obsession online hacker communities have with keeping their real names--their true names--a secret. In fact, the plot hinges on it; hence the title. In Vinge's world of hackers, for someone else to know your true name was to lose all, and to be at the mercy of those who knew your true name.

      As you know if you've read Chapter 1, anonymity is every bit as vital a concern to contemporary hackers as it was to the hackers in "True Names." It should be a vital concern to you, too, in certain situations.

      "True Names" is also a good starting-point for understanding a bit about the hacker culture and related Web communities. In short, it is a "must-read" for anyone who uses the Internet.
      • Since we're talking Vinge, I will make a blatant ad for the fact that I have put up 6 copies of the now rare "1993 Hugo and Nebula Anthology" CD-ROM up for Charity Auction on eBay.

        This CD is the world's first major eBook project with current fiction, and became famous because the novel "A Fire Upon the Deep," is in hypertext, linked to 400kb of the author's notes, written as the novel was being developed. You really get to see Vinge's mind at work.

        Anyway, Vinge fans love these, so from time to time they show up for charity auction. In this case, the proceeds go to support the Electronic Frontier Foundation (EFF) [eff.org], which you can read about elsewhere on /. today.

        To see the auctions go to my eBay page [ebay.com] or to the CD's web page [templetons.com]

  • In a nutshell (Score:3, Insightful)

    by -brazil- ( 111867 ) on Monday January 14, 2002 @12:27PM (#2836830) Homepage
    "I suppose when it gets to that point, we shan't know how it does it."

    --- Alan Turing
  • Was the first Vinge book I read, back when it was serialized in Analog. Bobbles, life after the infocalypse, and a mystery. Good read.
    • i'd have to agree, i just finished the book (out on millennium in the uk) called 'across realtime' and i loved it. it's basically two books: 'the peace war' (pre) and 'marooned in realtime' (post). If you haven't read the peace war, i'd recommend picking it up.
      • If you haven't read the peace war, i'd recommend picking it up.

        Agree wholeheartedly. Having received "Across Realtime" for xmas, I've just finished "The Peace War", and am currently half way through "Marooned In Realtime". The Peace War, in particular, is an outstanding work. Recommended to all. It's the first Vinge I've read, and on the basis of what I've found, I'll definitely be getting more...

  • It's because it's a 20-year-old book about computing.
    Oh, and that stuff about terrorists and horses - well, people have been going on about spooks and terrorists and civil rights for years, and they always will.
    "Written before recent events"? Well, 1999 may be before September 11th, 2001, but what he's describing ("Just a few really ghastly terrorist incidents would be enough to cause a sea change in public opinion") is old news to anyone who's been paying attention to how governments (around the world) get the laws they want.

    http://abcnews.go.com/sections/us/DailyNews/jointchiefs_010501.html
  • A common mistake (Score:3, Insightful)

    by Peter Dyck ( 201979 ) on Monday January 14, 2002 @12:33PM (#2836882)
    when humans create a machine intelligence that is smarter than we are

    I wonder why this same mistake comes up time after time in sci-fi.

    Having superior raw intelligence doesn't mean anything performance-wise. Yeah, you might be able to carry out a perfect logical deduction in a nanosecond but that doesn't make you creative or give you the intuitive ability we humans have to skip over irrelevant facts.

    At least Star Trek got it right when Spock uttered one of its most insightful lines: "Logic is the beginning of wisdom, Valeris, not the end."

    • by Sloppy ( 14984 ) on Monday January 14, 2002 @01:07PM (#2837078) Homepage Journal

      Yeah, you might be able to carry out a perfect logical deduction in a nanosecond but that doesn't make you creative or give you the intuitive ability we humans have to skip over irrelevant facts.

      The whole premise of the singularity thing is that real AI will be achieved. That means that "logic" stuff wouldn't just be done in nanoseconds, but insightful and creative stuff as well. I may be mistaken, but I suspect most AI researchers believe that creativity itself ultimately reduces to "mere" logic (possibly combined with interesting stimuli).

      It's the "interesting stimuli" thing that's got me wondering. Whenever I do something creative, it's never really 100% from within; it's just that I have some kind of "different take" on something else in the world that I have seen/heard/etc. If that's how creativity works, then AI creativity becomes an I/O bound problem, so thinking in nanoseconds doesn't help if you're waiting all the time to see things to get you started. OTOH, you can add lots of inputs to a computer -- much more than a human can ever have. The computer, instead of just having my limited ears and eyes, could have the whole internet at its disposal (and process it a lot faster than a human), all the street cameras in London, etc. It might think up some pretty wacky and very high-level stuff compared to us little people.

      • My reading of Vinge's essay on the singularity and his book "Marooned in Realtime" was that he considered Intelligence Amplification (IA) a much more likely route to the post-human era than Artificial Intelligence (AI).

        I couldn't agree more. AI requires that we get to the bottom of how intelligence works then do a better job of it than evolution did. Not bloody likely any time soon. IA only requires that we stumble upon enough of the working principles of our brains to enhance them with machines/computers. FAR more likely. We have had some very interesting experiments in recent years decoding the signals a cat's eye sends to its brain, and interfacing neural networks with a lobster's nervous system to control the parts that move food particles along, just to name two.

        What progress has been made in AI that can compare with that?
      • believe that creativity itself ultimately reduces to "mere" logic (possibly combined with interesting stimuli).

        Are those AI researchers living before or after Gödel?
    • A common mistake (Score:2, Informative)

      by yooden ( 115278 )
      I take it you refer to Vinge's text on the Singularity. You should read it, though: he never says that a machine intelligence is inevitable, but that a super-human intelligence is inevitable. That could also be achieved through genetics.

      One of the best ideas in this respect is the Focused, from Vinge himself.

    • I wonder why this same mistake comes up time after time in sci-fi.

      Having superior raw intelligence doesn't mean anything performance-wise. Yeah, you might be able to carry out a perfect logical deduction in a nanosecond but that doesn't make you creative or give you the intuitive ability we humans have to skip over irrelevant facts.


      Exactly... people see an exponential graph, and try to extrapolate the point where the line becomes vertical, but it's just a trick of the eye... the "singularity" is always just around the corner, and never shows up (unless you log-plot, in which case it's obvious that growth is more or less constant.) Sure, we have ever more incredible CPU resources at hand, but the (interesting) problems grow exponentially (or nearly so). Things are moving as slowly/quickly as they ever were...

      And as for AI, have there been any advancements in that field that aren't:

      a) a parlor trick like ELIZA,
      b) happening "Real Soon Now", or
      c) a plain hoax?

      --
      Benjamin Coates
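The log-plot point above is easy to check numerically: on a log scale a pure exponential is a straight line with constant slope, so there is no point at which it "goes vertical". A quick sketch (illustrative only; the two-year doubling period is just a stand-in for Moore's law):

```python
import math

# A Moore's-law-style exponential: capacity doubles every two years.
years = list(range(0, 41, 2))
capacity = [2.0 ** (t / 2.0) for t in years]

# On a log2 scale the curve is a straight line: the slope between
# consecutive samples is constant (one doubling per two-year step),
# so the curve never "becomes vertical" anywhere.
log_caps = [math.log2(c) for c in capacity]
slopes = [b - a for a, b in zip(log_caps, log_caps[1:])]
print(slopes)  # every entry is 1.0
```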
  • Stallman's piece has no place in a book like this. It is a poorly written fable, and while it may be somewhat instructive, it's nothing that I ever hoped would be published on paper. Compared to the other selections in this book that I'm familiar with it doesn't seem to warrant inclusion.
    • Well, it certainly is more plausible than the other stories, and it has actually come true to an extent (Dimitri, anyone?); predicting the future correctly is the ultimate test of any science fiction story.
    • I agree that it's poorly written, but I think if fleshed out by someone with writing skills it could be a good story. The seed is definitely there.

  • by AlephNot ( 177467 ) on Monday January 14, 2002 @12:35PM (#2836891)
    I should note that the Singularity refers not just to the creation of greater-than-human intelligence, but to what happens when such intelligence is let loose in the world and begins to enhance its own mind. From http://sysopmind.com/singularity.html [sysopmind.com]:

    ((begin quote))
    If computing speeds double every two years, what happens when computer-based AIs are doing the research?

    Computing speed doubles every two years.
    Computing speed doubles every two years of work.
    Computing speed doubles every two subjective years of work.

    Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again.

    Six months - three months - 1.5 months ... Singularity.
    ((end quote; emphasis in original))

    If you're still interested, check out http://www.singinst.org [singinst.org].
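For what it's worth, the arithmetic in the quoted passage is a geometric series: if the first doubling takes two subjective years and each subsequent doubling takes half as long, the total wall-clock time consumed converges to a finite four years. A minimal sketch of that sum (the function name and parameters are purely illustrative):

```python
# Sketch of the shrinking-doubling-time series quoted above.
# Assumption: the first doubling takes 2 (subjective) years, and each
# successive doubling takes half the wall-clock time of the previous one,
# because the AI doing the work is now twice as fast.

def time_to_singularity(first_doubling=2.0, n_doublings=50):
    """Total wall-clock years consumed by n successive doublings."""
    total, interval = 0.0, first_doubling
    for _ in range(n_doublings):
        total += interval
        interval /= 2.0
    return total

print(time_to_singularity())  # approaches 4.0 years, never exceeds it
```

The series 2 + 1 + 0.5 + ... sums to 4, which is what makes the "Singularity in finite time" framing work on paper.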
    • Please correct me if I'm wrong, but doesn't "the singularity" simply refer to the point in time where something happens which changes things so fundamentally that we can't accurately predict what things will be like beyond that point? What that something is doesn't necessarily have to be AI, though at this moment it appears to be one of the most likely occurrences.
      • I've always seen the idea of singularity in the context of the dialectical tradition (Hegel, Marx, and maybe Darwin) - just like we cannot get information from beyond the event horizon of a real singularity, the singularity as a historical moment is that point past which by definition we can not comprehend. So attempts to discuss what it is like, what its substructure is, and so forth, are by definition futile.
    • I think he's right in principle, but just guessing as far as the actual timing goes. While computer speeds will continue to improve and may improve to staggering levels (e.g. one gram of computer doing the raw calculations of all human brains on the planet), that won't be enough to get the singularity started.

      what happens when computer-based AIs are doing the research?

      That's the key. The computers themselves are not going to be able to begin to contribute to improving computers, until some human, somewhere, finally figures out how to build an AI. And no one, not even Vinge, knows when that is going to happen. None of the Moore's Law stuff matters until some Computer Scientist has a breakthrough and figures out a way to use all that power.

      • While your observations on AI are certainly on the mark, I have to think that simply having a computer sufficiently powerful to model a running human brain is a prerequisite for discovering how to build an AI. Maybe we haven't been able to pull it off yet (at least partly) for lack of this infrastructure. Things might move along pretty quickly once we have a computer powerful enough to run a realtime simulation of a working brain.


        Thoughts?

      • This is a common mistake. (I'm fairly sure that it's a mistake.) Consider compilers. Faster compilers result in code being created faster. So do compilers that are distributed more widely.

        Now this is clearly significant, but then the tools don't debug themselves. So you create a debugger, and development speeds up again.

        There's clearly a limit. People are in the loop, planning, designing, and slowing things down. So you look for where things are slow, and you automate that. Say you build an automated cross reference tool. Or a self-organizing data dictionary. Eventually you get to the point where people are designing the UML diagrams, and the rest is automated. So you look for where things are slow, and automate that...

        I believe that if you were to map the generations of tools you would find that this speedup is already in progress. No AI is needed. People are just moving higher and higher up the coding tree. (I once programmed in assembler ... but for the last 6 computers I haven't bothered to learn how. I once programmed in C, now I prefer Python, though I still drop into C on occasion. That seems to be decreasing.)

        The problem is that moving "higher and higher up the coding tree" can also be said "become more and more peripheral". And AI hasn't appeared anywhere in the process.

        Also: Lots of things that were once defined as "AI" aren't called that anymore. Voice recognition is starting to fall. Logic has fallen. Simple game playing ("that's just alpha-beta pruning of a span searched game tree with a decent estimator"). etc.

        AI probably doesn't exist, in any absolute sense. But in the same sense, one may wonder whether or not human intelligence exists. Certainly, lots of times the way people make decisions causes one to suspect that they are operating on a rather limited computational budget, with a bunch of fancy pattern-recognition processes feeding them info. Otherwise, how can one explain some of the really stupid decisions that people make? E.g.: In my case, why did I eat that jelly doughnut? I knew that it was a bad idea. I wasn't hungry. I knew at the time I wasn't hungry. But I went ahead and ate it anyway. I knew ahead of time that I would disapprove of the decision later. Intellectually, I disapproved at the time. But I did it. This wasn't intelligence acting in any reasonable definition of the term. But it's the kind of decision that people all too frequently make. So it's a reasonable question: just how much "intelligence" do people actually have, outside of the categories that computers already exhibit? And what do you mean by intelligence, anyway?

    • Artificial intelligence is always thirty years into the future. Thirty years ago, people expected to have it by the year 2000; now people expect it by the year 2030. I am sure that in 2030, they will expect it in 2060, and so on.
    • No matter how fast a computering speed is, it will never exist the speed of light. Instead of assuming an exponential growth (2^n), a better assumption is to assume an s-curve growth. As technology development approach the speed of light or whatever reference it is based on, the speed will slow down, like the same way it speeds up. At that point,

      Computing speed halves every two years.
      Computing speed halves every two years of work.
      Computing speed halves every two subjective years of work.

      Eventually, the technology will become common knowledge and stabilize, perhaps even be abandoned for a better alternative.

      Whether AI will approach human intelligence by that time is still an open question.
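The s-curve growth described above is conventionally modeled as a logistic curve: it looks exponential early on, then flattens as it approaches a ceiling, so the growth over a fixed interval collapses toward 1x past the inflection point. A rough sketch (the ceiling, rate, and midpoint are made-up parameters, not measurements):

```python
import math

def logistic(t, ceiling=1000.0, rate=0.5, midpoint=20.0):
    """S-curve: roughly exponential early on, saturating near the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Growth factor over the same two-"year" window, early vs. late:
early = logistic(6) / logistic(4)    # far below the ceiling: fast growth
late = logistic(36) / logistic(34)   # past the inflection: nearly flat
print(round(early, 2), round(late, 4))  # roughly 2.72 and 1.0006
```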
      • No matter how fast a computering speed is, it will never exist the speed of light.

        (I assume you meant "it will never exceed the speed of light.")

        Um, I don't mean to be patronising (liar!) but you don't seem to understand the meaning of "speed of a computer". Speed refers to how many calculations of some sort a computer can sustain per unit time. This is only distantly related to measuring the amount of linear displacement per unit time.

        I mean really. Are you worried about your computer wandering around your desk at 300000000 meters per second?

    • by Anonymous Coward

      Uh oh. Why oh why didn't I choose the red pill?
  • by Mentifex ( 187202 ) on Monday January 14, 2002 @12:43PM (#2836927) Homepage Journal

    The best and most terrifying document about Artificial Intelligence [sourceforge.net] that I have ever read is Vernor Vinge on Technological Singularity. [caltech.edu]


    Consequently when my independent-scholar Open Source AI project [scn.org] caused me to be interviewed about it by Nanomagazine, [nanomagazine.com] I made sure in my low-status interview [nanomagazine.com] to refer hero-worshippingly to on-high Vernor Vinge, [nanomagazine.com]whom I thank for lighting the path for all us AI geeks. [sourceforge.net]

    • Jesus Christ, Michael, what the fuck were you thinking when you posted this story? I knew this asshole Mentifex would blow his load when he saw the perfect opportunity to spam us with his AI project. Maybe he's not offtopic this time, but he's sure as hell a troll.
  • by LazyGun ( 138083 ) on Monday January 14, 2002 @12:45PM (#2836942)
    This is as complete and accurate an etext of the 1984 edition of True Names:

    True Names by Vernor Vinge [gatech.edu]
  • by why-is-it ( 318134 ) on Monday January 14, 2002 @12:54PM (#2836990) Homepage Journal
    the Singularity, the point in time when humans create a machine intelligence that is smarter than we are.

    How exactly do you define intelligence?

    If you define it in terms of capabilities, computers are already faster at calculations than any human could be. In the absence of a clearly defined standard of what constitutes intelligence, how do we know what the Singularity IS, much less when we get to that point?

    The Turing test for Artificial Intelligences was once considered the benchmark as to whether the goal of creating an artificial intelligence had been achieved. Now, the test is considered insignificant because what it measured (the ability to understand and respond to human natural language) was merely one aspect of intelligence, and apparently not the most important indicator of intelligence.

    There are literally hundreds of IQ tests out there, and they all measure something, but the question is, do they measure what they claim to, and is the thing being measured an appropriate indicator of what you are looking for? If the experts (psychologists) can't come to some sort of consensus, what hope do the rest of us have of figuring it out?
    • by IvyMike ( 178408 ) on Monday January 14, 2002 @01:36PM (#2837282)

      How exactly do you define intelligence?

      If you define it in terms of capabilities, computers are already faster at calculations than any human could be. In the absence of a clearly defined standard of what constitutes intelligence, how do we know what the Singularity IS, much less when we get to that point?

      You can question whether or not it's truly "intelligent", but when Skynet sends the T1000 killbot after you, this argument will seem pretty academic.

      • Well, this isn't so far from the truth.

        There are some video games which have some pretty brain-dead AI - but you still lose in the end, to superior forces. Either they have more firepower, or are more maneuverable, or better armor, better "intelligence" (information), or numbers.

        In any case - we're focussing on the wrong aspects of the "threat machine" - intelligence isn't really all that necessary. In fact, I think that the intelligence required to be a threat to humanity already exists.
        Even the "power" aspect already exists. Automated warfare. ICBMs. Cruise Missiles.

        What does not exist is self-sufficiency.
        When someone can create the mechanical apparatus for current machine intelligence, that can build, repair, service, and power a suitable weapons platform, we (humanity) are toast.

        Right now, it would definitely take concerted, deliberate innovation to develop a "threat" along these lines.

        What the fear is: in the future, the intelligence will exist in the machine world that can become self-sufficient in its own right. THAT is where we'll need to start worrying.
      • I can't believe how many people still make this mistake. The human brain has much more raw processing power than a computer. However, it is fantastically optimised for vision. When the human mind performs mathematical / logical calculations, it typically does so by transforming them into word / thought pictures, processing, then transforming back.

        If the brain used the complexity of neurons required to actually calculate directly, we could easily thrash computers at maths / logic / chess / anything that doesn't require vision.

        Computers as we have them have about the same raw processing power as an insect. Or possibly a small fish. Go and read some Hans Moravec. As complexity increases in the biological nervous system, so do higher-order computational abilities.

        This includes such factors as emotions, instincts, communication, vision, and of course consciousness.

        Of course computers are still incapable of proper emotion / self awareness. In terms of raw power, they're a beetle.

        On the other hand, we do know that evolution of nervous systems (either biological or silicon) does follow an exponential curve, jumping over ice ages, climate changes, predators, environmental disturbances, transistors, lithography, etc.

        And the current calculations show that the evolution of the silicon nervous system is happening about 100 million times faster than the one based on cells and adenosine triphosphate as a power source.

        Moore's law is just a quantisation of what anyone who's studied the evolution of the biological nervous system already knows.

        Honestly, do you really believe that the medium has anything to do with it? Consciousness et al come from the immense complexity of interactions, combined with an environment that encourages such abilities.

        mick
    • The entire point is that if the Singularity occurs, it will be like being hit across the head with a 2x4... while not a one of us could come up with a precise definition of "hit" (versus "tap", versus "nudge", etc.) in numeric terms of Newtons and radians, you know it when it happens to you.

      Similarly, when dealing with an intelligence that can meaningfully modify itself to its own improvement, academic questions over whether it's "ten times smarter than an unassisted human" or "eleven times smarter than an unassisted human" aren't even academic; they're just meaningless. The point is that that level of intelligence is entirely unlike any experience we have from the past with which to judge the future.

      If the Singularity occurs, the "common man's" (which will be in short supply by then) problem will not be figuring out whether the machine intelligence is a billion or a trillion times more intelligent than he is; it will be trivially obvious that the intelligence is orders of magnitude greater by ANY measurement. The problem will be avoiding being left behind, or, alternatively, trying to be left behind. (Either way the choices could well be constrained.)

      (I personally don't think it will happen, for multiple reasons not worth going into, but those who understand the concept and its proponents IMHO have a clearer understanding of what the future implies than those of us who think that life will largely continue as it has; frankly, it hasn't been working THAT way for a couple hundred years already, so why we persist in that notion is beyond me. Well, actually it's not beyond me, I understand it well, but the rhetoric stands.)

  • This is how someone I've forgotten (perhaps someone here on Slashdot) summed up Vinge's notion of the Singularity. It's an accurate description.

    I think it's all nonsense, anyway. Various technological revolutions have come and gone, and we humans are pretty much the same as we always were, as any student of history, ancient literature, or anthropology could tell you.
    • Ken MacLeod. "The Singularity is just the Rapture for geeks."
    • Various technological revolutions have come and gone, and we humans are pretty much the same as we always were, as any student of history, ancient literature, or anthropology could tell you.

      And that, my obtuse friend, is the point. We humans are limited in the speed at which we can adapt to change due to the biological matrix from which we came and the social matrices built upon that limited biological matrix. What happens when we introduce a new species that is liberated from that matrix? The point is that we cannot comprehend this new species any more than a chimpanzee could comprehend the human world.

      Now compound this with the fact that the new species' evolution is no longer limited by biological speed, but that they start to redesign themselves at an accelerating pace. Now imagine trying to maintain your society, or even your existence, in the face of this new species.

      If we are lucky, they will allow us to stay around, perhaps giving us shiny trinkets that will make us happier and healthier. Perhaps, they'll pay as little attention to us as we pay to "lesser" species. And, perhaps, they'll like cows better than us for their pets and just exterminate us because we keep trying to eat them.

      Singularity does not even begin to describe this...

    • I, for one, don't believe that we'll ever create AI. Not if the definition of "AI" is a perfect duplication of the human mind - consciousness. I don't think it will ever be possible. Hell, thousands of years of philosophy, and we still can't really define what human consciousness is.

      On the other hand, I *do* believe that one day, computers will be "smart enough" to pose a threat to our existence. They won't even have to be "conscious".
    • ... we humans are pretty much the same as we always were, as any student of history, ancient literature, or anthropology could tell you.
      We humans may still be the same, but conditions now are different. The big difference is that for the last five hundred years or so, we've been collectively building a description of the world that continuously checks itself against reality, and doesn't backslide (new descriptions have to be consistent with all previous observations). We use that description to build tools to control our environment and improve our lives.

      Before empiricism and the scientific method, change was so slow and so random that most people never saw it during their lifetimes. Today, people take exponential progress in computer hardware performance as a given, and expect visible change in their surroundings multiple times during their lives.
    • Various technological revolutions have come and gone, and we humans are pretty much the same as we always were

      Exactly. "No matter where we go, there we are." Intellectually speaking, human hardware today is pretty much the same as it was a hundred or a thousand years ago.

      The Singularity is what happens when the driving force behind progress, that is, intelligence itself, learns how to improve itself at the exponential rate we've seen in computers. A true AI could do it, very likely so could humans augmented with ordinary 'dumb' computers. This new creature would change not only the world around it but itself as well. A faster processor doesn't just mean it can play Quake with twice the framerate, it can literally think faster, and then it can design the next processor even sooner, etc, etc.

      How long would it take for something like that to become completely incomprehensible to anything that existed before it? That is the Singularity. It is the point where simple extrapolation of what a self-improving intelligence will be capable of makes no sense anymore. The only way to understand it, to know what's on the other side, is to go through it.

      • I'm amazed at how many geeks buy into this without question! It's pure mythology that Vinge dreamed up. I love his novels, but the "Singularity" is indeed the Rapture for atheists. People who lack a belief in any form of supernatural transcendence still have a very human hunger for it...and so they create a version of it that superficially conforms to their worldview (science, or more accurately "scientism," the belief that science is the ultimate way to describe all reality).

        No one has ever offered any shred of proof of the Singularity scenario. AI research has not gotten very far, and we can't even agree on what "intelligence" or "consciousness" are. There is NO evidence that we will create "AIs," whatever they are (here's hoping they look nothing like Haley Joel Osment :) ), and there's no evidence that they will start some sort of ever-accelerating self-improvement process, culminating in some kind of movement into a higher plane of existence.

        I haven't read any of Vinge's "non-fiction" writings on this subject, but I must say it sounds suspiciously like something that Stanislaw Lem dreamed up a few decades ago. He has some stuff (in a book called _Imaginary Magnitude_) about a series of self-aware computers called the GOLEMs. Each new GOLEM is capable of higher forms of thought, and creates new languages and metalanguages to convey its ideas. Finally the GOLEMs are communicating with each other on a level totally inaccessible to humans.

        But again, none of this is SCIENCE. There is no hard basis for any of it, folks. It is FICTION. More accurately, it is MYTHOLOGY...mythology for our technological society, especially for a technologically-obsessed subculture of it. I enjoy it as such, in the same way I enjoy reading alt.conspiracy -- as a student of human nature and culture, not as a "true believer."

        • There is no hard basis for any of it, folks.


          There is no hard basis for denouncing it as a myth either :)
        • I've heard of the singularity applied in ways other than just AI. What about our current research into nanotech or biotech? There is a point where our own science will be WAY beyond our own control. Take, for instance, Greg Bear's "Moving Mars". If we had technology like that, how would we cope? A singularity is not about moving into a higher plane of existence; it is a point we cannot see, or see beyond, because reality at that point moves in such a radical way that humankind can't conceive of it. Imagine if it had taken ten years to get from the Stone Age to today!

          "But the local net at the High Lab had transcended - almost without the humans realizing. The processes that circulated through its nodes were complex, beyond anything that could live on the computers the humans had brought. Those feeble devices were now simply front ends to the devices the recipes suggested. The processes had the potential for self-awareness . . . and occasionally the need."
          Vernor Vinge, "A Fire Upon the Deep"
        • there's no evidence that they will start some sort of ever-accelerating self-improvement process

          Of course there's no evidence, it hasn't happened (yet). You could just as easily say there's no evidence that research into manipulating human genetics will ever prove useful. Or the space industry will ever be anything beyond a few shuttles and satellites. Or fusion power will ever be anything more than a toy for scientists with grants. We haven't done it yet, but that doesn't mean we rule it out forever. All we can do is look at what we've managed so far and what we think we can do tomorrow and judge how likely a certain technology is.

          No one has ever offered any shred of proof of the Singularity scenario

          The singularity is an extrapolation from current trends. When Moore's Law was first put forth, the extrapolations from it probably seemed downright stupid. "Billion calculations per second processors in every home? Absurd!" Yet even that level has been surpassed and shows no signs of slowing down. If a true AI is not possible or if we never develop mind-machine interfaces (the two most promising agents for it), then it might indeed be unlikely. We'll probably get something useful out of the genetic aspect of intelligence (spare me the histrionics about the complexity of the genome), but I'm not sure how far you could run with it.

          But given some of the advances in combining silicon and wetware, it's not unreasonable to think that something will come of it. From that assumption, you can extrapolate and speculate. We think, and again, it's not an unreasonable assumption, that one of the biggest bonuses of this kind of tech would be ability to improve intelligence. This accelerates the extrapolations and eventually you get abilities and intelligences that are so far beyond anything we can imagine today they make no sense.

          For the record, I'm rather offended by you telling me that I naturally 'hunger' for the supernatural and that science is just another opiate for the masses. Neither is true and you can stop touting it as fact.

          He has some stuff (in a book called _Imaginary Magnitude_)

          Really? Cool. Got any more info on that book (or was it a short story?); Amazon and a couple other such sites don't list it.
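The Moore's Law extrapolation above is easy to make concrete; here's a minimal sketch, assuming the oft-quoted 18-month doubling period and a purely illustrative 1965 baseline of 60,000 operations per second (both numbers are assumptions for the sake of the arithmetic, not measured benchmarks):

```python
# Rough Moore's-law extrapolation: compound doubling from a 1965 baseline
# to 2002, to see how quickly "a billion calculations per second" stops
# sounding absurd.

def extrapolate(start_ops, years, doubling_period_years=1.5):
    """Return ops/sec after `years` of doubling every `doubling_period_years`."""
    doublings = years / doubling_period_years
    return start_ops * 2 ** doublings

ops_2002 = extrapolate(60_000, 2002 - 1965)
print(f"{ops_2002:.3g} ops/sec")  # comfortably past a billion
```

Thirty-seven years of 18-month doublings turn even that tiny baseline into roughly a trillion-plus operations per second, which is the point of the comment: the extrapolation sounded stupid and then came true anyway.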

  • by crumbz ( 41803 )
    I just ordered this from my local sci-fi bookstore, "The Stars are Destination", as they sold out of their original order. I could have ordered from Amazon, but a physical space to browse for books is a great thing, and I am happy to pay extra for it.

    Of course, once we have true tele-presence, my attitude may change.
  • was "A Deepness in the Sky". I remember noticing something funny with the book very soon. But it took me until the smart dust to recognise what it was: All the CompSci was correct! (I am a Computer Scientist.) And the book was still good.

    Not often that you encounter that combination.
  • The link, http://isbn.nu/0312862075 listed in the main post attributes a novella titled "Neuromancer" to Mr. Vinge. I think they must be confused. William Gibson's novel by that name has been compared to Vernor Vinge's (earlier) work, "True Names" but is in no way related to Mr. Vinge.

    Unless there's some other connection of which I'm unaware...
  • Having to read a book containing the rantings of both Vinge and RMS must be torture. Although the presence of alternate voices in the world must be seen as an advantage, Vinge in particular is truly paranoid. When my computer starts manufacturing robots with which to take over the world, I'll simply turn it off.
      When my computer starts manufacturing robots with which to take over the world, I'll simply turn it off.

      If your computer is smart enough to dominate the planet it'll be smart enough to get around a simple-minded tactic like pulling the plug.

    • When my computer starts manufacturing robots with which to take over the world, I'll simply turn it off.

      That's why i don't trust these newfangled ATX power supplies. "Soft Power" my ass!

      ...it'll be a cold day in hell when i let my computer control the big red switch...
      --
      Benjamin Coates
    • When my computer starts manufacturing robots with which to take over the world, I'll simply turn it off.

      Fine, as long as it's "your" computer. If it failed to understand that vulnerability, it wasn't destined to take over the world.

      I remember an interview with William Gibson. Roughly:

      Q: If computers get too smart will they take over the world? Or will we have to pull the plug?
      A: First, they already are too smart. Second, why would any street-smart computer give you a plug to pull?

      In other words, computers make alliances with humans before attacking other humans. If you decide to "pull the plug" on a machine housed in a private locked room at Abovenet, you've got a major and risky task on your hands.
  • If it's smart, it'll keep its mouth shut.

    Probably right after it reads up on the Salem Witch Trials, or watches Terminator 2.
  • by spiro_killglance ( 121572 ) on Monday January 14, 2002 @01:42PM (#2837326) Homepage

    Why would superhumanly intelligent AIs be developed before the technology for uploading human minds into a computer?

    Evolution has been working for nearly a billion years, with a population count of billions upon billions, on producing intelligence, and only once, at the end of all that time, has it produced intelligence. Hopefully it's just plain hard to produce a general-purpose self-aware intelligence, and human uploads will be the ones running a post-Singularity world.

    On the other hand, what would be the difference between a human upload and a constructed AI? Humans have been known to think and do practically anything, no matter how wacked out. Would an AI be any different? In general, bad ideas mean bad actions, and human programming comes from your environment, not genetics.

  • by schepers ( 462428 ) on Monday January 14, 2002 @01:43PM (#2837328)
    For fans of Vernor Vinge (as I am), you can pick up his new anthology, The Collected Stories of Vernor Vinge, reviewed here [scifi.com] and here [sfrevu.com]. Amazon has both [amazon.com] of these books for sale (though at a spurious "special low price").

    "True Names" is one of two of his stories not included in the collection, sadly. I don't know what the other one is.
    • "True Names" is one of two of his stories not included in the collection, sadly. I don't know what the other one is.

      The second is part of the fix-up novel Tatja Grimm's World, and really makes more sense as part of that novel than as a stand-alone story.

      In addition to the two stories (which are both available elsewhere as noted above), there's also a page missing from the collection: see this page containing the missing text [ucsd.edu].

  • "...written in 1981, and forecast many aspects of the internet..."

    What incredible foresight he must have had to visit his local university or research company and see how the Internet had been working for 15 years!

    Hey, maybe I should write a book "predicting" the Mars Rover or Microsoft monopoly.

  • Now imagine this... A Supercomputer with a superinteligence ... set to conquer the world.

    And after a while there's a blip and he vanishes. You then realise that he was Windows-based.

    The world rejoices and presents eternal thanks to Bill Gates for its wonderful OS that helped to save the world...

    What a nightmare... :)
  • by FreeUser ( 11483 ) on Monday January 14, 2002 @01:59PM (#2837438)
    the Singularity, the point in time when humans create a machine intelligence that is smarter than we are.

    This is an inaccurate characterization of the singularity. The singularity refers more generally to that point in time where technological growth and change are happening so quickly (in other words, where the exponential curve steepens toward the vertical), that we don't know what happens.

    It could be a superintelligent computer that wipes out or enslaves humankind. More likely, it will involve some sort of transformation of humanity into something else (presumably enhanced, perhaps godlike or simply so different as to be unrecognizable). Examples of such possibilities include the merger of human and machine intellect (e.g. wiring the cortex directly to a powerful computer, or perhaps the internet itself), the emergence of a group intelligence (we all become one mind as we wire ourselves together to the internet, perhaps like the borg, perhaps not), self-modification of our physical forms beyond our current recognition (or ability to imagine), ditto for our intellects, loading our minds onto machines directly and turning our back on the physical world in favor of an existence in some form of virtual reality, and so on.

    "The Singularity" has many, many positive connotations as well as many negative connotations, the underlying factor is that what will happen is an unknown, what form it will take (or what forms, as it is quite possible, even probable, that we will diverge in many directions as the means and technologies to do so become available) is an unknown, and when exactly it will happen is an unknown. Assuming such stifling things as patents, Bill Joy-proposed types of restrictions on research, and other governmental or corporate restrictions on technological progress do not play a deciding role we can be certain of only one thing: the singularity will happen very rapidly, perhaps in a time period of days, hours, or even minutes.
      "The Singularity" has many, many positive connotations as well as many negative connotations, the underlying factor is that what will happen is an unknown, what form it will take (or what forms, as it is quite possible, even probable, that we will diverge in many directions as the means and technologies to do so become available) is an unknown, and when exactly it will happen is an unknown. Assuming such stifling things as patents, ...

      Hey, but that gives me an idea. What happens when computers, brains, and software are all the same thing? "You're running illegal software on your brain! We're here to confiscate your brain as evidence! Drag your internal icon to the Start menu and click shutdown immediately!"

      LOL

      C//
  • Cool (Score:3, Insightful)

    by QuantumG ( 50515 ) <qg@biodome.org> on Monday January 14, 2002 @02:18PM (#2837541) Homepage Journal
    old school cyberpunk repackaged for the youngens.
  • Obsoletion (Score:4, Insightful)

    by Bobtree ( 105901 ) on Monday January 14, 2002 @02:41PM (#2837667)
    Having placed my order for this book about 2.5 years ago, I was totally surprised to receive it last month (during finals week, damn!) and accordingly devoured it.

    Vinge is visionary, but what's always struck me about the Singularity is not what it says about the future, but what it says about our world today.

    I grew up reading every scrap of cyberpunk I could get my hands on (and now I read Pynchon, so don't get me started), and knowing, not just imagining or believing, but knowing that modern man is soon to become obsolete. Everything about us is on the verge of becoming mere history to offspring quite unlike ourselves. I don't mean it in a bad way; in fact I quite look forward to it and don't doubt that we'll make a leap of our own, but in the back of my mind it's always underscored a certain thread of despair. We seem to be at that tricky point where we've outlived our usefulness, indeed we have quite literally won the game (evolution) that brought us here, but haven't yet been replaced or upgraded. It gives life a certain quaint feeling, and a curious urge to transform now or self-destruct (yes, go watch Fight Club again).

    This world isn't quite mine. I think we're waiting for ours to arrive.

    Sometimes I wonder why no one (other than Vinge) is writing post-Singularity science fiction.

    In the meantime, go read True Names, then read the essays and read it again.
    • > I grew up [...] knowing, not just imagining or believing, but knowing that modern man is soon to become obsolete.

      That seems like an odd comment when praising Vinge's idea of the Singularity. By definition, the Singularity is the point past which all predictions and speculations fail, due to an epistemological--and possibly metaphysical--shift. I think that "believing" is a better word. "Just knowing" smacks of superstition.

      But your point about what the concept says about today is well taken. We do seem to live in a better/faster/cheaper world, even when talking about people. Personally, I'd love to be able to upgrade my software.
    • >>Sometimes I wonder why no one (other than Vinge) is writing post-Singularity science fiction.

      There are others out there writing post-singularity sci-fi, but you'll look hard to find them. They are almost like post-apocalypse sci-fi, except you don't know what happened. I read one example (sorry, can't remember the title or author) where an astronaut returns to Earth to find all of Earth empty. He goes on to discover that everyone disappeared in some sort of new-age transformation, but he cannot understand it (my memory of the end of the book reflects the singularity, sorry).
  • I see many comments here (and in other related slashdot posts) that are doubtful of the probability of a singularity event emerging from the widespread networking of silicon computers.

    Do we have some sort of permanent monopoly on intelligence? Consider whether or not alternative forms of intelligence (or, at the very least, of operating successfully in modern human society) are possible - regardless of whether the result behaves like a human.

    Software engineered by humans running on networked silicon computers will some day be able to write new software that can perform more complex functions. While this will initially be a humble process at best, it will be improved upon, just as the silicon architectures that software runs on now have improved dramatically over the years.

    Money and effort will be allocated toward the development of such a process, for more money and effort can be saved by it.

    When this level of accomplishment is reached (i.e. human engineering has made this possible), it won't matter that this software behaves in a way unlike the input and output of a network of human neurons.

    What will matter will be our interaction with and reaction to these process-beings which may (some day) surpass many capabilities of a single human. We had better think about how we may successfully operate along side such beings, for a world with them in it may be a much faster and more dangerous world than any we have previously built.
  • by Jack William Bell ( 84469 ) on Monday January 14, 2002 @02:57PM (#2837797) Homepage Journal

    Most anything I can bring to this discussion has already been said. Many of the posts on Vinge's concept of the Singularity are especially good. Although one point - Vinge said 'orders of magnitude' smarter than us, not just smarter than us - an important difference.

    But I do have an interesting tale about William Gibson and the origins of Gibson's 'Matrix' and Virtual Reality! Many people have noted that Vinge wrote 'True Names', with a complete description of an Internet and Virtual Reality, years before Gibson published Neuromancer. Often people wonder if Gibson had read 'True Names' and lifted the ideas wholesale from it.

    I once had an opportunity to bring this up with Gibson. A few years later I posted the story of this conversation to alt.cyberpunk as part of an on-going conversation there. Thanks to the wonders of Google you can read the original [google.com] or you can read on here for an edited version:

    Back in 1995 I had a girlfriend who was attending The Evergreen State College in Olympia Washington. One of her classes was a kind of on-going seminar thingy about 'cyberculture' and one of the seminar leaders was a teacher at that school named Tom Maddox who knew Gibson personally. Tom Maddox also wrote exactly one (not bad) cyberpunk novel, 'Halo', which is now available on the Internet [ecn.org].

    Anyway, one of the seminars featured William Gibson as a speaker (and apparently this is something he never does). My girlfriend made sure I knew about it so I could attend (they were open to the public, but attendance was required for the students). It was a strange 'talk'. Rather than Gibson giving a speech, it was presented as the 'Tom and Bill Show' with the two of them sitting across a table from each other and having a straight ahead discussion on whatever they liked. Incredibly interesting; like being a fly on the wall at someone else's bullshit session.

    During a break, right before the talk, Gibson was outside smoking a cigarette and looking about as out-of-place as you can get. I was probably the only one there that recognized him before he got on stage, because no-one was near him. So I took the opportunity to introduce myself and ask him a few questions. He was friendly enough, and didn't seem to take affront except for one question (which is why it sticks in my mind so well)... As you have probably already figured out, the question was whether he (Gibson) had read Vinge's story 'True Names'?

    He replied - rather tartly - that, so far as he could remember, he had not, but had read it since. He did acknowledge the several ways in which Vinge's story foreshadowed his own work (including the introduction of Virtual Reality, among other things). He also said that lots of people ask him that question, but he did not (and he repeated "NOT" very forcefully) steal Vinge's ideas, but rather invented them in parallel.

    Personally I believe him. I think the early eighties was 'Steam Engine Time' [google.com] for Virtual Reality and for Cyberpunk in general [altx.com].

    Jack William Bell

  • If you're a cheap SF-lovin' fanatic like me, another way to get "True Names" is by finding Dell Binary Star #5, a paperback that also has George R. R. Martin's super-fine SF horror story "Nightflyers."*

    Binary Stars are modeled after the legendary Ace Doubles, which feature two novellas and introductions by both authors. Like the Doubles, each author introduces the other guy's work instead of his own.

    Lucky & obsessive browsers might be able to find this in a used store--I did!

    *Yes, Virginia, Martin primarily wrote SF before he got around to the Big Fantasy genre. And he's done some great "intelligent space opera" a la Simak and Bester. Not as good as "Stars My Destination", but who is?
  • Fans of Vinge (or the merely curious) may be interested to know that he will be the Pro Guest of Honor at ConJosé [conjose.org], the 60th annual World Science Fiction Convention, to be held this August (Aug 29 - Sep 2) in San José, CA, USA.

    Memberships are a bit pricey at this point ($180 US), but include the right to nominate and vote for the Hugo awards (for those who were upset over Harry Potter's Hugo last year, this is your chance to help prevent a similar occurrence this year).

    Maybe I'll see some of you there. Cheers.
  • The singularity (http://sysopmind.com/singularity.html) is predicated on geometric progression gone wild. Computer speed doubles every x period of time forever. And when the brain doing the work is also doubled, then you have a runaway geometric progression. Pretty cool thought.

    But.....as they say, if something can't go on forever, it won't. Population explosions burn themselves out for lack of food. If you double your bet in the casino until you win, you'll wind up broke because you eventually double beyond your money supply.

    Intelligent articles about Moore's law speculate on how long the doubling can last, not on whether it can go on forever. So yes, maybe we are heading toward a singularity where smart machines can make ever-smarter machines. But this progression won't be a geometric singularity, at least not for long. It will become linear when some kind of limit we can't anticipate stops it. And if that limit hits sooner than we think, we might find that machines aren't getting smarter very quickly at all.
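    The casino analogy above is easy to check with a few lines of code (the bankroll and starting bet are arbitrary, illustrative numbers):

```python
# Martingale betting: double the bet after every loss. Even a modest losing
# streak exhausts a finite bankroll long before the "guaranteed" win arrives.

def losses_until_broke(bankroll, first_bet=1):
    """Count consecutive losses a doubling bettor can absorb before going broke."""
    bet, spent, rounds = first_bet, 0, 0
    while spent + bet <= bankroll:
        spent += bet   # lose this round's stake
        bet *= 2       # double the bet for the next round
        rounds += 1
    return rounds

print(losses_until_broke(1_000_000))  # prints 19
```

    Nineteen consecutive losses, at even money, are enough to break a millionaire: exponential growth runs into the resource limit almost immediately, which is exactly the comment's point about runaway progressions.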
  • There are lots of IRCisms in the True Names story.

    One of my favorite parts of the story is when the cops say "An honest citizen is content with an ordinary data set..." to the guy because he has way more computing power than a normal citizen.

    With the return of this book I soon expect to see IRC full of handles like DON.MAC, mailman, Mr. Slippery, Erythrina.
  • I think that's the first time I've seen the phrase "retired computer scientist." What kind of work did he start with? How old was he then? Did he do other work before that?

  • Vernor Vinge well deserves his reputation - the guy is a fantastic writer with bona-fide science knowledge and some Transcendent ideas.

    For some reason, here in the UK, you're lucky to find even one book by him on the shelves of the major bookstores (Waterstones, Blackwells, etc). It's been the case for a while that several of his books have been either out of print or, in the case of the new True Names book, delayed somewhat. Publishers should be fighting each other off with big sticks for the right to sell Vinge, I can only assume that they're seeing low sales volumes and steering clear.

    Just to verify Vinge's appeal, I've taken to occasionally buying his books for friends. One of my mates raved over his, calling Marooned in Realtime the best murder-mystery he'd ever read! My girlfriend, on the other hand, has yet to read hers... can't win 'em all, I guess, hey ho.

    Roberto
    • here in the UK, you're lucky to find even one book by him on the shelves of the major bookstores

      Yes, and that's a major problem. Today's SF readers often aren't given the opportunity to find the classic works in the genre. When was the last time you saw a Van Vogt book for sale, for example? Or anything by Alfred Bester? E.E. Doc Smith? Jack Vance? Even specialist bookshops like New Worlds in Charing Cross Road have very little of the classic works. Even the major names (Asimov, Clarke, Heinlein, etc.) only have a few titles available in the major stores. Yes, the world moves on, and shelf space needs to be given to new authors, but I think the balance has gone too far, and we need to remember our roots. But what would I know? I'm interested in the genre, not in maximising profits for the large bookstore chains...

      • Well, all you have to do is wait for the authors to keel over and start pushing up daisies... Reprints of their complete works will soon appear. Happened for Heinlein, same for Asimov, Dick.

        Hmmm... There's a whole series of authors that are long overdue:

        • Clarke
        • Van Vogt
        • Bester
        • etc etc etc
        We have a lot of good classic SF reprints to look forward to over the next few years! ;-)
        • Reprints of their complete works will soon appear. Happened for Heinlein, same for Asimov, Dick.

          Yes, to an extent, that's true, but it's certainly an exaggeration to say that their complete works will be reprinted (I should know, I have a complete Heinlein collection, and a large percentage of it wasn't reprinted after his death). Same for Asimov, but then with over 400 books published, reprinting the whole lot was probably never going to happen anyway. But I guess at least the more important books will become available again, which should give younger readers a taster, and hopefully will trigger a few to go and hunt out the more obscure titles in second hand bookstores and the like.

  • A friend of mine, non-technologist but a writer, was very interested in reading True Names, but later told me he couldn't get through more than the first quarter and that the writing was _bad_.

    I myself think True Names is one of the most breathtaking fictions I've read. But then again I am a futurist in training ;)

    Now the reason I'm so interested in True Names suckage is to find out how to keep the aspects I find awesome and worth propagating, while avoiding aspects of the writing that are, um, contentious.

    And I really want to know because once I'm done my futurist training, I'd really like to write scifi, or better yet, fictions that illustrate the changing nature of mankind and the technology we create.

    Who hates True Names!?! Let me know.

    The Big O

  • I don't know why so many people get this confused. The singularity refers to a point along an asymptotic technological advance curve where the rate of accumulation of knowledge exceeds the speed at which knowledge can actually be accumulated. In other words, it's at the point where information theory deforms and breaks down, somewhat (sort of, mildly) like how physical laws break down when you pass the event horizon of a black hole.

    The singularity has nothing to do with AI. It has never had anything to do with AI. AI doesn't enter into the equation at all, other than in the fevered minds of some so-called futurists (read "X-Files-believing fruitcakes").

    As for "Across Realtime", Vinge never once mentioned AI in "Marooned", where he talked about his take on the Singularity. He said, simply, that Something Happened; humanity disappeared and went someplace else. But that all of us left behind couldn't even begin to imagine what they'd become, much less where they'd gone. No AI written about or hinted at anywhere in the story.

    So what does actually happen when the Singularity is reached? Probably nothing at all. In fact, the singularity is just a popular term for a concept in information theory where the theory isn't capable of predicting events beyond a certain point. That's all it is. No pseudo-scientific clap-trap required.

    Max
