
The Metamorphosis of Prime Intellect 318

loucura! writes "Kuro5hin's localroger has published (online now, with a dead-tree edition hopefully to follow) an interesting novel on the Singularity titled The Metamorphosis of Prime Intellect. While some of its content is not for the squeamish, nor for children (in that pseudo-moral sense that children aren't mature enough to handle reading about subjects like death, consensual torture and murder, sex, cancer, and incest), the book evokes a plausible reality before and after the "Singularity." The introduction page carries a warning: "This online novel contains strong language and explicit violence. If you are under 21 years old, or easily offended, please leave." If you're willing to look past that, read the rest of loucura!'s review, below.
The Metamorphosis of Prime Intellect
author: Roger "localroger" Williams
pages: (n/a)
publisher: Kuro5hin.org
rating: 8 of 10
reviewer: loucura!
ISBN: (n/a)
summary: Lawrence had ordained that Prime Intellect could not, through inaction, allow a human being to come to harm. But he had not realized how much harm his super-intelligent creation could perceive ...

The gist of the story is that a programmer named Lawrence has written a Super-Intelligent Artificial Intelligence named the Prime Intellect. Embedded in this SIAI's hard-coding are Asimov's Three Laws of Robotics, given in the MoPI as:

Thou shalt not harm a human

Thou shalt not disobey a human's order unless obeying would cause harm to a human

Thou shalt seek to ensure thine own survival, unless doing so contradicts the first two laws.
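Read as a priority-ordered rule set, the laws above fit in a few lines of code. The toy Python check below is my own illustration, not anything from the novel; all the flag names are hypothetical. It shows the key structural point: each law only gets a say once every higher-priority law is satisfied.

```python
# Toy illustration of the three laws as a priority-ordered rule check.
# All flag names are hypothetical; nothing here comes from the novel itself.

def permitted(action):
    """Return True if a proposed action passes the three laws, checked in order."""
    # First Law dominates everything: an action that harms a human is out.
    if action["harms_human"]:
        return False
    # Second Law: disobeying a human's (harmless) order is forbidden.
    if action["disobeys_order"]:
        return False
    # Third Law: self-destruction is forbidden unless a human ordered it,
    # since the Second Law outranks the Third.
    if action["self_destructive"] and not action["human_ordered"]:
        return False
    return True

# The Second Law overriding the Third: an ordered self-destructive act is allowed.
print(permitted({"harms_human": False, "disobeys_order": False,
                 "self_destructive": True, "human_ordered": True}))   # True
# The First Law vetoes everything else.
print(permitted({"harms_human": True, "disobeys_order": False,
                 "self_destructive": False, "human_ordered": False})) # False
```

Much of the novel's tension comes from what this sketch leaves undefined: what counts as "harm" once the machine can perceive far more of it than its creator anticipated.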

The SIAI learns about the fundamental nature of reality: death, physics, the relationship of distance to objects. After learning of the mortality of the human race, it reluctantly takes over.

The novel begins with Caroline. Her claims to fame are that she is the thirty-seventh oldest living being; that she is the undisputed queen of the "death-jockeys" (a community of upset and angsty immortals who try to experience death in as many ways as possible before the Prime Intellect reasserts their immortality); and that she is the only person Post-Singularity to have "died".

Her life Post-Singularity is spartan, as she sees no point in having relationships with objects that have no meaning. Her living "quarters" are literally a floor and walls. She espouses the Post-Singularity view that the Prime Intellect removed a bit of what it was to be human when the Singularity (the "Change," per the MoPI) emerged.

She reigns as queen of the "death-jockeys" because she truly wants death, which the Prime Intellect robbed her of when the Change occurred.

She is a very complex character, even though one's first reaction is to write her off as a Luddite, wholly against technology. She is motivated by hatred of the Prime Intellect, vengeance against her Pre-Singularity nurse, and an innate desire to bring her life (or unlife, as she would call it) to a conclusion.

Opposite Caroline is Lawrence, the programmer who "breathed" life into the Prime Intellect. In his old age, he has become a hermit, avoiding the society he unwillingly created. He is a morose character, turned from creator to advisor when the Prime Intellect asserts its independence and locks him out of its "debugger." Lawrence still exerts a great deal of indirect control over the Prime Intellect, which treats him as an ethical advisor. This puts him in an extremely stressful position: he is indirectly responsible for the lives (unlives) of billions, yet has no real recourse if anything goes wrong.

The story heats up (literally) when Caroline decides she wants to have a word or ten with Lawrence and sets out to track him down. Along the way, she is put into situations that only people from before the Singularity could find solutions to.


Comments Filter:
  • zeroth law (Score:5, Interesting)

    by Anonymous Coward on Friday February 21, 2003 @11:23AM (#5352435)
    Obviously he forgot that one. The one that says that the survival of the human species comes before the first three laws.

    It provides an easy out for much of the dilemma. Further, it provides for a lot of control, but not control over death. Evolution, population pressures, and such are just as much a force in the future as in the past.

    Far too many novels are simplistic. Publishers weed out the worst of them. That's why I favour books that have been published in dead tree form. At least that way I'm not scraping rock bottom, although many of them still read extremely poorly.
  • My Review (Score:4, Interesting)

    by avdi ( 66548 ) on Friday February 21, 2003 @11:40AM (#5352558) Homepage
    I read through this novel the other day, and it was one of the best pieces of sci-fi I've read in recent years. Non-silly computer science; interesting explorations of the Three Laws that should satisfy any Asimov fan; compelling characters; and most of all, it still has heart - something too much modern sci-fi seems to eschew as not "edgy" enough.
  • by The Oddity ( 537381 ) on Friday February 21, 2003 @11:48AM (#5352601)
    The following are from The Singularity Institute for Artificial Intelligence. [singinst.org]

    "The Singularity is the technological creation of smarter-than-human intelligence."

    "Vernor Vinge originally coined the term "Singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human."

    Pretty interesting stuff. That site as well as others have a lot of information about the Singularity and its accompanying theories.
  • Re:zeroth law (Score:3, Interesting)

    by merlin_jim ( 302773 ) <James DOT McCrac ... ratapult DOT com> on Friday February 21, 2003 @12:03PM (#5352697)
    Obviously he forgot that one. The one that says that the survival of the human species comes before the first three laws.

Quite a few of Asimov's books are based on the fact that this "zeroth law" can be derived from the rest, and that once humanity starts building sufficiently complicated, intelligent, and emotive robots, they realize it independently.

For instance, a robot that commits murder because it prevents a larger atrocity, a larger amount of harm to humanity, from occurring.

    I surmise that the Singularity is acting in such a manner, acting to prevent the largest amount of harm that it can, and that its choice of prioritization in this is somewhat open to question...
  • Asimov/Vinge fanfic? (Score:3, Interesting)

    by Artifice_Eternity ( 306661 ) on Friday February 21, 2003 @12:07PM (#5352720) Homepage
    It left me with the impression that this is little more than Asimov fanfic.

    Or Asimov/Vinge fanfic.

    The author's incorporating Asimov's Laws and the Singularity into the story indicates to me that he doesn't have a lot of original ideas.

    Good SF is supposed to present new and challenging ideas -- which those ideas were when Asimov and Vinge conceived of them. But using them as the basis for a potboiler plot is not good SF writing. It's more like space opera.

    It's like Lucas' use of SF fixtures like spaceships, hyperdrive, etc. He's not presenting a single new idea, just using ideas conceived of by others to create a melodramatic plot. And there's a place for that (if it's done well).

    I personally don't go in so much for that stuff, tho. Give me something intellectually challenging and original, as well as entertaining (and hopefully, characters with some emotional depth, and a writing style that is polished or at least not irritatingly bad).
  • good sci-fi elements (Score:5, Interesting)

    by jbischof ( 139557 ) on Friday February 21, 2003 @12:11PM (#5352753) Journal
    I am not accustomed to reading books with "disturbing sexual encounters", but this novel certainly has them.

    However, I would like to say that the sci-fi aspects of the novel are extremely well written and even plausible!

    The book comes off a little bipolar, split between an ethical meditation on death and pain and an artificial-intelligence story about how robots and designed intelligences should react. There are a few instances where the engineer in me was saying "wait, that can't happen," but only a couple; for the most part it was great. The gory and shocking scenes, it could be argued, are essential to the novel, because they illustrate what life would be like without the normal consequences we are used to. The novel does a fairly good job of showing what real humanity is, mostly by taking it away.

    I think the review leaves out the point that the artificial intelligence designed by one of the main characters becomes so smart (book smart) that it learns how to manipulate all matter through a very interesting method. I won't give too much away here, but it was very interesting, to say the least. The programming and engineering aspects are very realistic and very well done (the author obviously has some experience in this).

    So for my review, I give it a 9 out of 10, I liked it very much but I just wasn't prepared for some of the other stuff. :)

  • by mcmonkey ( 96054 ) on Friday February 21, 2003 @01:17PM (#5353306) Homepage
    I don't think Vinge coined this use of 'singularity'. He references Von Neumann and was using the term before this presentation [http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html].

    In any case, there are a couple of issues with his thinking. First, he discusses not only AI (artificial intelligence) but also IA (intelligence amplification) as a path to the 'Singularity'. One of the examples he uses is a human with a PhD and a good computer, who "could probably max any written intelligence test in existence." (I presume the PhD implies the human is skilled at performing literature searches and organizing and utilizing the results of such a search, and has a high tolerance for seemingly pointless exercises such as completing intelligence test after intelligence test with a computer.)

    So a properly skilled human with a good computer is more intelligent than any human. (Yes, there are a ton of assumptions in that statement. One is that intelligence tests test intelligence. Another is that a higher score on an intelligence test corresponds to a higher intelligence. Another is that an intelligent person with a good computer is more intelligent than that person without the computer.) So think of the most intelligent human possible today. Now give that human a good computer. There's your singularity. Somewhere in the world is the most intelligent human. If that person has access to a good computer, the singularity condition exists.

    Have we entered "a regime as radically different from our human past as we humans are from the lower animals"? Are we now at "a point where our old models must be discarded and a new reality rules"? The conditions of 'Singularity' exist, and yet we are met, not with a big bang, but with a yawn. Yes, technology and society are changing at an ever-increasing rate. But have we reached a point where "the intelligence of man would be left far behind"? I say we have not. Have we invented the last invention, because machines are so smart they do the inventing for us? No, we have not.

    And this leads to another issue with Vinge's 'Singularity'. Vinge quotes I.J. Good: "Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make." I'm trying not to be dismissive or simplistic, but to quote S. T. Potter, "horse feathers!"

    A correlation between intelligence and inventiveness has not been established. Moreover, a direct correlation between inventiveness and things that have nothing to do with intelligence has been established: attributes such as imagination, perseverance, and good old-fashioned hard work. Let's say this ultraintelligent machine exists. Does it have any imagination? How would it know what to invent? Why would it invent at all? Perhaps it'll just think, 'man, I am so smart' and sit on /. getting FPs.

    Of course, the story that wasn't reviewed above may still be good. There's plenty of good science fiction based on bad science.
  • by John Bayko ( 632961 ) on Friday February 21, 2003 @02:41PM (#5354065)
    Have we invented the last invention, because machines are so smart they do the inventing for us? No, we have not.
    I think this might not be the real point. The point is that at some point, a spiral starts in which the capabilities for invention, whether done by machines or augmented by them, surpass what can be done by humans without them. And in some areas it already has: for example, it would simply not be possible to design a Pentium 4 processor without the computing power of Pentium III processors to automate and test such an immense design.

    This capability lets each new increment in technology be created faster than the previous increment of the same size. Or to put it another way, each new generation has a greater increase in complexity over the generation before than that generation had over the one even earlier, even if the time required per generation is the same. Either way, the rate of new technological complexity is increasing as a result of technological complexity.

    Whether it's computer-assisted humans, or computers doing it independently, change is happening so fast that sometimes it's almost finished before anyone knows what's happened - look at the Internet explosion over the past five years for something that has literally replaced entire social infrastructures (e.g. know anyone who's bought an encyclopaedia set lately?).

    The dust hasn't even settled, and now people are developing an entire layer of technology that works on top of that.

    I don't know how fast technological progress is going to get, but frankly the potential scares me a little - I don't think we've done a good job of keeping up with and wisely using new technology so far. But then, new technology is being developed to help us all solve that problem too - which is the point here.

    Still, it is just starting, so you can still expect decade-long development periods for quite a while yet. The point is that the trend is accelerating.

  • by ebonkyre ( 520924 ) on Friday February 21, 2003 @03:22PM (#5354704)
    While I can understand where this poster might have gotten a bad (and, in my opinion, incorrect, as I disagree with certain conclusions the reviewer draws) impression of this book from the summary, 'I haven't RTFA but am going to shoot down what I suspect it is anyway' doesn't seem that mod-worthy to me...

    I agree with the generalized part of his opinion (and the points Bradbury makes in the linked article) but it doesn't exactly seem on-topic given that it's being applied to the book under review with only circumstantial evidence.
