Charles Stross Interview 157
An anonymous reader writes "I'm surprised nobody mentioned this yet: a very interesting interview with author Charles Stross, whose current cycle of singularity-based stories Accelerando (featuring character Manfred Macx) is as tightly-packed with cutting-edge speculations as Bruce Sterling's work. An excerpt from the first of those stories is currently available on the Asimov's Science Fiction Magazine website."
Re:In case it's slashdotted (Score:1)
Re:In case it's slashdotted (Score:1)
Singularity (Score:2, Interesting)
For a while this was the link Jeeves gave you if you asked him the meaning of life; it was the only useful thing I ever found using that search engine.
Re:Singularity (Score:2, Informative)
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-
It was in trying to imagine a world where this wouldn't happen that he created his "Zones of Thought" novels.
Re:Singularity (Score:3, Insightful)
Of course, once we can make an A.I. as smart as a lab rat, progress should happen really, really fast thereafter, so maybe he's right.
All this reminds me of some old Chinese curse having to do with living in interesting times.
Re:Singularity (Score:1)
Estimates of the processing power of a human
brain run from 100 to 100,000 Teraflops.
A commodity cluster can be bought for about $300K
per Teraflops at the moment. A human-equivalent
machine is worth about 5 humans because it can
work 24x7, and a human (including overhead +
salary + benefits) costs something like $120K per
year, if they are doing technical work like
designing CPUs. Assume you would want to amortize
your computer cluster over 2 years. Therefore
a human-equivalent machine would be worth 10
man-years, or $1.2M. Today this buys 4 Teraflops,
or 1/25th of the lower bound for human-equivalence.
So, applying an 18 month Moore's Law doubling time,
we have 7 to 22 years until human equivalent
machines become affordable, plus however long it
will take to program them and/or let them learn on their own. This will be in the range of 0 to 7
years. Once you get more-than-human equivalent
machines, the Moore's Law time constant will shrink
as they design their successors faster and
faster. In another 3 years (18 months + 9 months
+ 4.5 months + ..., a series converging to 36
months) we either hit the Singularity or smack
into some fundamental limit of the universe that
prevents further progress.
Aside from the machines designing the next
generation of smarter machines in an accelerating
feedback loop, other machines will be accelerating
progress in all other scientific and technical
fields.
To sum up, The End of Life as We Know It is due
in about 10 to 32 years unless (a) there is a
limit to technology, especially in computers,
that we hit before the singularity, or (b) we
sufficiently mess up our civilization to stop
or set back progress; i.e. nuclear war, someone
crosses the flu and ebola viruses, etc.
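The back-of-the-envelope arithmetic above can be checked with a short script. (All the inputs -- the 100 to 100,000 Teraflops brain estimate, $300K per Teraflops, the 5-human worth, and the 18-month doubling time -- are the post's own assumptions, not established figures.)

```python
import math

# Assumptions taken from the post above, not established figures.
BRAIN_TFLOPS_LOW, BRAIN_TFLOPS_HIGH = 100, 100_000  # human brain estimates
COST_PER_TFLOPS = 300_000        # dollars per Teraflops, commodity cluster
HUMAN_COST_PER_YEAR = 120_000    # salary + overhead + benefits
WORTH_IN_HUMANS = 5              # a machine works 24x7
AMORTIZE_YEARS = 2
DOUBLING_TIME_YEARS = 1.5        # Moore's Law doubling time

budget = WORTH_IN_HUMANS * HUMAN_COST_PER_YEAR * AMORTIZE_YEARS  # $1.2M
tflops_today = budget / COST_PER_TFLOPS                          # 4 Teraflops

def years_until(target_tflops):
    """Years of Moore's Law doublings until target becomes affordable."""
    doublings = math.log2(target_tflops / tflops_today)
    return doublings * DOUBLING_TIME_YEARS

print(round(years_until(BRAIN_TFLOPS_LOW)))   # 7 years (lower bound)
print(round(years_until(BRAIN_TFLOPS_HIGH)))  # 22 years (upper bound)

# The accelerating-doubling tail: 18 + 9 + 4.5 + ... months is a
# geometric series summing to 36 months, i.e. 3 more years.
tail_months = sum(18 * 0.5**n for n in range(60))
print(round(tail_months / 12))  # 3 years
```

Adding the 3-year tail to the 7-to-22-year range gives the post's 10-to-32-year estimate.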
Daniel
Re:Singularity (Score:2)
Thanks for the very interesting response. My thoughts below.
> So, applying an 18 month Moore's Law doubling time,
> We have 7 to 22 years until human equivalent
> machines become affordable, plus however long it
> will take to program them and/or let them learn on
> their own. This will be in the range of 0 to 7
> years.
Yes, this is a possible timeline assuming that developing a functional A.I. is based on the human model of intelligence. I think this is mistaken. Definition time, so here goes.
Traditionally, A.I. has been defined as functional intelligence based on the human intelligence model, as much as we can understand what human intelligence _is_. But do we need to use the human model to get functional artificial (created) intelligence? I don't think so. In fact, I think the use of the human model for the creation of A.I. has been an interesting exercise in collective masturbation; stimulating and amusing maybe, but not very satisfying when one considers the results.
I suggest maybe we need to further define what we mean when we use "artificial intelligence," and so I propose this forking of the term: when referring to attempts to replicate human intelligence with machines, we use the term "A.H.I." or Artificial Human Intelligence, and when we're referring to attempts to create intelligence on machines without using the human model, we use the term "A.M.I." or Artificial Machine Intelligence.
When you look at these two different problem sets, it becomes immediately obvious (at least to me) that there is an order of magnitude difference in complexity between creating a functional A.H.I. and creating a functional A.M.I. Where A.H.I. requires building a human mind in a machine to achieve success, A.M.I. merely requires that the machine be intelligent enough to accomplish some specific set of tasks or goals. In other words, the A.M.I. does not need to be as smart as a Plato, only as smart as a lab rat: successful A.M.I. doesn't require all the complexity of A.H.I. to achieve the goal of functional intelligence.
I suspect the I.B.M. researchers working on what I'm referring to as A.M.I. are using basically the same definition; they are already claiming they've achieved machine self-awareness, a key element required for building complexity in a thinking machine capable of independent, constructive actions.
> Once you get more-than-human equivalent
> machines, the Moore's Law time constant will shrink
> as they design their successors faster and
> faster. In another 3 years (18 months + 9 months
> + 4.5 months + ..., a series converging to 36
> months) we either hit the Singularity or smack
> into some fundamental limit of the universe that
> prevents further progress.
>
> To sum up, The End of Life as We Know It is due
> in about 10 to 32 years unless (a) there is a
> limit to technology, especially in computers,
> that we hit before the singularity, or (b) we
> sufficiently mess up our civilization to stop
> or set back progress; i.e. nuclear war, someone
> crosses the flu and ebola viruses, etc.
I'm not at all sure we'll ever get more-than-human equivalent machines using A.H.I., but I'm sure we'll get different-than-human equivalents using A.M.I.
To digress a bit: a different but related problem I'm currently working on is contextual self-awareness. This is most interesting when examined from the two different perspectives of A.H.I. and A.M.I. Where A.H.I. requires that tremendous amounts of data from multiple sources be retrieved and sorted, analyzed and sorted again, categorized and sorted again, stored and sorted again and again and again when retrieved for processing (the thinking process), A.M.I.'s requirements for data are much, much simpler at every step involved.
Anyway, enough for now. To sum up this post in a sentence: We may never see successful artificial human intelligence (AHI) in a machine, but we'll see artificial machine intelligence (AMI) in machines very, very soon. In fact, I suspect it's already extant and coming soon to a machine near you.
Cheers and thanks again for the reply.
James
Re:Singularity (Score:3, Funny)
I would expect diamondoid drextech - full-scale molecular nanotechnology - to take a couple of days; a couple of hours minimum, a couple of weeks maximum. Keeping the Singularity quiet might prove a challenge, but I think it'll be possible, plus we'll have transhuman guidance. Once drextech is developed, the assemblers shall go forth into the world, and quietly reproduce, until the day (probably a few hours later) when the Singularity can reveal itself without danger - when there are enough tiny guardians to disable nuclear weapons and shut down riots, keeping the newborn Mind safe from humanity and preventing humanity from harming itself.
The planetary death rate of 150,000 lives per day comes to a screeching halt. The pain ends. We find out what's on the other side of dawn.
(After a series of Singularity/Monty Python takeoffs on the Extropians list, Joseph Sterlynne suggested that the systems should be programmed so that the instant before it happens we hear the calm and assured voice of John Cleese:
"And now for something completely different.")
Re:Singularity (Score:2, Interesting)
Maybe our machines will leave Earth to go someplace where humanity can't bother them, and leave humanity here to rot. The high atmosphere of Venus, or the asteroid belt, or the mantle of the Earth...the smart assemblers could quickly adapt to many possible different homes. Sure, the humans could eventually follow and start annoying the smart nanobots, but if they can think as fast as some people think they might, they could quickly evolve into something even further beyond us (Femtotech? Machines based on the strong nuclear force? WTF-tech?) and leave us mouth-breathers mired in molecular molasses while they colonize the core of the sun, or quit the universe altogether!
Perhaps more likely is the idea that it would deliberately disassemble the OLD biosphere as raw material to build the new one. I don't mean an accidental "gray goo" scenario, but rather a deliberate decision by the most advanced 'life' form to dismantle a collection of obsolete machinery to free up space. After all, we've consciously done something similar to countless other species in the name of progress, too.
I wouldn't hold my breath hoping that "we" will find out what's on the other side of the singularity...even if the first group to build nanotech doesn't use it to kill 99.99% of us because they want mansions with BIG front lawns, then it's possible our tools will simply 'get uppety' and decide that they simply don't need us anymore.
When I hear discussions about how we will all see a utopic future brought about by some future technology, I'm reminded of the Sci-fi classic "When Worlds Collide". I'm sure that many of the people building the rocket to take humanity to the new world (and away from the doomed Earth) thought they were going to get a seat for themselves. But in the end, almost everyone in the world got left behind when the fateful moment arrived.
-dexter "Two thousand zero zero, party almost out of time" riley
Re:_party over, oops, out of time_ (Score:1)
Re:Singularity (Score:2)
But we should remember that while nano-tech may buy us a stay of execution from Malthus, it's not a pardon until we change our ways. Not with the best will in the world on the part of the machines.
And the problem that's always bothered me about these scenarios is, how do you implement the control that is needed? Of course, nobody knows the answer to that one in detail, but I haven't even heard any crude guesses at a workable direction.
Still, that's only one form that the singularity could take. Don't fixate on it, or you will be quite surprised. Personally I expect that one day the net will wake up, without any real planning on anybody's part. And we may not even know when it happens. (Just because it's awake doesn't mean that it speaks any human language. It merely means that it's become aware of being aware of its environment. And that it uses this information to select its purposes, priorities, and actions.)
Maybe it's approaching time for the "Genetic AI @ Home" screen saver.
Re:Singularity (Score:2)
Whether this analogy works is the question. Transhumanists speak of technology and "advancement" as if it were some kind of measurable substance, like phlogiston.
I like to think of humanity's future more in terms of currently observable animal phenomena, considering our great similarities to other fauna. For instance, take the humble Dictyostelium discoideum. These amoebae will live on their own, foraging for food. Once it becomes scarce, they signal each other and come together as one. They form into a vehicle, a slug, and move to another food source. Then some amoebae sacrifice themselves to form a solid stalk, which the rest of the amoebae climb to the top of. They form a spore, which then explodes and scatters all over the food, allowing them to forage as individuals once again. Read more here [uni-muenchen.de]
Considering that as technology gets more powerful more and more people will have the ability to destroy the human race with a flip of a switch, there will have to be some survival mechanism [see above] in which we can scatter ourselves across the [solar system/galaxy/universe/multiverse/spiritual sky] to assure our survival.
Technology like the internet brings us closer together as one. For those of you who experiment with psychedelics, you may or may not already know that telepathy is possible, so the nature of humanity coming together doesn't necessarily have to be only technological. In fact, boundaries are simply models we place on the world to understand it; we're all together in a big mush anyway, we just don't realize it. Maybe technology and religion aren't that different...
peace out
LS
Re:Singularity (Score:2)
No no, that's German for 'The Mentifex, The'
Re:Stross (Score:1)
Re:Stross (Score:1)
Re:Stross (Score:1)
Re:Stross (Score:1)
Re:Stross (Score:1)
Sterling's strength is not (Score:2, Interesting)
How does he compare to Vernor Vinge? (Score:3, Informative)
His writings are suffused with it. It is a key theme in A Fire Upon the Deep [amazon.com] and Marooned in Realtime [amazon.com]. It also weighs heavily in the background of A Deepness in the Sky [amazon.com]. All IMO are brilliant pieces of SF.
Re:How does he compare to Vernor Vinge? (Score:2)
Singularity? Please! (Score:2)
Re:Singularity? Please! (Score:3, Informative)
Extropian graphs are like metaphors... they are a way of describing something, but they do not take priority over the real thing. Similarly, those graphs are just a demonstration of the larger point of the difficulty of predicting the near-future in an exponentially-progressing-technology era.
The graphs flow from the arguments, not vice versa.
Obviously (Score:2)
You have obviously never had to give a presentation to upper management. They are a peculiar species, unable to understand words. They can only be communicated to in a very limiting fashion via colored 3d graphs and charts. Unfortunately most of the important information is lost in the translation...
Re:Obviously (Score:1)
1) Does it go up toward the right? Good!
2) Does it go down toward the right? Bad!
A mystery solved (Score:2)
CS on intellectual "property" (Score:1)
Are you still surprised? (Score:1)
And now, are you surprised nobody's commenting on this story? Perhaps there's a pattern, here.
Speaking of news sites dying... (Score:3, Interesting)
Worst S/N ratio ever!
</CBG>
Just to stay on-topic to some extent, here's [asimovs.com] his story in Asimov's [asimovs.com]. Definitely worth a read! Has a sense of humor that reminds me of Stephenson.
Re:Speaking of news sites dying... (Score:1)
Please remove this deep link immediately or we will sue. Have a nice day.
Sincerely,
Asimov's Science Fiction
Re:Speaking of news sites dying... (Score:2)
But it seems more fantasy than attempted projection. Sorry, but I don't feel that people would ever choose to create that society. Shockwave Rider (John Brunner) was more convincing. Also, it implies a much slower rise time for the Singularity than I find probable. (In The Peace War, Vernor Vinge was explicit in saying that he had to insert a war to slow down the rate of technical expansion.)
I, personally, expect the singularity to arrive before 2030, and I would be surprised if it arrived before 2010. And 2010 is pretty close. In fact, extremely close. But watch the way the news fluctuates from day to day, or look at how fantastic Science Fiction (not fantasy) is becoming, and you'll see signs.
The current extreme reactions of the government are partially caused by the growing awareness that they can't project very far into the future. It's the butterfly principle writ large. In a stable environment, most of the chaos averages out, and only a little is left. We have been creating an environment where each change amplifies other changes that were in the process of implementation, so you have cascades of changes. Some of them act like fashions, and have no lasting effect. Others, unpredictably, sweep over everything like a phase change. And you can't tell which is which in advance. Well, WE know that computers are one of the big ones, and now everyone knows that. But is nano-tech? Probably. But there's that level of uncertainty. And it's not a yes or no question. Once you decide it's important, you need to decide how it's going to act, and how you should respond to it. And given that it's only one of numerous changes in progress simultaneously...
Some days I wake up, look at the news, and say to myself "Only the singularity can save us now". Other days I wake up, look at the news, and say to myself "We'd have it made if it weren't for the singularity." Is one true? The other? Both?
I think that this is what he is trying to convey with his piling of fantastic feature on top of fantastic feature. It doesn't work for me, but then I don't know what could. (True Names comes close, but that's one of a kind.)
This is what Robert Anton Wilson called the "Jumping Jesus" phenomenon. (Take all the knowledge in the world at 1 AD, and call that the standard unit of 1 Jesus. What's the doubling time? He figured that it was a decreasing function, i.e., each successive doubling took less time than the previous one, and that the 2 Jesus mark was reached before the Renaissance.)
I find that interesting and provocative, but the important interval measures applied techniques, and the closest thing I've seen to that is the number of patents (a grossly misleading statistic). So without a meaningful measure, or even a useful unit, all I've got is a gut feeling. But it seems that the relevant function is increasing quite rapidly. Thus my estimate of 2010 to 2030. I would be moderately surprised if the people of 2031 still spoke a language that I would recognize as English. I expect that much change.
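Wilson's decreasing-doubling-time idea has a neat property that can be sketched numerically: if each doubling takes a fixed fraction of the time the previous one took, the intervals form a geometric series and the total elapsed time converges, so the quantity "reaches infinity" at a finite date. (The 1500-year first interval and the halving ratio below are illustrative assumptions, not Wilson's figures.)

```python
# Illustrative sketch: if each knowledge doubling takes r times as long
# as the one before (r < 1), the sum of all doubling intervals is the
# geometric series first / (1 - r) -- a finite-time blowup.
def blowup_time(first_doubling, r):
    """Total time until infinitely many doublings have occurred."""
    return first_doubling / (1 - r)

# e.g. a 1500-year first doubling, each later one half as long:
print(blowup_time(1500, 0.5))  # 3000.0 -- everything within 3000 years

# The same structure as Daniel's 18 + 9 + 4.5 + ... months above:
print(blowup_time(18, 0.5))    # 36.0 months
```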
has anyone here actually read the story? (Score:1, Insightful)
i can't believe that people are saying that this story is dead, based on its alleged "low" numbers.
based on the interview that i read i would have to say the dude sounds pretty interesting and well read. that and his experience as a writer lead me to believe that this could be a kewl read.
it sounds to me as if the above posts, in the majority, have not read the actual story.
what gives? i thought we were supposed to be smart.
Interesting stuff... (Score:2)
Personally, I don't care if the guy is an asshole or a saint, it's his ideas and the mixing of ideas which is interesting and fun.
Comparing authors is pointless to me in that no two are alike even if they're writing on similar subjects. This Lobster story is still fresh and to say it's just more cyberpunk is both unfair and untrue. It's like saying the punk rock of the early 80's left no room for anything else and all the new punk stuff is therefore just rehashed trash (which is obviously not true.)
"Lobster" was a good, if challenging, read and the author proves interesting in the interview. I'll be looking for more of his work to read and I'm sure -- I do mean positive -- that many of the readers of Slashdot would enjoy both the lobster story and the interview.
Is there a troll-fest happening tonight? I must 'ave lost me invite!
2000 story (Score:4, Interesting)
Please explain "geek code" (Score:1, Offtopic)
Re:Please explain "geek code" (Score:2, Informative)
HTH, HAND.
test (Score:1)
It has lots of Linux, Perl and SF - what more could you want?
I administer Charlie's webserver (Score:2)
Watching the logs it looks like we're OK at the moment, but we don't have all the bandwidth in the world.
Oh and he just signed my emergency passport application, so I'm not going to say anything else rude about him
Re:I administer Charlie's webserver (Score:5, Interesting)
Incidentally, I have it on good authority that the Oxford English Dictionary is going to cite "Lobsters" as the first use of slashdot as a verb -- turns out that the OED editors have still got this quaint prejudice in favour of hardcopy, so being in a book in the British Library (or US Library of Congress) gets you into the OED, and being on slashdot itself doesn't.
The OED's prejudice is reasonable (Score:2)
I occasionally review technical papers, and people are increasingly using URLs as references. Trouble is, in a large number of cases the URLs are dead links by the time I do the review; by the time of publication they're completely dead.
At least dead trees don't have the habit of disappearing from existence without warning.
Re:The OED's prejudice is reasonable (Score:1)
Unreleased novel: "Scratch Monkey" (Score:2, Informative)
Scratch Monkey is definitely worth reading.
PS: hi Charlie! This article is the equivalent of being on the cover of the Rolling Stone, yea?
overdose on Slashdot :-) (Score:1)
CS: I wrote "Lobsters" and showed it to a friend. He said "that's really cool, but you'll never sell it--the audience would have to overdose on Slashdot for six months before they got it." He was completely right--he just underestimated the number of people out there who overdose on Slashdot!
Cyberpunk and Free software (Score:1)
Aineko? (Score:1)
Re:Aineko? (Score:1)
Re:Aineko? (Score:1)
Re:Aineko? (Score:1)
What will the universe allow? (Score:3, Insightful)
The most significant factor in singularity is determining what is actually possible under the constraints of physical laws. In all likelihood the universe is not infinitely malleable to our will. Eventually, what is technically possible will reach a plateau, where nothing more advanced can be made.
The most straightforward example is faster than light travel. The universe seems to have a set limit on how fast an object can travel from point A to point B. There may be ways around this by warping space. But there are limits on how much space you can warp. Eventually we will reach a point where we cannot travel any faster from point A to B.
There are probably some people out there saying "But we don't know what the limits are. People used to say it was impossible to go faster than the speed of sound." That's true, we don't know what the limits are, therefore we should act like there are no limits ... yet. But someday we will figure this universe out and then we'll know the limits. We'll know the fastest speed. We'll know the boundaries of what is possible, and we will build to those boundaries. We'll travel as fast as possible. We'll make ourselves as intelligent as beings can be under the constraints of the universe. We'll live as long as possible. And technology will be at a plateau from which it cannot grow any higher.
Re:What will the universe allow? (Score:1)
The confidence with which you make your baseless claims is hilarious.
Re:What will the universe allow? (Score:2)
Are you assuming that technology has no limits placed upon it by the laws of the universe?
Re:What will the universe allow? (Score:1)
No, I do not make baseless assumptions. I do, however, find it plausible that technology will at some point be able to change the laws of the universe.
Re:What will the universe allow? (Score:2)
I stated that at this point we should proceed as if there is no limit, because we don't know what it is. But I believe (and I could be wrong) that some day we will hit a wall, where the universe will allow no more technological advancement. To assume otherwise is to believe that the universe is infinitely malleable to our will. An interesting philosophical question is whether or not an infinitely malleable universe is possible.
FTL? Perhaps not, but... (Score:2)
If you want to build an interworld empire, then you appear to have problems, but if you want to shorten the trip, then several approaches are plausible.
The simplest one is frozen sleep.
The fanciest one is to upload yourself into a computer, put yourself on pause, until you reach the destination, and then download yourself into a new body.
The best one is MacroLife. Redesigning things so that you live in a mobile space colony that roams from star to star, grazing on the cometary belts, and occasionally mining from the moons or asteroids (usually only needed for major repairs, or to fission the colony into two).
The physical vessel that will contain the MacroLife should be buildable before the singularity. The design of the society is more dubious. It would need to be quite stable. And if it were too aggressive, then it would be dangerous to create, whereas if it were too passive, then it would be subject to hostile takeovers. Not an easy problem.
Re:FTL? Perhaps not, but... (Score:2)
An interstellar empire would be feasible if there existed sentient beings with a lifespan that was measured in the millions of years. Then the trips between stars at about 10% c wouldn't seem all that long, and there would be enough continuity to maintain an interstellar culture.
Re:FTL? Perhaps not, but... (Score:1)
The passage of time is relative, with a dilation factor of 1/sqrt(1 - (v/c)^2). If your v is small compared to c, then the factor is near 1. If your v is, say,
If you can go
It should take about a year to get up to near light speed at an acceleration of 1 gravity. Of course, you have to get all that energy from somewhere, but I'm sure you can pull together some kind of Bussard Ramjetty thing to do it with, since we're assuming that we're at the singularity.
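Both numbers check out with a few lines of arithmetic: the dilation factor is 1/sqrt(1 - (v/c)^2), and the time to reach c at a constant 1 g is roughly c/g seconds in the naive Newtonian approximation (the full relativistic answer differs, but it's the same order of magnitude):

```python
import math

C = 2.998e8   # speed of light, m/s
G = 9.81      # 1 gravity, m/s^2

def dilation(v_over_c):
    """Relativistic time dilation factor gamma = 1/sqrt(1 - (v/c)^2)."""
    return 1 / math.sqrt(1 - v_over_c**2)

print(dilation(0.1))    # ~1.005 -- barely noticeable at 10% of c
print(dilation(0.995))  # ~10 -- ship clocks run 10x slower

# Naive (Newtonian) time to reach light speed at a constant 1 g:
days = C / G / 86400
print(round(days))      # ~354 days, i.e. "about a year"
```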
Re:FTL? Perhaps not, but... (Score:2)
Now it you could tap the vacuum point energy... but that one's probably a fantasy. That's probably one that the universe doesn't permit.
Re:FTL? Perhaps not, but... (Score:1)
I agree the energy requirements are ludicrous, but we are talking about the capabilities of entities capable of whatever is physically possible.
Your radical ideas about... (Score:1)
That url takes care of responding to most of your post.
Now to comment on the first part of your first sentence:
'The most significant factor in singularity' - that wordset is polysemous. Do you mean 'the most significant factor in the character of what life will feel like beyond singularity', 'the most significant factor in whether (and when) there will be a singularity', 'the most significant factor in the present day discussion of what it will feel like/whether there will be a singularity', I could go on, I'm just getting started, 'the most significant factor in where the present day discussion of singularity *should be at or should go*', etc. etc.
You have given us a post with almost infinite interpretations. Polysemy is a good thing, as long as the number of potential interpretations doesn't get out of hand. You have given us a post with *too many* interpretations. Please more sharply specify what you are saying so that we can attack or praise it specifically.
- kaidaejin@NoSpam@hotmailcom
A Colder War (Score:1)
I used to live with his girlfriend... (Score:1)
Re:I used to live with his girlfriend... (Score:1)