News

Charles Stross Interview

An anonymous reader writes "I'm surprised nobody mentioned this yet: a very interesting interview with author Charles Stross, whose current cycle of singularity-based stories, Accelerando (featuring the character Manfred Macx), is as tightly packed with cutting-edge speculation as Bruce Sterling's work. An excerpt from the first of those stories is currently available on the Asimov's Science Fiction Magazine website."
  • Singularity (Score:2, Interesting)

    For some background on the coming technological singularity, and some general good reading, see The Meaning Of Life [sysopmind.com]

    For a while this was the link Jeeves gave you if you asked him the meaning of life; it was the only useful thing I ever found using that search engine.
    • Re:Singularity (Score:2, Informative)

      by spinwards ( 468378 )
      Here is what Vernor Vinge has to say about the singularity. Watch out, it's a bit fatalistic.

      http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html [caltech.edu]

      It was in trying to imagine a world where this wouldn't happen that he created his "Zones of Thought" novels.

      • Re:Singularity (Score:3, Insightful)

        Well, I haven't yet read through all of his paper (I'll work on it), but his basic proposition that super A.I. will take over the universe is a bit of a stretch, considering we can't even make an A.I. that can match wits with a lab rat and pull off a draw.

        Of course, once we can make an A.I. as smart as a lab rat, progress should happen really, really fast thereafter, so maybe he's right.

        All this reminds me of some old Chinese curse having to do with living in interesting times.

        • Estimates of the computer power required to match a human brain run from 100 to 100,000 Teraflops. A commodity cluster can be bought for about $300K per Teraflop at the moment. A human-equivalent machine is worth about 5 humans because it can work 24x7, and a human (including overhead + salary + benefits) costs something like $120K per year if they are doing technical work like designing CPUs. Assume you would want to amortize your computer cluster over 2 years. Therefore a human-equivalent machine would be worth 10 man-years, or $1.2M. Today this buys 4 Teraflops, or 1/25th of the lower bound for human equivalence.

          So, applying an 18-month Moore's Law doubling time, we have 7 to 22 years until human-equivalent machines become affordable, plus however long it will take to program them and/or let them learn on their own. This will be in the range of 0 to 7 years. Once you get more-than-human-equivalent machines, the Moore's Law time constant will shrink as they design their successors faster and faster. In another 3 years (18 months + 9 months + 4.5 months + ...) you will either reach a Singularity or smack into some fundamental limit of the universe that prevents further progress.

          Aside from the machines designing the next generation of smarter machines in an accelerating feedback loop, other machines will be accelerating progress in all other scientific and technical fields.

          To sum up, The End of Life as We Know It is due in about 10 to 32 years unless (a) there is a limit to technology, especially in computers, that we hit before the singularity, or (b) we sufficiently mess up our civilization to stop or set back progress, e.g. nuclear war, or someone crossing the flu and ebola viruses.
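
          For anyone who wants to check the arithmetic, here is a minimal Python sketch (the flops range, cluster price, and salary figures are the assumptions stated above, not measured data):

          import math

          BRAIN_TFLOPS = (100, 100_000)  # assumed range for human equivalence
          COST_PER_TFLOP = 300_000       # dollars, commodity cluster
          BUDGET = 1_200_000             # 10 man-years at $120K/year
          DOUBLING_YEARS = 1.5           # Moore's Law doubling time

          tflops_today = BUDGET / COST_PER_TFLOP  # 4 Teraflops
          for target in BRAIN_TFLOPS:
              doublings = math.log2(target / tflops_today)
              print(f"{target} TF: affordable in about {doublings * DOUBLING_YEARS:.0f} years")
          # -> about 7 years for 100 TF and 22 years for 100,000 TF

          # The accelerating phase: 18 + 9 + 4.5 + ... months is a geometric
          # series summing to 36 months, i.e. the extra ~3 years quoted above.
          print(sum(18 / 2**k for k in range(64)) / 12, "years")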

          Daniel
          • Hi Daniel,

            Thanks for the very interesting response. My thoughts below.

            > So, applying an 18-month Moore's Law doubling time, we have
            > 7 to 22 years until human-equivalent machines become affordable,
            > plus however long it will take to program them and/or let them
            > learn on their own. This will be in the range of 0 to 7 years.

            Yes, this is a possible timeline assuming that developing a functional A.I. is based on the human model of intelligence. I think this is mistaken. Definition time, so here goes.

            Traditionally, A.I. has been defined as functional intelligence based on the human intelligence model, insofar as we can understand what human intelligence _is_. But do we need to use the human model to get functional artificial (created) intelligence? I don't think so. In fact, I think the use of the human model for the creation of A.I. has been an interesting exercise in collective masturbation; stimulating and amusing maybe, but not very satisfying when one considers the results.

            I suggest maybe we need to further define what we mean when we use "artificial intelligence", and so I propose this forking of the term: when referring to attempts to replicate human intelligence with machines, we use the term "A.H.I." or Artificial Human Intelligence, and when we're referring to attempts to create intelligence on machines without using the human model, we use the term "A.M.I." or Artificial Machine Intelligence.

            When you look at these two different problem sets, it becomes immediately obvious (at least to me) that there is an order-of-magnitude difference in complexity between creating a functional A.H.I. and creating a functional A.M.I. Where A.H.I. requires building a human mind in a machine to achieve success, A.M.I. merely requires that the machine be intelligent enough to accomplish some specific set of tasks or goals. In other words, the A.M.I. does not need to be as smart as a Plato, only as smart as a lab rat: successful A.M.I. doesn't require all the complexity of A.H.I. to achieve the goal of functional intelligence.

            I suspect the I.B.M. researchers working on what I'm referring to as A.M.I. are using basically the same definition; they are already claiming they've achieved machine self-awareness, a key element required for building complexity in a thinking machine capable of independent, constructive actions.

            > Once you get more-than-human-equivalent machines, the Moore's
            > Law time constant will shrink as they design their successors
            > faster and faster. In another 3 years (18 months + 9 months +
            > 4.5 months + ...) you will either reach a Singularity or smack
            > into some fundamental limit of the universe that prevents
            > further progress.
            >
            > To sum up, The End of Life as We Know It is due in about 10
            > to 32 years unless (a) there is a limit to technology,
            > especially in computers, that we hit before the singularity,
            > or (b) we sufficiently mess up our civilization to stop or set
            > back progress, e.g. nuclear war, or someone crossing the flu
            > and ebola viruses.

            I'm not at all sure we'll ever get more-than-human equivalent machines using A.H.I., but I'm sure we'll get different-than-human equivalents using A.M.I.

            To digress a bit into a different but related problem I'm currently working on: contextual self-awareness. This is most interesting when examined from the two different perspectives of A.H.I. and A.M.I. Where A.H.I. requires that tremendous amounts of data from multiple sources be retrieved and sorted, analyzed and sorted again, categorized and sorted again, stored and sorted again and again and again when retrieved for processing (the thinking process), A.M.I.'s data requirements are much, much simpler at every step.

            Anyway, enough for now. To sum up this post in a sentence: we may never see successful artificial human intelligence (AHI) in a machine, but we'll see artificial machine intelligence (AMI) in machines very, very soon. In fact, I suspect it's already extant and coming soon to a machine near you.

            Cheers and thanks again for the reply.

            James

    • by zephc ( 225327 )
      There is a great clipping from one of the papers on the singularity and nanotech:

      I would expect diamondoid drextech - full-scale molecular nanotechnology - to take a couple of days; a couple of hours minimum, a couple of weeks maximum. Keeping the Singularity quiet might prove a challenge, but I think it'll be possible, plus we'll have transhuman guidance. Once drextech is developed, the assemblers shall go forth into the world, and quietly reproduce, until the day (probably a few hours later) when the Singularity can reveal itself without danger - when there are enough tiny guardians to disable nuclear weapons and shut down riots, keeping the newborn Mind safe from humanity and preventing humanity from harming itself.

      The planetary death rate of 150,000 lives per day comes to a screeching halt. The pain ends. We find out what's on the other side of dawn.

      (After a series of Singularity/Monty Python takeoffs on the Extropians list, Joseph Sterlynne suggested that the systems should be programmed so that the instant before it happens we hear the calm and assured voice of John Cleese:

      "And now for something completely different.")
      • Re:Singularity (Score:2, Interesting)

        The picture you describe of the singularity is prosaic, but here are two other options.

        Maybe our machines will leave Earth to go someplace where humanity can't bother them, and leave humanity here to rot. The high atmosphere of Venus, or the asteroid belt, or the mantle of the Earth...the smart assemblers could quickly adapt to many possible different homes. Sure, the humans could eventually follow and start annoying the smart nanobots, but if they can think as fast as some people think they might, they could quickly evolve into something even further beyond us (Femtotech? Machines based on the strong nuclear force? WTF-tech?) and leave us mouth-breathers mired in molecular molasses while they colonize the core of the sun, or quit the universe altogether!

        Perhaps more likely is the idea that it would deliberately disassemble the OLD biosphere as raw material to build the new one. I don't mean an accidental "gray goo" scenario, but rather a deliberate decision by the most advanced 'life' form to dismantle a collection of obsolete machinery to free up space. After all, we've consciously done something similar to countless other species in the name of progress, too.

        I wouldn't hold my breath hoping that "we" will find out what's on the other side of the singularity... even if the first group to build nanotech doesn't use it to kill 99.99% of us because they want mansions with BIG front lawns, then it's possible our tools will simply 'get uppity' and decide that they don't need us anymore.

        When I hear discussions about how we will all see a utopian future brought about by some future technology, I'm reminded of the sci-fi classic "When Worlds Collide". I'm sure that many of the people building the rocket to take humanity to the new world (and away from the doomed Earth) thought they were going to get a seat for themselves. But in the end, almost everyone in the world got left behind when the fateful moment arrived.

        -dexter "Two thousand zero zero, party almost out of time" riley
        • It all depends on goals, motives, and technical competence. And if you don't have enough technical competence, the most likely result is a disgusting failure.

          But we should remember that nano-tech may buy us a stay of execution from Malthus; it's not a pardon unless we change our ways. Not with the best will in the world on the part of the machines.

          And the problem that's always bothered me about these scenarios is, how do you implement the control that is needed? Of course, nobody knows the answer to that one in detail, but I haven't even heard any crude guesses at a workable direction.

          Still, that's only one form that the singularity could take. Don't fixate on it, or you will be quite surprised. Personally I expect that one day the net will wake up, without any real planning on anybody's part. And we may not even know when it happens. (Just because it's awake doesn't mean that it speaks any human language. It merely means that it's become aware of being aware of its environment, and that it uses this information to select its purposes, priorities, and actions.)

          Maybe it's approaching time for the "Genetic AI @ Home" screen saver.

    • The concept of the singularity is an obvious analogy to the black hole, except with an infinite density of technology instead of mass. With a black hole, there is a feedback loop, with gravity pulling particles closer together, therefore increasing gravity, which then pulls them even closer together, and so on. With technology, the feedback loop is that of building technology to enhance humanity, who can then build even better technology to enhance humanity further, leading to some state of divine catharsis, where "technology is indistinguishable from magic", quoth Arthur C. Clarke. Or where the boundaries of reality break.

      Whether this analogy works is the question. Transhumanists speak of technology and "advancement" as if they were some kind of measurable substance, like phlogiston.

      I like to think of humanity's future more in terms of currently observable animal phenomena, considering our great similarities to other fauna. For instance, take the humble Dictyostelium discoideum. These amoebae will live on their own, foraging for food. Once it becomes scarce, they signal each other, and come together as one. They form into a vehicle, a slug, and move to another food source. Then some amoebae sacrifice themselves to form a solid stalk, which the rest of the amoebae climb to the top of. They form a spore, which then explodes and scatters all over the food, allowing them to forage as individuals, once again. Read more here [uni-muenchen.de]

      Considering that as technology gets more powerful, more and more people will have the ability to destroy the human race with the flip of a switch, there will have to be some survival mechanism [see above] by which we can scatter ourselves across the [solar system/galaxy/universe/multiverse/spiritual sky] to assure our survival.

      Technology like the internet brings us closer together as one. For those of you who experiment with psychedelics, you may or may not already know that telepathy is possible, so the nature of humanity coming together doesn't necessarily have to be only technological. In fact, boundaries are simply models we place on the world to understand it; we're all together in a big mush anyway, we just don't realize it. Maybe technology and religion aren't that different...

      peace out

      LS
  • Sterling's strength is not his cutting-edge speculation. Sci-Fi writers with cutting-edge speculation and interesting futurist ideas are a dime a dozen. Sterling's strength is in making it fun to read, and in creating a very detailed and believable context for the ideas to be presented in.

  • by dmiller ( 581 ) <djm@mindro[ ]rg ['t.o' in gap]> on Wednesday July 24, 2002 @09:55PM (#3949068) Homepage
    Wasn't it Vernor Vinge who coined the term Singularity [caltech.edu] in relation to exponential technological growth which overwhelms our ability to predict and comprehend?

    His writings are suffused with it. It is a key theme in A Fire Upon the Deep [amazon.com] and Marooned in Realtime [amazon.com]. It also weighs heavily in the background of A Deepness in the Sky [amazon.com]. All IMO are brilliant pieces of SF.
  • A singularity is a feature of a graph. Now, I'm as rational-reductionist as the next geek, but reducing all of human progress to a graph reduces reductionism to the ridiculous!
    • by Jerf ( 17166 )
      Yeah, reducing entire philosophies to an incorrect one-phrase blurb does tend to produce ridiculousness.

      Extropian graphs are like metaphors... they are a way of describing something, but they do not take priority over the real thing. Similarly, those graphs are just a demonstration of the larger point of the difficulty of predicting the near-future in an exponentially-progressing-technology era.

      The graphs flow from the arguments, not vice versa.
    • A singularity is a feature of a graph. Now, I'm as rational-reductionist as the next geek, but reducing all of human progress to a graph reduces reductionism to the ridiculous!

      You have obviously never had to give a presentation to upper management. They are a peculiar species, unable to understand words. They can only be communicated with in a very limited fashion, via colored 3D graphs and charts. Unfortunately most of the important information is lost in the translation...
      • Tips on communicating with upper management via graphs:

        1) Does it go up toward the right? Good!
        2) Does it go down toward the right? Bad!
      • Aha! So Vernor Vinge stories are all written for the pointy-haired bozos! That explains a great deal.
  • Here's an excerpt I particularly enjoyed... for those too lazy to read it, you're missing a fantastic interview. The best so far.

    Now, as to the political old guard approaching obsolescence, for a microcosmic view of what I'm talking about, look at the music and film industries. I could write a book--or at least a chapter--describing the insanities of the studio system, but, in a nutshell, the situation is that big music or movie distributors find it easier to distribute a homogeneous product. It's cheaper to sell a billion copies of one record than a million copies each of a thousand discs. So they're squeezing the variety, thinking that they're selling a physical product--but they're not: they're trying to sell ideas. There is a fundamental contradiction at the heart of the term "intellectual property," because information isn't transferred between brains: it's copied. The music and film industries are finally waking up to the fact that as they squeeze their product range down, people lose interest in the range and look outside it for independent productions. So they're panicking, blaming the new business model, fighting a zero-sum rear-guard action, and trying to ban progress.


    They may succeed. If so, I fear we're doomed to live in a world not unlike that of Rebecca Ore's Outlaw School (Tor, 2000). And I really don't want to go there.
  • A surprised anonymous reader wrote: I'm surprised nobody mentioned this yet...

    And now, are you surprised nobody's commenting on this story? Perhaps there's a pattern here.
  • by HorsePunchKid ( 306850 ) <sns@severinghaus.org> on Wednesday July 24, 2002 @10:48PM (#3949284) Homepage
    <CBG>
    Worst S/N ratio ever!
    </CBG>

    Just to stay on-topic to some extent, here's [asimovs.com] his story in Asimov's [asimovs.com]. Definitely worth a read! Has a sense of humor that reminds me of Stephenson.

    • Dear Slashdot,

      Please remove this deep link immediately or we will sue. Have a nice day.

      Sincerely,

      Asimov's Science Fiction

    • It's weird. And it's talking about things that couldn't happen now. And I didn't notice any that were physically impossible.

      But it seems more fantasy than attempted projection. Sorry, but I don't feel that people would ever choose to create that society. The Shockwave Rider (John Brunner) was more convincing. Also, it implies a much slower rise time for the Singularity than I find probable. (In The Peace War, Vernor Vinge was explicit in saying that he had to insert a war to slow down the rate of technical expansion.)

      I, personally, expect the singularity to arrive before 2030, and I would be surprised if it arrived before 2010. And 2010 is pretty close. In fact, extremely close. But watch the way the news fluctuates from day to day, or look at how fantastic Science Fiction (not fantasy) is becoming, and you'll see signs.

      The current extreme reactions of the government are partially caused by the growing awareness that they can't project very far into the future. It's the butterfly principle writ large. In a stable environment, most of the chaos averages out, and only a little is left. We have been creating an environment where each change amplifies other changes that were in the process of implementation, so you have cascades of changes. Some of them act like fashions, and have no lasting effect. Others, unpredictably, sweep over everything like a phase change. And you can't tell which is which in advance. Well, WE know that computers are one of the big ones, and now everyone knows that. But is nano-tech? Probably. But there's that level of uncertainty. And it's not a yes or no question. Once you decide it's important, you need to decide how it's going to act, and how you should respond to it. And given that it's only one of numerous changes in progress simultaneously...

      Some days I wake up, look at the news, and say to myself "Only the singularity can save us now". Other days I wake up, look at the news, and say to myself "We'd have it made if it weren't for the singularity." Is one true? The other? Both?

      I think that this is what he is trying to convey with his piling of fantastic feature on top of fantastic feature. It doesn't work for me, but then I don't know what could. (True Names comes close, but that's one of a kind.)

      This is what Robert Anton Wilson called the "Jumping Jesus" phenomenon. (Take all the knowledge in the world at 1 AD, and call that the standard unit of 1 Jesus. What's the doubling time? He figured that it was a decreasing function -- each successive doubling took less time than the previous one -- and that the 2-Jesus mark was reached before the Renaissance.)

      I find that interesting and provocative, but the important thing to measure is applied technique, and the closest thing I've seen to that is the number of patents (a grossly misleading statistic). So without a meaningful measure, or even a useful unit, all I've got is a gut feeling. But it seems that the relevant function is increasing quite rapidly. Thus my estimate of 2010 to 2030. I would be moderately surprised if the people of 2031 still spoke a language that I would recognize as English. I expect that much change.
  • by Anonymous Coward
    duh,
    i can't believe that people are saying this story is dead, based on its alleged "low" numbers.
    based on the interview that i read, i would have to say the dude sounds pretty interesting and well read. that and his experience as a writer lead me to believe that this could be a kewl read.
    it sounds to me as if the above posts, in the majority, have not read the actual story.
    what gives? i thought we were supposed to be smart.
  • ...in both the interview and the Lobster story, which was enjoyable enough to make me want to find more of his work.

    Personally, I don't care if the guy is an asshole or a saint, it's his ideas and the mixing of ideas which is interesting and fun.

    Comparing authors is pointless to me in that no two are alike even if they're writing on similar subjects. This Lobster story is still fresh and to say it's just more cyberpunk is both unfair and untrue. It's like saying the punk rock of the early 80's left no room for anything else and all the new punk stuff is therefore just rehashed trash (which is obviously not true.)

    "Lobster" was a good, if challenging, read and the author proves interesting in the interview. I'll be looking for more of his work to read and I'm sure -- I do mean positive - that many of the readers of Slashdot would enjoy both the lobster story and the interview.

    Is there a troll-fest happening tonight? I must 'ave lost me invite!
  • 2000 story (Score:4, Interesting)

    by apsmith ( 17989 ) on Wednesday July 24, 2002 @11:27PM (#3949419) Homepage
    Coincidentally, I saw this /. item just as I had finished reading Stross's "Antibodies", a short story, in a collection of the best science fiction of 2000. I'd never heard of the guy before, but his writing is wonderfully close to my experience and that of most /.'ers - I guess he's a bit new as a recognized author so not many of us know much about him. What I've read so far seems very promising though!
  • From the article:
    On his webpage, he describes his salient characteristics in a compact form of Geek code:


    GTW/CS/L/MD d-- s:+ a? C++++$ UL++++$ UC++$ US+++$ P++++$ L+++$ E--- W+++$ N+++ o+ K+++ !w--- O- M+ V- PS+++ PE Y++ PGP+ !t 5? X-- !R(+++) tv-- b+++ DI++++/++ !D G+ e+++ h++/-/--- r++ z?
    What the heck is all that supposed to mean??
  • by RDW ( 41497 )
    Nobody seems to have linked Charlie's excellent website, so I will: Antipope.org [antipope.org]

    It has lots of Linux, Perl and SF - what more could you want?

  • I do most of the system administration on Charlie's web server (which also serves my website). I wish he'd warn me when we were in danger of being slashdotted...

    Watching the logs it looks like we're OK at the moment, but we don't have all the bandwidth in the world.

    Oh and he just signed my emergency passport application, so I'm not going to say anything else rude about him :-)
    • by charlie ( 1328 ) <charlie.antipope@org> on Thursday July 25, 2002 @04:39AM (#3950149) Homepage Journal
      Hey, nobody warned me that I was going to be slashdotted!

      Incidentally, I have it on good authority that the Oxford English Dictionary is going to cite "Lobsters" as the first use of slashdot as a verb -- turns out that the OED editors have still got this quaint prejudice in favour of hardcopy, so being in a book in the British Library (or US Library of Congress) gets you into the OED, and being on slashdot itself doesn't.

      • I occasionally review technical papers, and people are increasingly using URLs as references. Trouble is, in a large number of cases the URLs are dead links by the time I do the review; by the time of publication they're completely dead.

        At least dead trees don't have the habit of disappearing from existence without warning.

  • As chance would have it, I came across Charlie's writing just a couple of weeks ago - and concur with all the positive comments. He also has an unreleased novel called Scratch Monkey on his website [antipope.org] (right at the bottom), for which you need to request the "keys" [antipope.org] before being able to access [antipope.org] it.

    Scratch Monkey is definitely worth reading.

    PS: hi Charlie! This article is the equivalent of being on the cover of the Rolling Stone, yea?
  • from the interview

    CS: I wrote "Lobsters" and showed it to a friend. He said "that's really cool, but you'll never sell it--the audience would have to overdose on Slashdot for six months before they got it." He was completely right--he just underestimated the number of people out there who overdose on Slashdot!

  • Possibly more noise, but I was intrigued that the interview involves discussions of cyberpunk and free software, a topic I attempted to research and write about for an undergrad postmodernism class. Taking artificial intelligence networks as a metaphor for ownership of information, I tried to argue that history should be openly intact in "code" and that literature is interpreted through collaboration, a groove between writers and readers like a customisable interface. I've uploaded my article [ihug.com.au]; if it is interesting at all, please e-mail [mailto] me -- I haven't found much on this topic so far.
  • Is that Japanese, or just a coincidence? (Ai = love, neko = cat.) (Of course, in the best /. tradition I haven't read the whole thing, or looked too closely at earlier comments...)
    • I think he's just trying to recycle Aibo (which actually comes from the Japanese word 'aibou'="partner"). Aineko doesn't really make a lot of sense, though - it'd be like calling a cat robot "Lovely Cat". I'm sure Sony can come up with something catchier than that...

      • Umm... I spent a chunk of last year working for them, and I suspect they'd have loved the pun... I suspect Charlie was simply after AI + Neko, though....
  • by invid ( 163714 ) on Thursday July 25, 2002 @06:57AM (#3950371)

    The most significant factor in singularity is determining what is actually possible under the constraints of physical laws. In all likelihood the universe is not infinitely malleable to our will. Eventually, what is technically possible will reach a plateau, where nothing more advanced can be made.

    The most straightforward example is faster-than-light travel. The universe seems to have a set speed limit on getting an object from point A to point B. There may be ways around this by warping space, but there are limits on how much space you can warp. Eventually we will reach a point where we cannot get from point A to point B any faster.

    There are probably some people out there saying "But we don't know what the limits are. People used to say it was impossible to go faster than the speed of sound." That's true; we don't know what the limits are, therefore we should act like there are no limits ... yet. But someday we will figure this universe out, and then we'll know the limits. We'll know the fastest speed. We'll know the boundaries of what is possible, and we will build to those boundaries. We'll travel as fast as possible. We'll make ourselves as intelligent as beings can be under the constraints of the universe. We'll live as long as possible. And technology will be at a plateau from which it cannot grow any higher.

    • The confidence with which you make your baseless claims is hilarious.

    • There are ways of effectively travelling faster than light, depending on what your purposes are.

      If you want to build an interworld empire, then you appear to have problems, but if you want to shorten the trip, then several approaches are plausible.

      The simplest one is frozen sleep.
      The fanciest one is to upload yourself into a computer, put yourself on pause, until you reach the destination, and then download yourself into a new body.

      The best one is MacroLife: redesigning things so that you live in a mobile space colony that roams from star to star, grazing on the cometary belts, and occasionally mining the moons or asteroids (usually only needed for major repairs, or to fission the colony into two).

      The physical vessel that will contain the MacroLife should be buildable before the singularity. The design of the society is more dubious. It would need to be quite stable. And if it were too aggressive, then it would be dangerous to create, whereas if it were too passive, then it would be subject to hostile takeovers. Not an easy problem.

      • An interstellar empire would be feasible if there existed sentient beings with a lifespan that was measured in the millions of years. Then the trips between stars at about 10% c wouldn't seem all that long, and there would be enough continuity to maintain an interstellar culture.

      • If special relativity is correct, you don't have to work to survive long trips at speeds near c.

        The passage of time is relative: the dilation factor is gamma = 1/sqrt(1 - (v/c)^2). If your v is small compared to c, the factor is near 1. If your v is, say, .9c, then gamma = 1/sqrt(1 - .81) =~ 2.3. So a 4 light-year trip at .9c would take only about 2 years in ship time, while it would take about 4.4 years from the point of view of those who didn't undergo the acceleration of the trip.

        If you can go .99c, then gamma = 1/sqrt(1 - .9801) =~ 7.1, so the same trip would take about 7 months in ship time, while still taking about 4 years from the planet-bound point of view.

        It should take about a year to get up to near light speed at an acceleration of 1 gravity (c/g is roughly 0.97 years, ignoring relativistic corrections). Of course, you have to get all that energy from somewhere, but I'm sure you can pull together some kind of Bussard Ramjetty thing to do it with, since we're assuming that we're at the singularity.
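
        A minimal sketch to check those numbers (assumes a constant cruise speed; the acceleration and deceleration phases are ignored):

        import math

        def ship_years(distance_ly: float, beta: float) -> float:
            """Proper (on-board) time for a trip at constant speed beta = v/c."""
            gamma = 1.0 / math.sqrt(1.0 - beta**2)
            earth_years = distance_ly / beta  # elapsed time in the planet-bound frame
            return earth_years / gamma        # divided by the dilation factor

        for beta in (0.9, 0.99):
            print(f"4 ly at {beta}c: {ship_years(4.0, beta):.2f} ship-years")
        # -> about 1.94 ship-years at .9c and 0.57 (~7 months) at .99c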
        • You might want to look at the energy requirements of what you are proposing. Even with total conversion (100% efficient), I feel you would find it of dubious practicality.

          Now if you could tap the vacuum's zero-point energy... but that one's probably a fantasy. That's probably one that the universe doesn't permit.
          • With total conversion, just the kinetic energy at .99c is (gamma - 1)mc^2, about 6 times (only ;) the rest-mass energy of the object you are accelerating -- and a photon rocket that has to carry its fuel and decelerate at the far end needs a fuel-to-payload mass ratio of roughly (gamma(1 + v/c))^2, around 200.

            I agree the energy requirements are ludicrous, but we are talking about the capabilities of entities capable of whatever is physically possible.
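
            A rough check of that bookkeeping (assumes total conversion with photon exhaust, accelerating and then decelerating):

            import math

            beta = 0.99
            gamma = 1.0 / math.sqrt(1.0 - beta**2)  # ~7.09
            print(gamma - 1.0)                      # kinetic energy in units of mc^2, ~6.1
            print((gamma * (1.0 + beta))**2)        # fuel-to-payload mass ratio, ~200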
      http://everything2.org/?node=your+radical+ideas+about+radical+ideas+have+already+occurred+to+others

      That url takes care of responding to most of your post.
      Now to comment on the first part of your first sentence:
      'The most significant factor in singularity' -- that wordset is polysemous. Do you mean 'the most significant factor in the character of what life will feel like beyond singularity', 'the most significant factor in whether (and when) there will be a singularity', or 'the most significant factor in the present-day discussion of what it will feel like/whether there will be a singularity'? I could go on -- I'm just getting started: 'the most significant factor in where the present-day discussion of singularity *should be at or should go*', etc.

      You have given us a post with almost infinite interpretations. Polysemy is a good thing, as long as the number of potential interpretations doesn't get out of hand. You have given us a post with *too many* interpretations. Please specify more sharply what you are saying, so that we can attack or praise it specifically.

      - kaidaejin@NoSpam@hotmailcom
  • Check out freesfonline [freesfonline.de] for links to a bunch of his stories. Two of them made it into last year's The Year's Best Science Fiction 18 (Gardner Dozois, ed.), and while one poster mentioned that "Antibodies" was good (it reminded me of The One for some reason), "A Colder War" is far better.
