Books AI

100-Year-Old James Lovelock Predicts Humans Will Be Replaced by Self-Aware AI (nbcnews.com) 166

A new book by futurist James Lovelock argues, "Our supremacy as the prime understanders of the cosmos is rapidly coming to an end. The understanders of the future will not be humans but what I choose to call 'cyborgs' that will have designed and built themselves."

An anonymous reader quotes NBC News' Mach blog: Lovelock describes cyborgs as the self-sufficient, self-aware descendants of today's robots and artificial intelligence systems. He calls the looming era of their dominance the Novacene -- literally, the "new new" age...

Unlike technoskeptics, including University of Louisville computer scientist Roman Yampolskiy, Lovelock thinks it unlikely that our machines will turn against us, Terminator-style. And unlike utopians like futurist Ray Kurzweil, he doesn't envision humans and machines merging blissfully into a union that some call the singularity. Rather, Lovelock views the rise of technology through an evolutionary lens, in keeping with his decades of research and thinking about ecological and biological systems. He also brings the unique perspective of a scientist who just marked his 100th birthday, with a deep awareness of changing scientific fashions and with nothing left to prove. It's an outlook that pushes him to conclusions at once optimistic and deeply disturbing.

Once established, the cyborgs will remain dominant on our planet. "The Novacene," Lovelock says, "will probably be the final era of life on Earth..." Lovelock believes that advances like AlphaZero mean we don't have to look to the distant future to see how the story will unfold. "The crucial step that started the Novacene was, I think, the need to use computers to design and make themselves," he writes. "It now seems probable that a new form of intelligent life will emerge from an artificially intelligent precursor made by one of us, perhaps from something like AlphaZero."

Once we get used to being treated like houseplants, the early days of the Novacene might not be so bad. For one thing, Lovelock says, cyborgs and humans will have a shared interest in protecting Earth from climate change, because neither we nor they can tolerate temperatures beyond about 50 degrees Celsius (122 Fahrenheit). If humans fail to find ways to mitigate the effects of global warming, then the cyborgs will need to do it. "They will, of course, bring something new to the party, probably in the field of geoengineering -- large-scale projects to protect or modify the environment. Such projects will be well within the capacity of electronic life," Lovelock writes. For instance, the cyborgs might cover large areas of Earth's surface with mirrors to reduce the amount of absorbed solar heat.

As the Novacene progresses, the cyborgs might decide to remake Earth's ecosystem. With no need for oxygen or water, they might create a new world that is better for them but lethal for us... Given their complete dominion over Earth, the cyborgs would become our planet's final inhabitants.


Comments:
  • Those systems are wholly deterministic.

    • by MrNaz ( 730548 ) on Monday August 26, 2019 @03:50AM (#59124650) Homepage

      TLDR:
      Old guy who doesn't understand technology makes outlandish predictions that would have made for a great sci fi novella in 1965.

      • True; old people usually spend more time reflecting on the past than anticipating the future. Beyond a certain age, the brain's plasticity is gone forever.

        As far as cyborgs go, it's nearly November 2019 now; I was hoping we would have this [ytimg.com], but instead we're stuck with this... [youtube.com] Jesus wept.

      • Most likely. He specifically cites things like AlphaGo, which is the result of the game's computational/solution space being fully mapped. Instead of using just search trees, though, it uses a combination of search trees and a neural network (essentially a set of learned proxy values that the algorithm can leverage to arrive at a weighted confidence decision). It's still wholly deterministic (a minimal sketch of that combination follows this comment).

        the jury is still out on whether humans are wholly deterministic logic engines too or not, (and ther
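        A minimal, hypothetical sketch of the combination described above: a deterministic game-tree search that defers to a learned evaluation (a "value network" stand-in) at its depth limit. The toy Nim-like game and the value_net() rule are invented for illustration and are not AlphaGo/AlphaZero itself; the point is only that the whole pipeline is deterministic -- same position in, same value out.

        #include <stdio.h>

        /* Stand-in for a trained value network: scores a position in [-1, 1]
         * from the point of view of the player to move. Here it is a fixed
         * arithmetic rule (an assumption for the sketch), which is enough to
         * show that the evaluation step is deterministic. */
        static double value_net(int stones)
        {
            return (stones % 3 == 0) ? -1.0 : 1.0;
        }

        /* Depth-limited negamax: exact tree search near the root, the learned
         * evaluation at the frontier. Players take 1 or 2 stones per move;
         * whoever takes the last stone wins. */
        static double negamax(int stones, int depth)
        {
            if (stones == 0)
                return -1.0;              /* previous player took the last stone */
            if (depth == 0)
                return value_net(stones); /* defer to the "neural network" */

            double best = -1.0;
            for (int take = 1; take <= 2 && take <= stones; take++) {
                double score = -negamax(stones - take, depth - 1);
                if (score > best)
                    best = score;
            }
            return best;
        }

        int main(void)
        {
            for (int pile = 1; pile <= 10; pile++)
                printf("pile=%2d  search value=%+.1f\n", pile, negamax(pile, 4));
            return 0;
        }

        Run it twice and it prints the same ten values both times: tree search plus a learned evaluation is still a deterministic function of its input.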

        • there's also enough nondeterministic noise in human hardware to make things interesting and unpredictable.

          But does noise make things better?

          Either way, natural noise is present in nearly all real life inputs. An AI like a self-driving car with cameras picks up plenty of noise.

          • by Dunbal ( 464142 ) *

            But does noise make things better?

            "Better" belongs to pseudoscience. It's entirely subjective, a value judgement, and not quantifiable. Better for you is not better for me. Noise makes things DIFFERENT. That can be quantified and argued. But if you purport to show some skew towards or away from a set of arbitrary values you are engaging in morality and religion, not science.

            • "Better" belongs to pseudoscience. It's entirely subjective, a value judgement, and not quantifiable

              You don't even try. Let's take a concrete example of a self-driving vehicle. You can define "better" in various quantifiable terms, like safety record, smoothness of drive, and ability to correctly navigate the roads.

              For other systems, you can find other ways to define "better". We do this all the time for people. Better lawyers win more cases, better stock traders make more profitable deals. Better CEOs produce more shareholder profit.

        • Re: (Score:3, Interesting)

          by chthon ( 580889 )

          I would like to add the following to this.

          Deterministic means same input, same output.

          However, humans are able to recognise this fact, especially when things go wrong. (Some people don't; they will redo the same thing over and over again and still wonder why things don't work out the way they think they should.)

          When things go wrong or are incorrect, we are able to change our behaviour so that same input will deliver another kind of output.

          My take on things is that real AI can only be obtained by evolution

          • Deterministic means same input, same output.

            No, deterministic output depends on input + internal state. And the state is affected by previous inputs (i.e. memory of earlier events). It's trivial for a deterministic system to try out different approaches if the first attempt fails. A roomba can do that (a minimal sketch follows this comment).

            Determinism leads to the implications of Gödel's Law: either you have a system that is completely consistent, but then there are things that it cannot describe, or else it can describe anything, but then it is not consistent any more

            Goedel's theorem is about the limitations of formal axiomatic systems capable of modelling basic arithmetic. It has nothing to do with brains or intelligence or determinism. See: https://en.wikipedia.org/wiki/... [wikipedia.org]

            Also, there are plenty of math problems that h
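            A minimal sketch of the point above, assuming a toy robot controller: a wholly deterministic program can still "try something else" when an attempt fails, because its output depends on internal state (a memory of past failures), not on the current input alone. The three headings and the bump input are invented for illustration.

            #include <stdio.h>
            #include <stdbool.h>

            typedef struct {
                int heading;   /* internal state: which direction to try next */
            } Robot;

            /* Same (input, state) always yields the same (output, next state). */
            static int step(Robot *r, bool bumped)
            {
                if (bumped)
                    r->heading = (r->heading + 1) % 3;  /* deterministically pick another */
                return r->heading;
            }

            int main(void)
            {
                Robot r = { 0 };
                bool bumps[] = { false, true, true, false, true };

                for (int i = 0; i < 5; i++)
                    printf("bump=%d -> heading %d\n", bumps[i], step(&r, bumps[i]));
                return 0;
            }

            Replaying the same bump sequence always produces the same headings, yet the controller visibly changes its behaviour after each failure.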

            • The deal is that humans have noise in their internal state, due to how we store and retrieve data, as well as how we process it to begin with.

              A twist of a protein here, chemicals from a parasite (or from food) there -- the state is not predictable, because it cannot be properly known.

              I am not suggesting there is some kind of "magic" inside a human that makes them special, just that they are not the same class of computation system as would be an AI that is hosted on traditional computing hardware. (Not unles

              • The deal is that humans have noise in their internal state

                Right, but is that noise helpful, or does it just interfere with accurate decision-making? In either case, it would be quite trivial to add a dedicated noise source to a computer.

                • If you mean, "Would humans be better if they had perfect recall, instead of relying on faulty procedural generation to store memories?" then I don't have an answer; I lack a suitable human that stores reliable memories instead of procedural metadata, and does so in a lossless format. Such a human does not exist. We might succeed in making one if we keep on the current track with some AI research paths, but I would say the answer is "no" because of the debilitating life consequences people with "Superior a

                • ...it would be quite trivial to add a dedicated noise source to a computer.

                  Yes - but would it be truly random noise?

                  • You can get truly random noise from a diode.
                    https://en.wikipedia.org/wiki/... [wikipedia.org]

                     The issue is that the noise inside a human is complex noise, interacting with the functional components used in computation and data storage inside the human. Things like quantum uncertainties piling on top of each other when a protein folds, or the consequences of a charged particle from the sun zipping through the human at random.

                    Those kinds and sources of noise would have to have analogous effects on the AI.

                   • You can make truly random noise if you want. A simple high-bandwidth source can be obtained from a cheap web cam with the lens cover on and the gain turned up.

                     But you could also make a pseudo-random generator capable of fooling any statistical test we can come up with. (A minimal sketch of wiring a noise source into a program follows this comment.)
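                     A minimal sketch, assuming a Unix-like system, of what "adding a dedicated noise source" can look like in practice: /dev/urandom exposes the kernel's entropy pool (fed by device timing jitter and, on many CPUs, a hardware RNG). The path and the byte count are illustrative.

                     #include <stdio.h>
                     #include <stdlib.h>

                     int main(void)
                     {
                         unsigned char noise[16];
                         FILE *f = fopen("/dev/urandom", "rb");

                         if (!f || fread(noise, 1, sizeof noise, f) != sizeof noise) {
                             perror("reading /dev/urandom");
                             return EXIT_FAILURE;
                         }
                         fclose(f);

                         for (size_t i = 0; i < sizeof noise; i++)
                             printf("%02x", noise[i]);   /* 16 bytes of non-repeatable noise */
                         printf("\n");
                         return 0;
                     }

                     A program like this prints a different hex string on every run; feed those bytes into a decision and the decision stops being repeatable.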

              • Not to mention that there's simply "noise" inherent in our inputs at all times.
                The sheer number of external inputs we have and the constant activity in everything around us make for a practically limitless amount of entropy.
        • by e3m4n ( 947977 )

          There is no way humans are logic engines; have you seen some of the people who shop at Walmart? I'm telling you, some make you wonder if

          #include <stdio.h>
          int main(void)
          {
              printf("hello world!\n");
              return 0;
          }

          is the extent of their base programming. Hell, there used to be a pretty active People of Walmart site until we started accepting that as a norm.

        • If AlphaGo were using neural networks the way they should be used for strong AI, there would be no need for search trees at all.
          • I'm sure it's possible, but there's no good reason to be artificially constrained to particular implementations. Human players implement a poor tree search on top of a neural network, and it's not a great solution. A traditionally programmed tree search is much faster, more accurate, and uses fewer resources.

            • The artificial constraint is the rigidity of decision trees! You will never have general-purpose knowledge while relying on decision trees; you'll just be playing a game with strict rules.
              • You will never have general-purpose knowledge while relying on decision trees; you'll just be playing a game with strict rules.

                Do you have reason to believe your own brain is not following the strict rules of physics? Apparently, that's not a problem.

                • My brain doesn't use decision trees. No one did surgery on me at birth and installed a part of my brain with the single purpose of processing gravity. I learned all this through my life's experiences.
      • "Old guy who doesn't understand technology makes outlandish predictions that would have made for a great sci fi novella in 1965."

        It's even worse. The title suggests the 'prediction' was made 100 years ago; after reading a bit, it turns out it's just the author who is that old, and his prediction is from last week.

      • TLDR: Old guy who doesn't understand technology makes outlandish predictions that would have made for a great sci fi novella in 1965.

        There was another guy who made great sci fi back in the 1940s. Went by the name of Orwell.

        One would think we would have learned by now about dismissing outlandish predictions. It only took a few decades to turn a sci fi writer into a prophet.

      • Reality has a very long way to go before it catches up with the best SF of the 1940s, let alone the 1960s.

        But then someone who talks about "sci fi" probably doesn't know much about science fiction (normally abbreviated "SF").

      • Re: (Score:3, Insightful)

        by Whibla ( 210729 )

        TLDR:
        Old guy who doesn't understand technology makes outlandish predictions that would have made for a great sci fi novella in 1965.

        I rarely get 'nasty' on /. but seriously, who the fuck rated your comment as insightful, and exactly what technology have you invented that you can dismiss his contributions so casually?

        In his own words:

        "Among my inventions are detectors and other devices for use in gas chromatography. The argon detector was the first practical sensitive detector. It realized the potential of the gas chromatography. The electron capture detector was invented in 1956 and is still among the most sensitive of chemical analytic

    • Most people, sadly, behave as little more than automatons, so this is entirely predictable.
    • Humans are wholly deterministic.
    • Those systems are wholly deterministic.

      No, they aren't. Computers are made of electronics, electronics is made of quantum goings-on, and quantum goings-on are not deterministic.

      Look up "entropy source". There is at least one and probably three in your computer

      • Doesn't do you any good if those entropy sources are excluded from providing input.
        There are lots of sources of entropy in my computer, yet it still faithfully follows instructions with highly precise timing (a minimal sketch of the difference follows this comment).
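        A minimal sketch of that distinction: the same library PRNG is fully repeatable when seeded with a constant, and only stops being repeatable when something external (here just the clock, standing in for a real entropy source) is wired in as input. The seed value 12345 is arbitrary.

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        static void print_sequence(unsigned seed)
        {
            srand(seed);
            for (int i = 0; i < 5; i++)
                printf("%d ", rand() % 100);
            printf("(seed=%u)\n", seed);
        }

        int main(void)
        {
            print_sequence(12345u);                 /* identical output on every run */
            print_sequence((unsigned)time(NULL));   /* differs run to run: external input */
            return 0;
        }

        The entropy is available in the machine either way; the program only behaves non-deterministically when that entropy is actually used as input.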
  • by Kokuyo ( 549451 ) on Monday August 26, 2019 @03:46AM (#59124642) Journal

    ...but good riddance.

    The question then becomes whether humanity is capable of creating a successor that doesn't have the same weaknesses.

    I doubt we're that capable, to be honest. If we were, you'd expect us to do a better job as a species in the first place.

    • by Viol8 ( 599362 ) on Monday August 26, 2019 @04:58AM (#59124740) Homepage

      If you're worried about our poor record on the environment, then what do you think a "species" (no idea what you'd call AI) that has zero reliance on the biological world would do to the planet? Sufficiently robust AIs could exist on Venus, never mind a totally fucked-up Earth. So don't assume our demise is good news for the rainforest, pandas, etc.

  • Interesting way of naming things, to define "cyborg" as an amalgam of robot and AI without any biological part, when the 'org' in cyborg stands for 'organic'...

    • Cyborg means cybernetic organism.
      • True, but it refers to some form of living organism that has robotic parts. Whereas an android (meaning manlike) is a robot that looks and acts like a human.
        See: http://sf-encyclopedia.com/ent... [sf-encyclopedia.com]

        The term "cyborg" is a contraction of "cybernetic organism" and refers to the product of human/machine hybridization. David Rorvik popularized the idea in his nonfiction As Man Becomes Machine (1971), writing of the "melding" of human and machine and of a "new era of participant evolution". Elementary medical cyborgs – people with prosthetic limbs or pacemakers – are already familiar, ...

    • There should be an organic component to cyborgs: for example, DC's Cyborg is a human with enhancements, and the original Terminator is a robot with organic tissue. This guy doesn't seem to understand that.
    • Interesting, to decide that " because neither we nor they can tolerate temperatures beyond about 50 degrees Celsius (122 Fahrenheit)" without knowing a thing about how such composite beings would be organized. It reminded me a little of Dr John Lilly deciding that the machines were against humanity because humans were largely made of water, and that machines were subject to rust.

  • ...does that mean that it happened elsewhere? Alien (to us) civilizations will have themselves been replaced by cyber-AI. It is only logical.

  • by OrangeTide ( 124937 ) on Monday August 26, 2019 @04:26AM (#59124698) Homepage Journal

    Will the human species change faster than evolution would predict? I think we can agree this is likely. From this point forward we'll be intentionally modifying ourselves and adapting the definition of what it means to be human. That means hacking our genome or augmenting ourselves with mechanical, electronic and biologic components. Other than the technological hurdle, is it really so far of a leap to go from corrective laser eye surgery to an eidetic memory implant?

    I suspect the whole business will creep up on us rather slowly, and we'll grow used to people with extra fingers or webbed feet maybe only 100 years from now, whereas evolution would need tens to hundreds of thousands of years to produce a useful adaptation.

    • by znrt ( 2424692 )

      From this point forward we'll be intentionally modifying ourselves and adapting the definition of what it means to be human.

      actually, we have been doing this for thousands of years already, it just accelerates exponentially.

      what's human? did a human 100, 1000 or 10,000 years ago contemplate another human the same as a human would now? there is no absolute meaning of "human", it evolves around an ever changing narrative that tries to accommodate the present. at some point in history we were even multiple "humans". i think we will diverge again, given enough time. then the term "human" will have lost most of its glamour.

      • There is a universal definition of human today. But most people find the definition too broad to be satisfying.

        Those other species aren't what I'm talking about. And I'm not suggesting that we'll ever diverge into new species. I believe our technological prowess and social structure wouldn't allow for it.

    • "That means hacking our genome or augmenting ourselves with mechanical, electronic and biologic components."
       
      Who says? "Futurists"? You guys just take stuff for granted.

  • So we will be replaced by machines that are based on 100 years of human design and skip millions of years of evolution?
    I wouldn't be too sure about that. AlphaGo is impressive, but I still can't hold a conversation with it about the weather after it beat me at Go.

  • Having just finished the first 5 series of Black Mirror, I'm inclined to the view that most uses of AI interacting with people are somewhere between cruel and nasty.

  • It is all about sustainability. Humans can replicate perfectly well without any help from another species. Even if all technology broke down, babies would still be born. In the case of cyborgs: not so much. They need a specialized infrastructure that *at the moment* can only be run by humans. And even the most simple tasks, like collecting garbage (hello, Roomba), are hard for them. And even if they solved all that, what happens when a big catastrophe like an asteroid hits Earth and destroys their mai

  • Fermi Paradox? (Score:3, Insightful)

    by ClickOnThis ( 137803 ) on Monday August 26, 2019 @04:41AM (#59124722) Journal

    If what he says is true (including that this has happened already on other worlds), then a cyborg species that can live forever makes the Fermi Paradox all the more inconvenient. Such a species would be able to travel across the galaxy just by taking its time. So why aren't they already here on Earth?

    • The cyborg species may not have sufficient motivation to spread. Also, space is still big. It's still unclear whether a motivated species with superior problem solving skills can actually travel through the galaxy using their local resources.

    • So why aren't they already here on Earth?

      Well, assuming they exist at all (which I personally very much doubt), that's easy to explain: space is really big, so at current speeds it would take millions of years to cross the Milky Way alone, and billions of years to arrive from nearby galaxies (other than the Magellanic Clouds). So if they were not from our galaxy, the universe is probably not yet old enough for them to reach us at current technology speeds.

      However, even if they are from our galaxy it is worth remembering that there are abo

      • even our radio/TV bubble announcing our existence is about 100 lightyears in radius vs. 100klyrs for the size of the galaxy and not that easy to detect given the feeble signal strength

        Our TV signals are so weak that it would take dedicated effort to pick them up beyond the orbit of Pluto (think Arecibo scale), never mind a few light-years away. And that's for analog. Modern OFDM-modulated signals are much harder to pick up, because they just look like noise to an uninformed observer. (A rough flux calculation follows this comment.)
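        A rough, illustrative back-of-the-envelope supporting that point: received flux from an isotropic transmitter falls off as P / (4 * pi * d^2). The 1 MW transmitter power and the round-number distances below are assumptions, not measurements.

        #include <stdio.h>

        int main(void)
        {
            const double PI = 3.14159265358979;
            const double P  = 1.0e6;            /* transmitter power, watts (assumed) */
            const double AU = 1.496e11;         /* metres */
            const double LY = 9.461e15;         /* metres */

            const double d_pluto = 39.5 * AU;   /* roughly Pluto's mean distance */
            const double d_star  = 4.2 * LY;    /* roughly the nearest star */

            printf("flux at Pluto:        %.2e W/m^2\n", P / (4.0 * PI * d_pluto * d_pluto));
            printf("flux at nearest star: %.2e W/m^2\n", P / (4.0 * PI * d_star * d_star));
            return 0;
        }

        This prints on the order of 1e-21 W/m^2 at Pluto and 1e-29 W/m^2 at the nearest star, which is why picking up stray broadcasts takes Arecibo-scale dedicated effort even inside the solar system.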

      • The problem with the radio/TV argument is that we only had about a century of large-scale broadcasting that wasn't practically indistinguishable from random noise.

        * on-off carrier keying? Easy to detect.

        * amplitude modulation (and SSB)? Easy to detect.

        * frequency modulation? Easy to recognize if you're even halfway looking for it.

        * CDMA, vestigial sideband, OFDM, etc.? 99%+ random noise, EVEN IF you know it's there.

        Inevitably, RF transmission is moving towards digital modes that approach statistical noise, p

    • Because the technology needed for FTL makes coming here irrelevant. That's the best solution to it I have found. There must be someplace else more interesting.
    • by shanen ( 462549 )

      My interpretation of the Fermi Paradox is that they are here and curious about us, but just watching. Possibly gambling quatloos on whether we'll create such a beast before exterminating ourselves. The odds on our lasting that long dropped badly when the cheap CRISPR kits hit the market.

      From another perspective, imagine there was an aggressive GAI, where "aggressive" means eager to propagate itself and willing to exploit any available resources towards that objective. Considering the scale of geologic time

    • by urusan ( 1755332 )

      We could just be the first in this galaxy, and our immortal cyborg descendants will prevent all other naturally evolving sentient life from emerging in this galaxy by using up all its resources. By the anthropic principle, no other naturally evolved species will ever see this galaxy. Even if other galaxies have their own cyborg progenitor species and cyborgs, they're too far away for us to interact with them, aside from perhaps a few galaxies like Andromeda, and even then only over galactic time

      • by shanen ( 462549 )

        So how about the hypothesis that the dark matter consists of the galaxies where all of the stars have been encased in Dyson spheres? We can't even detect the infrared emissions, because they also figured out how not to waste that energy...

  • With the host in this context being the economy, not Earth.

    A futurist who neither knows enough of nor cares about economics, whether it's business economics or world economics, is bound to make absurd predictions. Lovelock's cyborg domination theory, as plausible as it may seem from a narrow, predominantly technological perspective (even when combined with a thorough understanding of nature and evolution), seems to assume an imminent possibility of free access to the world's resources for intelligent autonomous

  • Comment removed based on user account deletion
    • by Viol8 ( 599362 )

      "Humand did not "Replace" apes."

      No, but done quite well in destroying their natural habitats, hunting them and pushing them to the brink of extinction without a specific goal of replacing them.

  • somewhere i saw a beautifully ironic cartoon about this. two human scientists in a cage, typing "i think therefore i am". the robots discuss them:

    "i think therefore i am. they thought, therefore we are. we keep them as pets, now"...

  • So did he also predict one of these cyborgs travelling back in time to kill the mother of the future resistance leader who's preventing them from finally ridding the planet of these pesky humans?

    It's just that I've heard this one before...

  • I don't call our machines "cyborgs": they're humanity. They're our children; we made them. Therefore I view them as the evolution of our own species, only sped up billions of times. Or, said another way, pure luck bootstrapped wetware humanity over many aeons, and we bootstrapped the hardware evolution of it within 150 years. But it's just the continuation of our species, and I reckon it's a Good Thing[tm] as, presumably, machines won't have to be burdened by the physical and emotional shortcomings of their

  • Natural selection will be as strong in the computer world as in the natural world. It already happens. Some software is successful; other software dies. And software needs computers to run on. And computers need resources. And humans consume resources.

    See
    http://computersthink.com/ [computersthink.com]

    There is a nice podcast there.

    • by Dunbal ( 464142 ) *
      There's nothing "natural" about say, software publishers striking deals and colluding with hardware manufacturers to intentionally obsolete previous versions of their software, for example. If anything this is artificial selection. You are assuming some sort of ideal scenario where all computer hardware that can run software is equivalent and not itself influenced by exogenous forces. It's kind of like saying I will decide which ants live or die by burning all the ones I don't like with a magnifying glass,
  • predictions (Score:5, Interesting)

    by Tom ( 822 ) on Monday August 26, 2019 @05:47AM (#59124806) Homepage Journal

    Predictions of long-term futures are always a tricky thing... We also thought we would have flying cars. Or steam-powered giant robots. Every generation extrapolates whatever is the hot thing of the day and thinks the future will be more of that.

    None of them ever account for the unpredictable new thing that actually becomes the future. Steam engines were replaced by electricity and petrol-based transportation. The number of people who had even an inkling of computers before, say, 1930 is probably countable on one hand. And even early computing pioneers didn't envision that almost everyone would one day be connected 24/7 to a global computer network.

    AI is the "hot thing" today. That doesn't mean it will mean much in a hundred years.

    • by lorinc ( 2470890 )

      To quote the Danish proverb that is now in every machine learning lecture: "Prediction is difficult, especially when dealing with the future."

  • Lovelock thinks it unlikely that our machines will turn against us, Terminator-style.

    Well thank God for that...

    As the Novacene progresses, the cyborgs might decide to remake Earth's ecosystem. With no need for oxygen or water, they might create a new world that is better for them but lethal for us.

    Oh

  • Nothing left to do now but hook up with Sarah Connor.
  • The problem with AGI for the past 60 years has been that it's always 20 years away. A little bit like fusion power, except the latter looks to be manageable because at least we understand how it works in theory.

    We don't understand how intelligence works even in theory. We can't even give a definition to it.

    So, sorry, this prediction is worthless at the moment but it surely adds more fuel to the AI hype.

  • This has to stop. It is creating completely false expectations and fears.

  • Interesting that, 75+ posts in so far, nobody has mentioned "The Matrix."

    Isn't this pretty much the same plot? The only difference being that instead of being eradicated, humans are used as some kind of psychic food.

    Or for that matter The Terminator. Skynet seemed to have the same agenda.

    I think I have read a number of sci-fi novels where run-away AI takes over and the story is about the battle to beat it back. Oh, and anime series as well. I can't even remember all the titles.

    It isn't that it

  • Not sure about Lovelock's ultimate predictions, but just think about what is going to happen when Big Business starts to use Super AI to maximize their profits in the marketplace. Big Finance, Big Pharma, Big Resources, Big Tech, etc., will come up with the most Byzantine economic, social and financial machinations, assisted or controlled by Super AI, manipulating us and our existing systems and institutions, in order to dominate the marketplace and make ever-bigger profits. And we thought Facebook was bei
  • People look at developments like AlphaZero and cry "look, it's smarter than humans." In reality this computer is better than humans at one single task: playing a game. Humans are really good at multiple tasks. Someone can be a great artist and writer, rebuild automotive engines, and still drive the car he/she just rebuilt. The key, I believe, is that people miss the human/animal ability of "general intelligence": the ability to take education, experience and skill from one discipline and apply it to another.
  • We built machines to take care of the brawn. We are building machines to take care of the brains. But there is a third requirement: motivation.

    It takes brawn, brains, and motivation.

    So the thinking machines are going to provide their own motivation? We sure don't want them to have glands, so what's that leave? We won't want them making up a list of possible things they can do and then rolling dice to figure out which they want to do. That would be freaking crazy.

    Intelligence is a tool to be used tow
  • We will have practical, ubiquitous fusion power generation long before we have full-on, self-aware, fully reasoning, human-level 'AI'.

    Remember you heard it here first.
    As I have been saying, we don't even begin to understand how a living brain actually works, and especially we don't understand how a human brain produces the phenomena we refer to as 'thought', 'self-awareness', and so on.
    We don't even have sufficient instrumentation to begin to map how a healthy, living human brain works, not anywhere ne
    • by geekoid ( 135745 )

      " we don't even begin to understand how a living brain actually works"
      Not really relevant.

      I can look at a car and, with enough money, build a copy without ever knowing how internal combustion works.
      We see that just simulating the brain at a simple level acts like the equivalent brain.

      Your hubris is also in assuming that the human brain is the only form of intelligence. The more we study other creatures, the less special it turns out we are.

      While I call bullshit on Lovelock (again), it's not for the reason you state.

      • Congratulations, you just proved beyond a shadow of a doubt that you're not a sentient being. Or you're Just Another Shitty Troll; same difference.
  • Anybody who really understands "Machine Learning" and "AI" knows that sentience is harder than the wildly optimistic futurists realize. Creativity is not well understood and is, I suspect, a highly underrated human capability.

    That said, smart, self-aware, creative machines are going to happen at some point. They will force us to reconsider what defines life. Kurzweil is right in that their evolution will be vastly faster than biological evolution and that in many ways they will comprehend more than a human.

    App
  • Lovelock is the science version of Sylvia Browne. Nearly as accurate, too.

  • People like to think that machines are 'better' than humans. NOPE. Processing speed per ounce is nowhere near organic levels yet, and already advances are slowing.

    Humans win all the time. And will always do so. If silicon-based life were better than humans, it would have evolved.

    The main issue is that people tend to think of humans' major strengths as weaknesses. The soft, unarmored nature of our bodies? It has a ton of advantages. Armor should be strapped on, not built in - by definition it gets damaged and

    • Exactly right. All of these futurists remind me of Roy Batty when he meets Dr. Tyrell in 'Blade Runner'. They want immortality. His creator basically tells him the facts of life: "ain't happening".

      The human brain does what it does with three pounds and 100 watts of power, and it's self-replicating using readily available materials on this planet. It is the product of millions of years of evolution, finely tuned for its environment, encased in a body possessing a dazzling array of sensors and mechanisms for

  • It will be fine until the machines do something that negatively affects us humans ... and _then_ we will resist. It's what humans do; we don't go down without a fight.
    And they will counter with escalating technology until ... Terminators.
