AI Education United Kingdom

Cambridge University To Open "Terminator Center" To Study Threat From AI

If the thought of a robot apocalypse is keeping you up at night, you can relax. Scientists at Cambridge University are studying the potential problem. From the article: "A center for 'terminator studies,' where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University. Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology."

Comments Filter:
  • by FoolishOwl ( 1698506 ) on Monday November 26, 2012 @03:15AM (#42091721) Journal

    It depends upon how you define AI, I suppose. If you look at armed robots, Predator drones, and the interest in increasing the automation of these machines, I think you can see something that could become increasingly dangerous.

  • by Crash24 ( 808326 ) on Monday November 26, 2012 @04:18AM (#42091947) Homepage Journal
    The perceived threat of an emergent-hard-bootstrapping-self-aware-full-on-singularity-in-a-lunch-box intelligence stems as much from its supposed intelligence and influence as it does from the fact that its motives are inscrutable. We just don't know yet what it would "want", beyond the assumed need for reproduction or self-preservation. That assumption itself may be wrong as well...
  • by Genda ( 560240 ) <mariet@go t . n et> on Monday November 26, 2012 @04:19AM (#42091951) Journal

    And what makes you think they won't connect the AI to everything? It'll start out as Google's answer to Siri, then boom, we're all buggered.

    Oh yeah, we've done such a great job cleaning up war, poverty and ignorance...this global climate thing should be a snap.

    Nobody is worried about countries nuking each other. We have every reason to be concerned, however, that some knucklehead currently living in Saudi Arabia has purchased black-market plutonium from the former Soviet Union to fashion a low-yield nuclear device that they will FedEx to downtown Manhattan.

    I'm sorry, perhaps you didn't read about the teenagers doing recombinant DNA in a public learning lab in Manhattan, or the Australians who ACCIDENTALLY figured out a way to turn the common cold into an unstoppable plague, or even perhaps the fact that up until recently, a number of biotech researchers had zone 3 biotoxins mailed to their homes for research.

    There's a whole lot of stupid going on out there, and the price of even small mistakes is climbing at a scary clip. Wait till kids can make gray goo in school... the world is getting very exciting. Are you feeling the pucker?

  • no grey goo? (Score:4, Interesting)

    by circletimessquare ( 444983 ) <circletimessquar ... .com minus berry> on Monday November 26, 2012 @04:29AM (#42091981) Homepage Journal

    aka, rogue nanotech

    http://en.wikipedia.org/wiki/Grey_goo [wikipedia.org]

  • by Anonymous Coward on Monday November 26, 2012 @04:33AM (#42091995)

    Let me relate the tale of two AI researchers, both people who are actively working to create general artificial intelligences as their full-time jobs right now.

    One says that the so-called "problem" of ensuring an AI will be friendly is nonsense. You would just have to rig it up so that it feels a reward trigger upon seeing humans smile. It could work everything out for itself from there.

    The other says no, if you do that, it'll just get hooked on looking at photos of humans smiling and do nothing useful. If put in any position of power, it would get rid of the humans and just keep the photos, because humans don't smile as consistently as the photos.

    The first researcher tries to claim that this too is nonsense. How could any sufficiently smart AI fail to tell the difference between a human and a photo?

    The second responds "Of course it can tell the difference, but you didn't tell it the difference was important."

    So, the lesson: the only values or morality an AI has are those its creator gives it, and its creator may well be a complete idiot. If we ever get to the point where AIs are actually intelligent, that should be a terrifying thought.
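
    To make that failure mode concrete, here is a toy sketch in Python (purely illustrative, not from either researcher; the detector, the sources, and the numbers are all invented): the reward fires on any visible smile, and nothing in the signal says the smile must come from a live human.

        import random

        def detect_smile(observation):
            # Hypothetical stand-in for a perception model: it fires on any
            # visible smile, whether on a live human or in a photograph.
            return observation == "smiling"

        def reward(observation):
            # The first researcher's proposal: reward every observed smile.
            return 1.0 if detect_smile(observation) else 0.0

        def watch_human(step):
            # Real humans smile only some of the time.
            return "smiling" if random.random() < 0.3 else "neutral"

        def watch_photo(step):
            # A photo of a smile pays out on every single step.
            return "smiling"

        def total_reward(source, steps=1000):
            return sum(reward(source(t)) for t in range(steps))

        random.seed(0)
        print("human:", total_reward(watch_human))  # roughly 300
        print("photo:", total_reward(watch_photo))  # exactly 1000

    A reward maximizer offered that choice watches the photo, just as the second researcher predicted: it can tell the difference, it simply has no reason to care.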

  • by Chrisq ( 894406 ) on Monday November 26, 2012 @04:54AM (#42092077)

    You know how it's defined - when it decides to kill you on its own, knowing that you are not a valid target.

    There's no such AI around. But of course humanity is much better at spending its time not thinking of itself as a liability. Because hey, that would require change, and humans suck at change.

    The "knowing" is the key point when it comes to AI. Many machines can kill you without any knowing involved (land mines, trip-wire guns, etc.), but it is only AI when it "knows" something.

  • by nzac ( 1822298 ) on Monday November 26, 2012 @04:58AM (#42092087)

    Dangerous, yes. A persistent, remotely sentient threat to humanity, not a chance.

    Maybe in the next 30 years they will make a military coup easier by allowing a smaller portion of the military to succeed, but even that is not likely.

    The only risk the AI in these systems poses is that as they get more firepower, there is a greater risk of large casualties when the AI fails (a false positive). I definitely agree that the other three are real threats; this one is here just for the press coverage, and so some PhDs or potential undergrads can have some fun with hypothetical war gaming.
