
Cambridge University To Open "Terminator Center" To Study Threat From AI

If the thought of a robot apocalypse is keeping you up at night, you can relax. Scientists at Cambridge University are studying the potential problem. From the article: "A center for 'terminator studies,' where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University. Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology."
  • by Anonymous Coward on Monday November 26, 2012 @03:09AM (#42091691)

    Of the four things cited, AI is perhaps the least likely to kill us all, seeing as it doesn't exist.

  • by Crash24 ( 808326 ) on Monday November 26, 2012 @03:18AM (#42091735) Homepage Journal
    Relevant - if facetious - commentary by Randall Munroe. [xkcd.com] Seriously though, I think a hostile hard AI could do far more damage as a software entity on the Internet than in physical space.
  • by Anonymous Coward on Monday November 26, 2012 @03:27AM (#42091773)

    Movie-style AI might not exist today. However, we do have drones flying around, the better ones depending only minimally on their human controllers. It won't be too long before our friends at Raytheon etc. convince our other friends in the government that their newest drone is capable of making the 'kill decision' all by itself using some fancy schmancy software.

  • by Anonymous Coward on Monday November 26, 2012 @03:29AM (#42091783)

    It takes only one dumb human to remove the air gap - or to allow a system that removes the air gaps of other systems.

  • by Anonymous Coward on Monday November 26, 2012 @03:51AM (#42091879)

    To summarize the summary of the summary: People are a problem.

  • by wienerschnizzel ( 1409447 ) on Monday November 26, 2012 @04:24AM (#42091971)

    Some things don't scale well. Take the space race - humanity went from sending a pound of metal into low orbit to putting a man on the moon within 12 years. Everybody assumed that by 2012 we would be colonizing the moons of Jupiter. Yet it turned out human space travel becomes exponentially more difficult with distance.

    I'm afraid the same goes for software: the more complicated it gets, the more fragile it is.

  • by durrr ( 1316311 ) on Monday November 26, 2012 @05:16AM (#42092151)

    Of the four things cited, none is "giant rock from space", which is arguably more likely to kill us than the four mentioned combined.

  • Daily Mail Source? (Score:4, Insightful)

    by BiophysicalLOVE ( 2650233 ) on Monday November 26, 2012 @05:53AM (#42092267)
    If the Daily Mail is your source for any story, it is in your best interests to dismiss it instantly.
  • by Captain Hook ( 923766 ) on Monday November 26, 2012 @07:26AM (#42092567)
    My understanding of those robo-turrets is that they have sufficient image processing to identify a human, but nowhere near enough to distinguish friend from foe or to infer anything from actions and expected behaviours. That's why they send video feeds back to the control center: there is still a human in the loop to decide on firing.

    That doesn't mean the turret couldn't be left in free-fire mode in case of an all-out ground attack from the NK line, shooting at anything that moves - but that only makes it a very complicated, reusable anti-personnel mine. There isn't much "AI" there, only shape recognition.

    What people tend to mean by proper AI in this context is identifying humans, recognising friend or foe through appearance or behaviour, and choosing an appropriate course of action without human interaction - a bit like ED-209 from Robocop: in a room full of people, it identified the guy holding a gun as the possible threat, and only the guy holding the gun. Of course, when the gun was put down it didn't change its threat assessment, so there were bugs in the system :)
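    The human-in-the-loop logic described above can be sketched roughly as follows. All names here are hypothetical illustrations, not any real turret software: shape recognition flags a human-like target, and the fire decision is deferred to an operator unless the system is switched into the free-fire mode the comment mentions.

    ```python
    # Hypothetical sketch of the human-in-the-loop turret logic described
    # above. All names are illustrative; no real turret API is implied.

    def classify(frame):
        """Stand-in for shape recognition: True if a human-like silhouette
        is detected. Here just a lookup on pre-labelled frames."""
        return frame.get("human_shape", False)

    def decide(frame, operator_approves, free_fire=False):
        """Return the turret's action for one video frame.

        Shape recognition alone cannot tell friend from foe, so in normal
        operation the fire decision is deferred to a human operator who
        reviews the video feed. In free-fire mode the system shoots at any
        human-shaped target - effectively a reusable anti-personnel mine.
        """
        if not classify(frame):
            return "idle"
        if free_fire:
            return "fire"  # no human in the loop
        return "fire" if operator_approves(frame) else "hold"

    # The operator callback stands in for a human reviewing the feed.
    frame = {"human_shape": True}
    print(decide(frame, operator_approves=lambda f: False))       # hold
    print(decide(frame, operator_approves=lambda f: True))        # fire
    print(decide(frame, operator_approves=None, free_fire=True))  # fire
    ```

    The ED-209 bug maps onto this sketch directly: a classifier that never re-evaluates the frame after the threat changes will keep returning "fire".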
