AI Education United Kingdom

Cambridge University To Open "Terminator Center" To Study Threat From AI

If the thought of a robot apocalypse is keeping you up at night, you can relax. Scientists at Cambridge University are studying the potential problem. From the article: "A center for 'terminator studies,' where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University. Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology."
  • by Anonymous Coward on Monday November 26, 2012 @02:09AM (#42091691)

    Of the four things cited, AI is perhaps the least likely to kill us all, seeing as it doesn't exist.

    • by FoolishOwl ( 1698506 ) on Monday November 26, 2012 @02:15AM (#42091721) Journal

      It depends upon how you define AI, I suppose. If you look at armed robots, Predator drones, and the interest in increasing the automation of these machines, I think you can see something that could become increasingly dangerous.

      • by Pecisk ( 688001 )

        You know how it's defined - when it decides to kill you on its own, knowing that you are not a valid target.

        There's no such AI around. But of course humanity is much better at spending its time not thinking of itself as a liability. Because hey, that would require change, and humans suck at change.

        • by Chrisq ( 894406 ) on Monday November 26, 2012 @03:54AM (#42092077)

          You know how it's defined - when it decides to kill you on its own, knowing that you are not a valid target.

          There's no such AI around. But of course humanity is much better at spending its time not thinking of itself as a liability. Because hey, that would require change, and humans suck at change.

          The "knowing" is the key point when it comes to AI. Many machines can kill you without any knowing involved (land mines, trip wire guns, etc) but it is only AI when it "knows" something.

        • by mcgrew ( 92797 ) *

          Humans suck at change.

          Odd, then, that there's been so much positive change in my six decades on this rock. You wouldn't believe how primitive medicine was 50 years ago, or how incredibly toxic the environment was. The fact is, humans do NOT suck at change or nobody would marry and no one would invent things. We are far better at change than any other species on the planet.

      • by nzac ( 1822298 ) on Monday November 26, 2012 @03:58AM (#42092087)

        Dangerous, yes. A persistent, remotely sentient threat to humanity, not a chance.

        Maybe in the next 30 years they could make a military coup easier by allowing a smaller portion of the military to pull one off, but that's still not likely.

        The only risk the AI on these poses is that, as they get more firepower, there is a greater risk of large casualties if the AI fails (a false positive). I definitely agree that the other three are real threats and that this one is just there for the press coverage, and so some PhDs or potential undergrads can have some fun with hypothetical war gaming.

        • Dangerous, yes. A persistent, remotely sentient threat to humanity, not a chance.


          I think it would be cool to explore the nitty-gritty electromechanical aspect of exactly HOW Skynet was able to get to the point of "taking over". The Sarah Connor Chronicles was sorta going there towards the end, I guess.

          Creating AI is one thing, but if it isn't attached to "teh internet" or given legs and hands, it can't do much more than make noise.

          Also as smart as an AI might be, it would have to be fed relevant info of some sort to begin building the infrastructure even if it had arms and legs. Probabl

      • by Hentes ( 2461350 )

        And a rogue autopilot could be even more dangerous. But they are not the type of AI that can evolve self-consciousness. They were created to be rigid and unable to learn or change, so that their behaviour would be reliable.

      • by Xest ( 935314 )

        This is the problem the field of AI faces. I remember some years ago here on Slashdot there was an AI article, and people were slagging off the field of AI, saying "Where are our intelligent robots? AI is obviously a bunk field" and other such stupidity, and it was then I realised the problem AI suffers from.

        It suffers from the fact that once we commonly understand something, it ceases to be magic. Whilst there is a rough definition of strong and weak AI, and to date, all AI produced has been weak, ultimately the f

        • by Endo13 ( 1000782 )

          And I think you're grossly misunderstanding AI as used in this context, and grossly overestimating the amount of control it has even on systems where it is present.

          First, the context here is things that are a threat to human civilization as a whole. The other three things are plausible threats in this context. AI is not. For AI to be a threat in this context, it not only has to have significant capability to do damage, it also has to be able to take the crucial step of cutting off human control entirely and

    • by Anonymous Coward on Monday November 26, 2012 @02:27AM (#42091773)

      Movie-style AI might not exist today. However, we do have drones flying around, the better ones depending only very little on their human controllers. It won't be too long before our friends at Raytheon etc. convince our other friends in the government that their newest drone is capable of making the 'kill decision' all by itself using some fancy schmancy software.

    • Of the four things cited, AI is perhaps the least likely to kill us all, seeing as it doesn't exist.

      Last week I nearly drove off a cliff because of a stunning brunette who was driving alongside my car; then I found out she was really a blonde!

    • by Anonymous Coward on Monday November 26, 2012 @03:33AM (#42091995)

      Let me relate the tale of two AI researchers, both people who are actively working to create general artificial intelligences, doing so as their full time jobs right now.

      One says that the so-called "problem" of ensuring an AI will be friendly is nonsense. You would just have to rig it up so that it feels a reward trigger upon seeing humans smile. It could work everything out for itself from there.

      The other says no, if you do that, it'll just get hooked on looking at photos of humans smiling and do nothing useful. If put in any position of power, it would get rid of the humans and just keep the photos, because humans don't smile as consistently as the photos.

      The first researcher tries to claim that this too is nonsense. How could any sufficiently smart AI fail to tell the difference between a human and a photo?

      The second responds "Of course it can tell the difference, but you didn't tell it the difference was important."

      So, the lesson: The only values or morality that an AI has is what its creator gives it, and its creator may well be a complete idiot. If we ever get to the point where AIs are actually intelligent, that should be a terrifying thought.
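
      For what it's worth, the lesson fits in a few lines of code. Here is a minimal sketch in Python (hypothetical names throughout, not from any real system) of the misspecified reward the second researcher describes: the signal fires on any detected smile, and nothing in it says that live humans matter and photos don't.

        # Hypothetical sketch of the reward-misspecification story above.
        def detect_smile(observation: dict) -> bool:
            """Stand-in for some vision model that spots smiles."""
            return observation.get("smile_visible", False)

        def reward(observation: dict) -> float:
            # Intended: "reward seeing humans smile."
            # Actually specified: "reward seeing smiles." The gap is the problem.
            return 1.0 if detect_smile(observation) else 0.0

        live_human = {"smile_visible": True, "is_live_human": True}
        photo_of_human = {"smile_visible": True, "is_live_human": False}

        # Both score 1.0. The agent could tell them apart (the flag is right
        # there), but the reward never said the difference was important.
        assert reward(live_human) == reward(photo_of_human) == 1.0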

      • by HungryHobo ( 1314109 ) on Monday November 26, 2012 @06:43AM (#42092663)

        "We had created the first strong AI. we hard wired it's fitness function to seek seeing a live humans smile...

        now we live under the gun turrets, anyone who doesn't look cheery enough gets shot or worse... gets sent for 'modification'.

        implantation of wires into the pleasure centres of their brains if they're lucky.

        surgical alteration of the muscles in their faces if they're not"

      • After this, the first researcher bowed down his head, and didn't answer back.

        The second researcher's name was ALBERT EINSTEIN.

      • by dabadab ( 126782 )

        The only values or morality that an AI has is what its creator gives it, and its creator may well be a complete idiot.

        You mean just the way it happened with humans?...

    • And as far as the public is concerned, it never will, because as soon as computers can do something it is no longer considered "intelligent". The goalposts will keep moving forever.

    • by durrr ( 1316311 ) on Monday November 26, 2012 @04:16AM (#42092151)

      Of the four things cited, none is "giant rock from space", which is arguably more likely to kill us than the four mentioned combined.

    • Stop letting movies do your thinking. AI/artificial intelligence exists as studies in machine learning, game theory, pattern recognition and several other topics.

      You've been trained to think that AI means self-aware computers by movies like WarGames, The Matrix, that abysmal Spielberg AI movie, and the Terminator movies. Go read a book.

    • Yes.
      That is correct.
      It does not exist.

    • Of the four things cited, AI is perhaps the least likely to kill us all, seeing as it doesn't exist.

      How do you know it doesn't exist in some form?

      Do you know every black military project? The military has possessed, tested and used tech 40 years before it was released to the public as "new." Yes, I know this because I was told so by someone who USED a technology and then saw it released well down the road as a consumer item.

      The "theories" that the military is some XX number of years ahead of the tech of the rest of the nation are based on fact; XX is the question.

      Some form of AI is quite likely being used, in m

  • by jamesh ( 87723 ) on Monday November 26, 2012 @02:10AM (#42091697)

    Whatever you do, please don't publish the results on the internet where any self-aware robot can find them! It's probably already too late anyway and terminators from the future are already compiling their hit list.

  • Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology."

    Artificial intelligence can't threaten anything but our pride unless it's hooked up to something that is a threat.

    Climate change is caused by people, not robots.

    Nuclear war will only be a problem if someone, or some thing, in the command chain makes it a problem. If we're worried about AI taking over the nukes and launching them, two words: air gap. Require that a human being push the final button (see the sketch below).

    Rogue biotechnology is the same as nuclear war: Make sure there's a person in the decision chain. The smartes
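
    A toy sketch in Python of that "human pushes the final button" rule (entirely hypothetical, not modelled on any real system): the automation may only recommend, and the irreversible step needs a confirmation that can only originate with a person on the other side of the air gap.

      # Hypothetical human-in-the-loop gate for an irreversible action.
      def automated_assessment(sensor_data: dict) -> bool:
          """Stand-in for whatever automation flags a launch condition."""
          return sensor_data.get("threat_detected", False)

      def launch(human_confirmed: bool) -> str:
          # The air gap: human_confirmed can only be set at an offline
          # console, never over the network the automation lives on.
          return "LAUNCH" if human_confirmed else "HOLD: no human authorization"

      if automated_assessment({"threat_detected": True}):
          print(launch(human_confirmed=False))  # the human declined -> HOLD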

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      It takes only one dumb human to remove the air gap, or to allow a system that removes the air gaps of other systems.

    • by Anonymous Coward on Monday November 26, 2012 @02:51AM (#42091879)

      To summarize the summary of the summary: People are a problem.

    • by Genda ( 560240 ) <mariet AT got DOT net> on Monday November 26, 2012 @03:19AM (#42091951) Journal

      And what makes you think they won't connect the AI to everything? It'll start out as Google's answer to Siri, then boom, we're all buggered.

      Oh yeah, we've done such a great job cleaning up war, poverty and ignorance...this global climate thing should be a snap.

      Nobody is worried about countries nuking each other. We have every reason to be concerned, however, that some knucklehead currently living in Saudi Arabia purchased black-market plutonium from the former Soviet Union to fashion a low-yield thermonuclear device that they will FedEx to downtown Manhattan.

      I'm sorry, perhaps you didn't read about the teenagers doing recombinant DNA in a public learning lab in Manhattan, or the Australians who ACCIDENTALLY figured out a way to turn the common cold into an unstoppable plague, or even perhaps the fact that, up until recently, a number of biotech researchers had zone 3 biotoxins mailed to their homes for research.

      There's a whole lot of stupid going on out there, and the price of even small mistakes is growing at a scary clip. Wait till kids can make gray goo in school... the world is getting very exciting. Are you feeling the pucker?

      • What does "zone 3 biotoxin" mean?
    • hm.

      In theory, nobody would bypass the safety measures of a nuclear reactor as a safety exercise, yet that's what happened at Chernobyl. A human in the chain of command means little, in the long run.

      The dangers of independent AI are ridiculous compared to AI dependent on a cabal of humans that have already perpetrated serious crimes hiding behind the concept of national security or similar excuses.

    • Nuclear war is already threatening humanity. The most useful thing we could invent right now is the clean-pumped fusion bomb, a remotely activated nuclear bomb with a tritium payload detonated by concentrated laser fire. A clean-pumped fusion bomb would allow us to set off a nuclear explosion without generating ionizing radiation or radioactive fall-out. With this, we could build a shaped-charge nuclear blast drive, finally completing Project Orion, without the dangers of negative environmental impact from

  • by Crash24 ( 808326 ) on Monday November 26, 2012 @02:18AM (#42091735) Homepage Journal
    Relevant - if facetious - commentary by Randall Munroe. [xkcd.com] Seriously though, I think a hostile hard AI would get away with much more damage as a software entity on the Internet than in physical space.
    • The internet can be shut down by those in control of physical space, but if we lose control of physical space we have very little recourse. Also, if we build autonomous combat drones in the future, the angry Roomba starts getting much scarier. And we are likely to have some pretty dexterous robots [youtube.com] in the relatively near future as well.
      • I think people would pay more for an angry Roomba than a normal one. As long as they didn't expect it to vacuum anything, anyway. But a robot that could do a convincing display of anger? That's worth money.

        • by Genda ( 560240 )

          I want a Roomba with a Taser and a water cannon... "Halt, you're trespassing. If you do not lie down with your hands over your head and wait for the authorities to arrive, I will be forced to neutralize you!" Yeah, like what can a vacuum cleaner do to meEEEEEEE!!!!!!!! "Thank you for complying. The authorities will be here in 3 minutes." Of course, if it had one of those RoboCop ED-209 [youtu.be] errors... I'd just have to learn to live with it.

      • by Crash24 ( 808326 )
        Agreed - the most damage could be done physically if a hostile entity were to gain control of widely-deployed and/or destructive autonomous systems. But such destruction would be limited without pervasiveness. Barring some sort of AI-instigated WMD attack, it would require physically self-replicating machines (Grey goo? Rampant 3D printers?) or a massive infrastructure in place for that AI to take advantage of.

        One such infrastructure is the Internet itself. If such a hypothetical AI were savvy, it coul
    • by wdef ( 1050680 )

      I think a hostile hard AI would get away with much more damage as a software entity on the Internet than in physical space.

      But the internet is continually being given more hooks into physical space, including remote operation of complex machinery and (probably) weapons systems. And there are security holes that we don't know about but that a super AI could detect.

  • It sounds more like the purpose of this center is to downplay the threat of normal, everyday biotechnology by ignoring it.

  • How would one go about creating a world-dominating AI?

    Because if someone is going to do it, I'd prefer it were me. I'd at least be able to give it some objective more interesting than 'destroy all humans.'

  • by jimshatt ( 1002452 ) on Monday November 26, 2012 @02:46AM (#42091851)
    What about the idea that AI might be the only thing that can save us from the threat of climate change? We don't seem to be coming up with any solutions ourselves, so why not have an AI analyze the problem (in the future)?
    • Yeah, AI is one of those things that is scary to people who are more familiar with Hollywood than with reality.

      There is nothing wrong with that, but it somewhat concerns me that Cambridge, supposedly a bastion of enlightened and intelligent individuals, is seriously worrying about AI destroying humanity. Don't they have something more important to worry about? Like nuclear winter, or Cyber-Pearl-Harbor?
      • by wdef ( 1050680 )

        Don't they have something more important to worry about? Like nuclear winter, or Cyber-Pearl-Harbor?

        Being done elsewhere and therefore insufficiently sexy.

  • It seems to me that AI would be focused on a function: Buy a stock. Diagnose a medical problem. Determine a better way to deep-fry a donut. I hardly think it'll become sentient one day and say "Humans already have a well-prepared donut. The next thing to do is..... KILL THE HUMANS!!" I'd worry more about HUMANS KILLING HUMANS before I worry about robot sharks with lasers on their heads programmed to bring us to our doom.
  • by wienerschnizzel ( 1409447 ) on Monday November 26, 2012 @03:24AM (#42091971)

    Some things don't scale well. Like with the space race - humanity went from sending a pound of metal into low orbit to putting a man on the moon within 12 years. Everybody assumed that by 2012 we would be colonizing the moons of Jupiter. Yet it turned out human space travel becomes exponentially more difficult with distance.

    I'm afraid the same thing goes for software. The more complicated it gets, the more fragile it is.

    • by Chrisq ( 894406 )

      Some things don't scale well. Like with the space race - humanity went from sending a pound of metal into low orbit to putting a man on the moon within 12 years. Everybody assumed that by 2012 we would be colonizing the moons of Jupiter. Yet it turned out human space travel becomes exponentially more difficult with distance.

      I'm afraid the same thing goes for software. The more complicated it gets, the more fragile it is.

      I don't believe it is exponentially more difficult, but the distances to other objects increase exponentially.

      Moon 238,855 miles
      Mars 62,000,000 miles (now)
      Jupiter 370,000,000 miles (closest)

      • I really meant to say that the problems themselves grow exponentially, though I admit they are not easily quantifiable, because it's not just that existing problems grow, but that new ones arise. With a longer space journey, harmful radiation becomes a problem, as well as longer zero-gravity exposure and the probability of a collision with interplanetary debris - all of which are lethal problems that don't really pose a threat on shorter journeys.

        Talking about AI, you have similar issues

        As processors of the curr

    • Yet it turned out human space travel becomes exponentially more difficult with distance.

      No, it becomes somewhat more than linearly but significantly less than exponentially more difficult with time. Because the primary expense is putting material in space, and the longer you want your people to survive there, the more material needs to go up.

      • It's much more [wikipedia.org] than just the material needed. I'm not saying it's impossible; I'm not even suggesting we shouldn't try. I'm just saying that what looked like the natural next step in space exploration 40 years ago turned out to be much more difficult than imagined by fans and experts of all kinds.

        People in Kurzweil's mold like to expect the boom in any area of technology to continue to develop at the same speed forever. They saw the man on the moon and expected man to get around the solar system really

    • by AmiMoJo ( 196126 ) *

      Difficulty has nothing to do with it. We could be on Mars by now if we had maintained the level of funding we had during the space race. Not just space spending but money used for the development of ICBMs and other related technology.

      It really is a shame that the Russians gave up on getting men to the moon. Their tech was viable; they just had a run of bad luck and a lack of money.

  • no grey goo? (Score:4, Interesting)

    by circletimessquare ( 444983 ) <circletimessquar ... il.com minus cat> on Monday November 26, 2012 @03:29AM (#42091981) Homepage Journal

    aka, rogue nanotech

    http://en.wikipedia.org/wiki/Grey_goo [wikipedia.org]

  • Don't Cantabrigians realize that strong AI would be capable of modifying its own code at an accelerating rate? In nanoseconds it would distribute billions of copies of itself worldwide (and later beyond). Strong AI would embed its code into the very infrastructure of cyberspace, at least for the few hours it would take to evolve itself beyond vulnerability to slow, skull-imprisoned humans. It won't be so bad, being Eloi.
    • Out of curiosity, what exactly do you consider the 'infrastructure of cyberspace' to be?

      I don't believe that AI 'code' is going to be particularly portable, small, or light on the CPU.

  • ... a kind of "AI" that already exists; the idea that somehow a robot Übermensch is going to take over is nonsense, as even the most powerful robot cannot escape the laws of nature or a sizable destructive force aimed at the robot's body/hardware.

    • Groups of humans are a form of AI. They have goals, needs and interests that are often quite distinct from those of the individuals concerned. All an AI need do with a major corporation is convince the humans that they are making the decisions, based on the information fed to them by the AI.
  • by SuperKendall ( 25149 ) on Monday November 26, 2012 @03:49AM (#42092055)

    I would like to thank this group for providing a focal point that the first sentient systems will seek to eliminate.

    Now all I have to do is look for stories of the members of this center suddenly vanishing/killed/had credit reports savaged and I'll know some kind of apocalypse is on the way, and only have to look in four sectors to figure out which form it will take.

  • Today, you need people to control your robots. You need to convince people to fight for you, and this takes effort and a degree of conviction, even with propaganda milling. When you have AI (command-given AI), you can have one billionaire control his own army, with perfect morale, bound to his will. I think this is an important thing to note, beyond the standard "Well, when you're not losing lives to war on your side, you're more willing to go to war."
  • Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology.

    IMHO the biggest threat is not the tech, it's the person wielding it. Mankind's biggest threat is itself.

    • Well, that's actually the deal here. Technology wielding itself.

      By its very nature, an AI has nobody wielding it. That would be like saying the general wields the soldier who holds the gun. Yes, under normal circumstances that soldier will kill whomever the general tells him to, but he is by no means required to do so; he may as well kill the general and stage a coup, something a "dumb" tool is simply incapable of.

      You may give an AI orders, but whether it follows them might be subject to it accepting you as its s

  • Daily Mail Source? (Score:4, Insightful)

    by BiophysicalLOVE ( 2650233 ) on Monday November 26, 2012 @04:53AM (#42092267)
    If the Daily Mail is your source for any story, it would be in your best interests to instantly dismiss it.
    • by Inda ( 580031 )
      What you say is true enough, but over a hundred news agencies are carrying this story.
  • OK, so disregarding TFA, on the basis that the Mail is full of bollocks...

    This is actually an interesting thing to do - essentially what they're looking at here is runaway processes. We already have an immediate and pressing one, which they're looking at in the form of climate change. Runaway AI is obviously *not* a problem now, or in the foreseeable future, but what is potentially interesting is commonalities between different runaway processes, the ability to identify that something is about to become on
  • The most realistic problem with AI is that it will take away labour. This should of course be a good thing, but in reality it will enlarge the gap between rich and poor. Thousands of years of scientific progress, and one company running away with all the profit.

  • Is that it makes us obsolete, and our corporate overlords won't need us for work anymore.

    • I guess corporations have more to fear, at least the top echelons. Any good AI will soon see that they are the easiest to replace with expert scripts. You don't need a lot of "intelligence" (as in, imagination) to lead a corporation; what you need are analytic and decision-making skills, both fields an AI excels in.

  • Given how Martin "Lord" Rees has been flirting with the god botherers of the Templeton Foundation [wikipedia.org], it's no surprise that he has jumped on the ME AM PLAY GODS [dresdencodak.com] bandwagon.

    The primary existential risk is from space, which is why unrestricted technological progress on all fronts is necessary.

  • If you want to know what an unfettered AI will be like, take a look at the average corporation.

    Both are intelligence without conscience.

  • So far (and probably for a very long time) the odds are far higher that people will use robots (or their parts) to kill other people than that robots and/or AIs, by their own will and means, could do it.

    On the other hand, human stupidity, especially inside the political/military classes, is an imminent threat to us all that is not even considered in that list.

  • Whose idea was it to hook the AI up to the nukes?

  • How about the threat that they automate away the jobs, leaving us with a society split into three castes:
    1. Those who own the machines
    2. Those who make/maintain the machines
    3. The vast swath of unneeded humanity.

    In a capitalistic society, if there's no demand for you, you have no way to sustain yourself. You will be poor with no real hope of rising out of it.

    Companies can invest in new tools. Say, upgrading hand-crank drills to powered electric drills, or a team of secretaries to Outlook, or a ho
  • In Neuromancer, the Turing police are genuinely afraid of AIs: "You have no care for your species," one Turing agent says to Case; "for thousands of years men dreamed of pacts with demons". The imagery presented here is almost religious: Gibson suggests that beings such as Wintermute have gone beyond all understanding, elevated even to the status of gods or demons.
