New Report Cites Dangers of Autonomous Weapons

HughPickens.com writes: A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that autonomous weapons could be uncontrollable in real-world environments, where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries. The report contrasts these completely automated systems, which have the ability to target and kill without human intervention, to weapons that keep humans "in the loop" in the process of selecting and engaging targets. "Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems," Mr. Scharre writes.
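The "brittleness" Scharre describes is the failure mode of systems that do exactly what their rules say, even when the situation has left the rules' envelope. A toy sketch (all names and thresholds hypothetical, not from the report):

```python
def classify_contact(radar_cross_section_m2, speed_knots):
    """Toy rule-based classifier, brittle by construction.

    It encodes an assumption ("warships are big and fast") that silently
    misfires on anything the rule's authors didn't anticipate.
    """
    if radar_cross_section_m2 > 1000 and speed_knots > 20:
        return "warship"
    return "unknown"

# A large, fast cargo ship satisfies the same rule as a warship:
print(classify_contact(radar_cross_section_m2=5000, speed_knots=24))  # warship
# A warship loitering at low speed is missed entirely:
print(classify_contact(radar_cross_section_m2=5000, speed_knots=5))   # unknown
```

The system never errors out; like the p.m./a.m. alarm clock, it confidently produces the wrong answer.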

The United States military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin's Long Range Anti-Ship Missile, which is described as a "semiautonomous" weapon. The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship. As an alternative to completely autonomous weapons, the report advocates what it describes as "Centaur Warfighting." The term "centaur" has recently come to describe systems that tightly integrate humans and computers. Human-machine combat teaming takes a page from the field of "centaur chess," in which humans and machines play cooperatively on the same team. "Having a person in the loop is not enough," says Scharre. "They can't be just a cog in the loop. The human has to be actively engaged."
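The distinction the report draws comes down to where the engagement decision sits. A minimal sketch of the "centaur" idea, with the human as an explicit decision point rather than a cog (names hypothetical):

```python
def engage(target, confirm):
    """Engagement gate: `confirm` is a callable representing the decider.

    In a 'centaur' design the machine proposes and the human disposes;
    replacing the human callable with the machine's own classifier is
    exactly what makes the system fully autonomous.
    """
    if confirm(target):          # the decision point
        return f"engaging {target}"
    return f"holding fire on {target}"

# Human-in-the-loop: a person reviews every proposed engagement.
human_operator = lambda target: False   # operator declines this one
print(engage("unidentified vessel", human_operator))

# Fully autonomous: the machine's classifier is the only confirmation.
onboard_classifier = lambda target: True
print(engage("unidentified vessel", onboard_classifier))
```

Scharre's point is that the first variant only helps if the human is genuinely engaged, not rubber-stamping whatever the machine proposes.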

Comments:
  • by Anonymous Coward

    Having any type of control link makes a weapon susceptible to multiple types of attacks. This will drive the push for more autonomy and AI.

    • by gweihir ( 88907 )

      And in addition, the enemy will really love this, as instead of buying their own weapons they can just hack and re-purpose those of the enemy. Ideal terrorist weapon too. Anybody that thinks the government can secure these systems is off their rocker.

      • In the short term, possibly. In the long run the enemy won't be able to maintain them because they don't have spanners that are 17/23 the width of King Henry's willy.

        Unless the US invades Singapore.

        • by gweihir ( 88907 )

          Of course, a smart opponent will only hijack the weapons shortly before use. That way the US will do all the maintenance!

    • by Rakarra ( 112805 )

      Having any type of control link makes a weapon susceptible to multiple types of attacks. This will drive the push for more autonomy and AI.

      That's unfortunate. The US and the Soviets avoided World War III a few times when humans made judgment calls and ignored machine readings.
      Nov 1979: NORAD systems indicated a full-scale Soviet attack had been launched. A computer had been placed into test mode, where it generated an Armageddon scenario; the other computers interpreted this as real events.

      Sep 1983: The Soviet nuclear early warning system alerted of an impending nuclear strike. Stanislav Petrov did not report the strike as real, judging it a false alarm.
  • by mspohr ( 589790 ) on Monday February 29, 2016 @04:53PM (#51610543)

    They needed a high level official report to figure this out?

    • They needed a high level official report to figure this out?

      Yes, because otherwise they would have created their own version of skynet [wired.com], even calling it SKYNET.

    • Of course. The military won't believe anything that hasn't been stated in a high-level official report costing tens of millions of dollars.

  • by Anonymous Coward on Monday February 29, 2016 @04:56PM (#51610563)

    What about the KH-22 (or AS4 "Kitchen") [wikipedia.org] that the Soviets/Russians have actually fielded - since 1962.

    The Kh-22 uses an Isayev liquid-fuel rocket engine, fueled with TG-02 (Tonka-250) and IRFNA (inhibited red fuming nitric acid), giving it a maximum speed of Mach 4.6 and a range of up to 600 km (320 nmi). It can be launched in either high-altitude or low-altitude mode. In high-altitude mode, it climbs to an altitude of 27,000 m (89,000 ft) and makes a high-speed dive into the target, with a terminal speed of about Mach 4.6. In low-altitude mode, it climbs to 12,000 m (39,000 ft) and makes a shallow dive at about Mach 3.5, making the final approach at an altitude under 500 m (1,600 ft). The missile is guided by a gyro-stabilized autopilot in conjunction with a radio altimeter.

    Fly 600 km, then hit whatever it happens to find. Potentially with a nuclear warhead.

    Oh, that's right. That doesn't fit into typical thoughtless anti-US bullshit. Sorry to mess up your narrative.

    • ...not to mention the current American workhorse, the cruise missile, whose current incarnation is initially guided by GPS but uses automatic target recognition (artificial intelligence) once close to the target.
      • Some current and historical anti-ship missiles have the capacity to take target designation and/or mid-course guidance from a designating vessel; the Tu-95 RTS 'Bear D', with its 'Big Bulge' radar, is one example of such a vessel. However, in the absence of such direction, or if the missile does not have the capacity for direction by uplink, the choice of target is entirely up to the logic of the missile's seeker, making it just as autonomous as the Harpoon or TASM.

        • Re: (Score:2, Insightful)

          by rtb61 ( 674572 )

          I can give you a current real-world autonomous weapon system the US has not only fully funded but put, quite chaotically, into the field: proxy fighters. Quite a mess they made with that autonomous weapon system, and a real warning of what can happen when you attempt the same digitally. Of course we have yet to see the full repercussions of that, say a TOW missile on a power boat taking down an oil tanker, manned either by those the weapon was given to or by those it was sold on to (you

      • It's not anything nearly as fancy as AI.

        For the land attack flavors:

        TERCOM/GPS flies along a preprogrammed path; DSMAC takes over for final target comparison/verification.

        Overwater flight is static planned just prior to launch to route around known vessels / structures. Once it reaches the shoreline, the pre-planned mission takes over.

        If an anti-ship variant, once the platform reaches the final static waypoint, it fires up the active seeker and starts looking for a target within the AOU. ( it is here
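The phased guidance described above (a static preplanned route, with the active seeker fired up only in the terminal area) can be sketched roughly like this. This is a simplified illustration, not the actual flight software; all names are hypothetical:

```python
def fly_mission(waypoints, seeker):
    """Fly a preplanned route; only the final leg hands control to the seeker.

    `waypoints` is the static route planned before launch.
    `seeker` is a callable that searches the terminal area of uncertainty
    (AOU) and returns a target, or None if nothing is found.
    """
    for wp in waypoints[:-1]:
        pass  # midcourse: dead-reckon/GPS to each preplanned point

    terminal_area = waypoints[-1]
    target = seeker(terminal_area)   # active seeker runs here, and only here
    return target if target else "self-destruct (no target in AOU)"

route = ["wp1", "wp2", "terminal box"]
print(fly_mission(route, seeker=lambda area: None))
print(fly_mission(route, seeker=lambda area: "ship contact"))
```

Note that every part of the flight except the last line follows choices a human made before launch; the controversy is entirely about that final seeker call.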

    • by LWATCDR ( 28044 )

      Much the same as other anti-ship missiles, including the Harpoon.
      Guided weapons have been around since WWII. Torpedoes are a prime example.

      • Missiles and torpedoes still have a human in the loop responsible for identifying the target and hitting the launch button. A truly autonomous weapon system would identify its own targets after the launch button is pushed.

        • by LWATCDR ( 28044 )

          And there is a human in the loop who launches any missile.
            And no, you are wrong. The Captor mine is a good example. You set it on the seabed and it waits for a ship with the right signature to pass over it. It then fires a torpedo that homes in on the ship or sub.

    • The differences (Score:4, Insightful)

      by DrYak ( 748999 ) on Monday February 29, 2016 @06:53PM (#51611291) Homepage

      The missile is guided by a gyro-stabilized autopilot in conjunction with a radio altimeter.

      Fly 600 KM - then hit whatever it happens to find.

      That is the main difference between classical ballistic/guided missiles and the autonomous weapons mentioned here.

      A classical missile mainly flies to a specific point (decided in advance by a human being) and goes ka-boom on whatever happens to be at that point.
      If the intelligence the human was acting on is precise (i.e., the exact coordinates of the target are known), the missile hits exactly the target the human intended. If the intelligence is wrong, the missile still goes exactly where it was told; it is the human who asked for the wrong thing.
      Think throwing a rock at a target, or shooting a target with an arrow. Only with more complex gadgets.

      Autonomous weapons, on the other hand, are deployed to or reach a region (which is what the human decided) and then on *their own* start looking around for potential targets, which they engage on their own autonomous decision. The human is not the one taking the final decision in the grand scheme of things; it's the AI running inside the autonomous weapon. The weapon is at risk of misinterpreting what it perceives and wrongly deciding to engage.
      Think Aliens-movie-style automatic gun turrets.

      So the historic precedent for this kind of unwanted destruction isn't so much the classical missiles you mention (where the commander giving the order to fire more or less knows what is going to happen).
      The closest historic precedent is *mines*: objects that are placed on a human's order but then activate and explode without much further control by the humans who ordered them, with a very strong risk of harming the wrong target (leftover mines that explode and maim the local civilian population long after the conflict is finished). That's why mines have been banned by several countries.

      That's why it's really risky to leave the decisions to an AI (which could be hacked or spoofed).

      • by khallow ( 566160 )

        Think throwing a rock on a target, shooting a target with an arrow. Only with more complex gadgets.

        I think you missed the part where the missile's on board guidance tracks the missile onto whatever it happens to find.

        That's why it's really risky to leave an AI (That could be hacked or spoofed) to make the decisions.

        Seems to me that you're just as dead if the missile hits you because you're there and radar-reflective, rather than because you're there and it thought you needed killing.

  • by iggymanz ( 596061 ) on Monday February 29, 2016 @04:59PM (#51610581)

    land mines are autonomous weapons: no human is in the decision loop to fire when the preset conditions for detonation are met.

    http://www.un.org/en/globaliss... [un.org]

    • You're correct, though the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction treaty was signed in 1997 and has since accumulated 133 signatory parties, all doing their part keeping those EOD boys and girls excited.

      http://www.un.org/disarmament/... [un.org]

      AP mines are recognized as pure evil, and we no longer make and sell them to dictators and such (or at least that's what we SAY). Our evils are much smarter now, and the smarter evi

      • so funny, the evil world dictator is us

        and of course in general we arm groups affiliated with terrorist groups; the Obama admin did that in Syria, for example. Or, going back in history, we gave Saddam dual-use tech and money to make WMD

    • by Tablizer ( 95088 )

      But those don't have glowing eyes or fire shooting out of their tail. It's all in the presentation. [youtube.com]

    • land mines are autonomous weapons, no human is in the decision loop to fire when the preset conditions for detonation are met.

      Land mines are an area denial weapon, not a targeted one. A human makes the conscious decision to attack anything that enters the area when the mines are placed. Just because it may be years before that happens does not mean mines are autonomous, just delayed. Autonomy implies some sort of ability for decision making and control, which is far more desirable than how mines actually operate (although some do have the ability to self-deactivate after a set time).

      • by DrYak ( 748999 ) on Monday February 29, 2016 @07:02PM (#51611351) Homepage

        Autonomy implies some sort of ability for decision making and control, which is far more desirable than how mines actually operate (although some do have the ability to self-deactivate after a set time).

        Though we must concede that you're right that mines are really primitive mechanisms that don't exactly have an AI and thus are far from autonomous...

        A human makes the conscious decision to attack anything that enters the area when the mines are placed. Just because it may be years before that happens does not mean mines are autonomous, just delayed.

        ...mines are still the best historical analogy that we have for problems brought by autonomous weapon.

        In both situations, humans have only a vague input about the region that should be attacked:
        - mines are deployed over an area
        - autonomous weapons are sent to seek potential targets in a designated area

        In both situations the humans ARE NOT the ones making the decision about the detonation:
        - mines detonate on their own when they sense some form of proximity
        - autonomous weapons are autonomous: they are supposed to pick and engage their targets on their own, without further human input

        In both situations things can go horribly wrong:
        - mines have been left in place for long periods and have often maimed innocent civilians long after the conflict was finished
        - AI can go wrong in lots of ways (wrong instructions, or plain hostile hacking/spoofing) and end up engaging the wrong target

        Currently mines are banned by lots of countries.
        Same should be done with pure autonomous unsupervised weapons.
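The analogy can be made concrete: after deployment, both a mine and an autonomous weapon reduce to a sensor predicate the human no longer controls. A toy sketch (the signature values and threshold are invented for illustration):

```python
def mine_trigger(magnetic_signature):
    """A mine is a fixed predicate: no judgment, no recall, no context.

    Once emplaced, it fires on anything crossing the threshold, whether
    that happens tomorrow or decades after the conflict ends.
    """
    return magnetic_signature > 50.0

# Fires identically on a warship during the war (signature ~80)
# and on a civilian trawler with a similar signature years later:
print(mine_trigger(80.0), mine_trigger(80.0))  # True True
print(mine_trigger(10.0))                      # False (small fishing skiff)
```

An autonomous weapon replaces the threshold with a far more complex classifier, but the structural problem is the same: the predicate, not a human, makes the final call.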

      • a human makes the conscious decision to have the AI system attack anything in an area, and we have autonomous "area denial"... which is what a land mine can do

        The land mine can attack someone without human intervention; it is autonomous. You have no point

         

  • by sinij ( 911942 ) on Monday February 29, 2016 @05:05PM (#51610619)
    Autonomous weapons are high-value targets for hacking, more so than banking. I don't envy the poor souls tasked with meeting such design challenges.

    Imagine you had to design a portable ATM that has to operate flawlessly even when moved to a crack den, without reliable connectivity to C&C.
    • You could couple autonomous weapons with autonomous cars and have autonomous drive-by shootings... or just hack the autonomous car and crash it into someone or something.

      What I don't understand is: if we can't create autonomous emergency braking systems that handle unforeseen circumstances and manufacturing defects, how the hell are autonomous weapons even remotely a good idea, let alone autonomous cars?

    • by Kjella ( 173770 )

      More like an ATM that should self-destruct if it can't guarantee the integrity of its cash store, which seems a lot more doable. Autonomous weapons aren't humans; they're expendable, like bomb robots and indeed bombs themselves. Being expendable, they also don't need to consider the operator: a drone can easily default to self-terminate where a plane cannot. And unless you've got some extremely fancy equipment, of course they'll only take cryptographically signed orders from the chain of command. Unlike toda
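The "cryptographically signed orders" idea can be sketched with a plain MAC from Python's standard library. A real system would use asymmetric signatures so the drone holds only a verification key, never a signing key; this is purely an illustration, and the key and order strings are invented:

```python
import hmac
import hashlib

SHARED_KEY = b"pre-shared key loaded before launch"  # hypothetical

def sign_order(order: bytes) -> bytes:
    """Command side: attach an authentication tag to each order."""
    return hmac.new(SHARED_KEY, order, hashlib.sha256).digest()

def accept_order(order: bytes, tag: bytes) -> bool:
    """Drone side: constant-time check; unauthenticated orders are ignored."""
    return hmac.compare_digest(tag, sign_order(order))

order = b"return to base"
tag = sign_order(order)
print(accept_order(order, tag))             # True: genuine order
print(accept_order(b"attack grid 7", tag))  # False: forged order rejected
```

The scheme only helps against injected commands; it does nothing about the spoofed *sensor inputs* the report worries about, which need no control link at all.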

    • by hey! ( 33014 )

      It's not really analogous at all, because banking applications tend to have a public-facing interface anyone has access to. Attacking autonomous weapons is more analogous to the Stuxnet attack on Iranian uranium enrichment facilities: it'd be very, very hard to mount an attack on them, but by the same token it would be very hard to defend against the kind of parties that do have the range of capabilities to make a realistic attempt.

      ATMs are a terrible analogy to use in any kind of thinking about security,

  • https://www.youtube.com/watch?v=l0WG0B2JYLQ [youtube.com]

    General Beringer: Mr. McKittrick, after very careful consideration, sir, I've come to the conclusion that your new defense system sucks.

  • by stabiesoft ( 733417 ) on Monday February 29, 2016 @05:15PM (#51610689) Homepage

    There was an autonomous gun system demoed for the DMZ between the Koreas. I don't know if they ever deployed it, but it "locked" onto anyone who moved in the target zone and fired.

  • Back in the day they always said the Phoenix missile on the F-14 could take out an enemy aircraft 120 miles away (or some long distance like that). There are other missiles with this capability, so you see a blip on radar, but what is it really? An enemy aircraft, or something else like a civilian airliner or a UH-60 carrying UN officials? There are many other cases of friendly fire; what thought has been put into this? (Like everyone else, I didn't RTFA.)
    • by LWATCDR ( 28044 )

      "There are other missiles of this capability so seeing a blip on radar but what is it really? Enemy aircraft or something else like a civilian airliner or a UH60 carrying UN officials? "
      Actually, modern radar systems can identify enemy aircraft at long range now. How they do it is classified, but the F-18, F-16, F-15, F-22, and F-35 can all do it. Even back in the day you had IFF, which told you if a contact was friendly or not.
      And you do not see all that many friendly-fire air-to-air problems with US systems.
      Now ground targets are a m

  • In terms of operating the weapon, there's little difference between the new LRASM [wikipedia.org] and a classic Tomahawk [wikipedia.org] or any other cruise missile made in the last 30 years. It's just HughPickens being a sensationalist.

    The actual report talks about AI-based systems with kill authority (aka SkyNet).

  • It's pretty nice as an intermediary step before "cyborg" (which seems like it should need direct connection to the nervous system to apply as a term, despite the way usage has expanded recently to include heavy cell phone usage).

  • That's why the best autonomous weapon is a big, dumb bomb. You drop it, and it autonomously levels the area.
  • I've got to say neigh to that one.

  • Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems,

    While true, I can also recount numerous frustrations originating from human interventions that led to disaster, such as initiating an emergency procedure ultimately leading to a nuclear reactor explosion [atomicinsights.com], fai

    • At least make congress watch the original robocop movie before they vote on it, and have a few of the machines in the chamber just to keep them safe.
  • For the ruling class. As a member of the ruling class the only real threat to your never-ending rule is the military. It's just too tempting to cut them out of the loop....
  • As soon as it develops an Austrian accent shut it down quickly.

  • The idea of trying to hold a nation or an area by using a free-fire-zone grid is not new.
    https://en.wikipedia.org/wiki/... [wikipedia.org]
    The French and US in "Vietnam", the French in Algeria https://en.wikipedia.org/wiki/... [wikipedia.org], the US and NATO and their Middle East and North Africa drone zones.
    Using an AI or humans to kill everything in a "free fire zone" is not a new US tactic. The results of past wars and regime-change/US-backed-coup tactics should by now be understood by the smarter mil and contractors the US go
  • Send such weapons to the target with no outside communications whatsoever. Any open port for communications makes hacking much more likely. But if it is a set-it-and-forget-it device, it will do what it is supposed to do. Drones are now saving the lives of our soldiers, and they are also saving the lives of innocents. If we did not use drones we would be bombing cities and suburbs and killing huge numbers of civilians to get the bad guys.
