New Report Cites Dangers of Autonomous Weapons
HughPickens.com writes: A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that autonomous weapons could be uncontrollable in real-world environments, where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries. The report contrasts these completely automated systems, which have the ability to target and kill without human intervention, to weapons that keep humans "in the loop" in the process of selecting and engaging targets. "Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems," Mr. Scharre writes.
The United States military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin's Long Range Anti-Ship Missile, which is described as a "semiautonomous" weapon. The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship. As an alternative to completely autonomous weapons, the report advocates what it describes as "Centaur Warfighting." The term "centaur" has recently come to describe systems that tightly integrate humans and computers. Human-machine combat teaming takes a page from the field of "centaur chess," in which humans and machines play cooperatively on the same team. "Having a person in the loop is not enough," says Scharre. "They can't be just a cog in the loop. The human has to be actively engaged."
Autonomous = Future (Score:1)
Any type of control link is susceptible to multiple types of attacks. This will drive the push for more autonomy and AI.
Re: (Score:3)
And in addition, the enemy will really love this, as instead of buying their own weapons they can just hack and re-purpose those of the enemy. Ideal terrorist weapon too. Anybody that thinks the government can secure these systems is off their rocker.
Re: (Score:2)
In the short term, possibly. In the long run the enemy won't be able to maintain them because they don't have spanners that are 17/23 the width of King Henry's willy.
Unless the US invades Singapore.
Re: (Score:2)
Of course, a smart opponent will only hijack the weapons shortly before use. That way the US will do all the maintenance!
Re: (Score:2)
They played that joke on me on my first day as an apprentice. But I outsmarted them, I took a Bahco and filed it down.
Re: (Score:1)
Well, everybody that's ever read a book or seen a movie knows full well that killer robots are a Bad Thing.. but I guess it's not "official" until a report is drawn up on it.. y'think?
Re: (Score:2)
Any type of control link is susceptible to multiple types of attacks. This will drive the push for more autonomy and AI.
That's unfortunate. The US and the Soviets avoided World War III a few times when humans made judgement calls and ignored machine readings.
Nov 1973: NORAD systems detected a full-scale Soviet attack had been launched. A computer had been placed into test mode where it had generated an Armageddon scenario; this was interpreted by the other computers as being real events.
Sep 1983: The nuclear early warning system alerts the Soviets of an impending nuclear strike. Stanislav Petrov did not report the strike as
So there was this human controller... (Score:2)
Re: (Score:2)
Duh... (Score:3)
They needed a high level official report to figure this out?
Re: (Score:2)
They needed a high level official report to figure this out?
Yes, because otherwise they wouldn't have created their own version of skynet [wired.com], even calling it skynet.
Re: (Score:3)
Of course. The military won't believe anything that hasn't been stated by a high level official report costing $10s of millions.
Re: (Score:2)
and crash into something when it was done
Don't forget the hours to weeks (depending on how long the nuclear engine lasts) of running over the rubble at low altitude, supersonic speeds, and said cloud of fallout. It might be directly killing people somewhere in the world well after the war ends.
Re: (Score:2)
Autonomous weapons would not make mistakes. They would do their jobs -- too well.
That's a big assumption. Autonomous covers a wide range of behaviours. We already have one example of an autonomous (i.e. long term deployment requiring no human intervention to remain operable) weapon: landmines. I wouldn't say that they don't make mistakes.
The *US* missile is "controversial"?!?!?! (Score:4, Insightful)
What about the KH-22 (or AS4 "Kitchen") [wikipedia.org] that the Soviets/Russians have actually fielded - since 1962.
The Kh-22 uses an Isayev liquid-fuel rocket engine, fueled with TG-02 (Tonka-250) and IRFNA (inhibited red fuming nitric acid), giving it a maximum speed of Mach 4.6 and a range of up to 600 km (320 nmi). It can be launched in either high-altitude or low-altitude mode. In high-altitude mode, it climbs to an altitude of 27,000 m (89,000 ft) and makes a high-speed dive into the target, with a terminal speed of about Mach 4.6. In low-altitude mode, it climbs to 12,000 m (39,000 ft) and makes a shallow dive at about Mach 3.5, making the final approach at an altitude under 500 m (1,600 ft). The missile is guided by a gyro-stabilized autopilot in conjunction with a radio altimeter.
Fly 600 KM - then hit whatever it happens to find. Potentially with a nuclear warhead.
Oh, that's right. That doesn't fit into typical thoughtless anti-US bullshit. Sorry to mess up your narrative.
Re: (Score:2)
Re: (Score:2)
Some current and historical anti-ship missiles have the capacity to take target designation and/or mid-course guidance from a designating vessel; the Tu-95 RTS 'Bear D', with its 'Big Bulge' radar, is one example of such a vessel. However, in the absence of such direction, or if the missile does not have the capacity for direction by uplink, the choice of target is entirely up to the logic of the missile's seeker, making it just as autonomous as the Harpoon or TASM.
Re: (Score:2, Insightful)
I could give you a current real-world autonomous weapon system the US has not only fully funded but put, quite chaotically, into the field: proxy terrorist fighters. Quite the mess they made with that autonomous weapon system, and a real warning of what can happen when you attempt the same digitally. Of course we have yet to see the full repercussions of that, say a TOW missile on a power boat taking down an oil tanker, either manned by those the weapon was given to or those it was on-sold to (you
Re: The *US* missile is "controversial"?!?!?! (Score:3)
It's not anything nearly as fancy as AI.
For the land attack flavors:
TERCOM / GPS flies along a preprogrammed path; DSMAC takes over for final target comparison / verification.
Overwater flight is statically planned just prior to launch to route around known vessels / structures. Once it reaches the shoreline, the pre-planned mission takes over.
If an anti-ship variant, once the platform reaches the final static waypoint, it fires up the active seeker and starts looking for a target within the AOU. ( it is here
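The phased flow described above (fly a static pre-planned route, then activate a seeker restricted to an area of uncertainty) can be sketched as a toy state machine. Everything here is invented for illustration: the function name, the coordinates, and the AOU box are made up, and no real weapon's software looks like this.

```python
# Toy sketch of the phased guidance flow: fly a static pre-planned route,
# then activate a seeker that only considers contacts inside a designated
# area of uncertainty (AOU). All values are invented.

def run_mission(waypoints, aou, contacts):
    """Return the sequence of phases a weapon following this flow passes through."""
    phases = [("ENROUTE", wp) for wp in waypoints]  # static pre-planned route
    x0, y0, x1, y1 = aou
    # Final waypoint reached: seeker goes active, search restricted to the AOU.
    candidates = [c for c in contacts if x0 <= c[0] <= x1 and y0 <= c[1] <= y1]
    if candidates:
        phases.append(("TERMINAL", candidates[0]))  # engage first contact in AOU
    else:
        phases.append(("SEARCH", None))             # nothing in the AOU yet
    return phases

# A contact outside the AOU (50, 50) is ignored; the one inside is selected.
result = run_mission(
    waypoints=[(0, 0), (10, 0)],
    aou=(9, -1, 12, 1),
    contacts=[(50, 50), (11, 0)],
)
```

The point of the sketch is the one the parent makes: the "autonomy" is just a search filter bolted onto a pre-planned route, not anything like AI.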
Re: (Score:2)
Much the same as other anti-ship missiles, including the Harpoon.
Guided weapons have been around since WWII. Torpedoes are a prime example.
Re: (Score:2)
Missiles and torpedoes still have a human in the loop responsible for identifying the target and hitting the launch button. A truly autonomous weapon system would identify its own targets after the launch button is pushed.
Re: (Score:2)
And there is a human in the loop that launches any missile.
And no you are wrong. The Captor mine is a good example. You set it on the seabed and it waits for a ship with the right signature to pass over it. It then fires a torpedo that homes in on the ship or sub.
The differences (Score:4, Insightful)
The missile is guided by a gyro-stabilized autopilot in conjunction with a radio altimeter.
Fly 600 KM - then hit whatever it happens to find.
That is the main difference between classical intercontinental ballistic/guided missiles and the autonomous weapons mentioned here.
Classical missiles mainly fly to a specific point (which was decided in advance by a human being) and go ka-boom on whatever happens to be at that point.
If the intelligence on which the human was acting is precise (i.e., the exact coordinates of the target are known), the missile hits exactly the target that the human intended. If the intelligence is wrong, the missile still goes exactly where it was asked to; it's the human who asked for the wrong thing.
Think throwing a rock on a target, shooting a target with an arrow. Only with more complex gadgets.
Autonomous weapons, on the other hand, are deployed to or reach a region (which is what the human being decided) and then on *their own* start looking around for potential targets, which they engage on their own autonomous decision. The human being is not the one taking the final decision in the grand scheme of things; it's the AI running inside the autonomous weapon. The weapon is at risk of misinterpreting what it perceives and wrongly deciding to engage.
Think Aliens movie-style automatic gun turrets.
So the historic precedent for such unwanted destruction isn't so much the classical missiles you mention (where the commander giving the order to fire more or less knows what is going to happen).
The closest historic precedent is *mines*: objects that are placed on a human's order, but then activate and explode without much control by the ordering humans, with a very strong risk that they'll end up harming the wrong target (leftover mines that explode and maim the local civilian population long after the conflict is finished). That's why mines have been banned by several countries.
That's why it's really risky to let an AI (which could be hacked or spoofed) make the decisions.
Re: (Score:2)
Think throwing a rock on a target, shooting a target with an arrow. Only with more complex gadgets.
I think you missed the part where the missile's on board guidance tracks the missile onto whatever it happens to find.
That's why it's really risky to leave an AI (That could be hacked or spoofed) to make the decisions.
Seems to me that you're just as dead, if the missile hits you because you're there and radar reflective, rather than hits you because you're there and it thought you needed killing.
u.s. has had them for decades (Score:5, Insightful)
land mines are autonomous weapons, no human is in the decision loop to fire when the preset conditions for detonation are met.
http://www.un.org/en/globaliss... [un.org]
Re: (Score:2)
Re: (Score:2)
You're correct, though the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction treaty was signed in 1997, and has since then accumulated 133 signatory parties all doing their part, keeping those EOD boys and girls excited.
http://www.un.org/disarmament/... [un.org]
AP mines are recognized as pure evil, and we no longer make and sell them to dictators and such (or at least that's what we SAY). Our evils are much smarter now, and the smarter evi
Re: (Score:2)
so funny, the evil world dictator is us
and of course, in general, we arm groups affiliated with terrorist groups; the Obama admin did that in Syria, for example. Or, going back in history, we gave Saddam dual-use tech and money to make WMD
Re: (Score:1)
The devil you know.
Re: (Score:1)
But those don't have glowing eyes or fire shooting out of their tail. It's all in the presentation. [youtube.com]
Re: (Score:2)
land mines are autonomous weapons, no human is in the decision loop to fire when the preset conditions for detonation are met.
Land mines are an area denial weapon, not a targeted one. A human makes the conscious decision to attack anything that enters the area when the mines are placed. Just because it may be years before that happens does not mean mines are autonomous, just delayed. Autonomy implies some sort of ability for decision making and control, which is far more desirable than how mines actually operate (although some do have the ability to self-deactivate after a set time).
Still, the best analogy (Score:4, Insightful)
Autonomy implies some sort of ability for decision making and control, which is far more desirable than how mines actually operate (although some do have the ability to self-deactivate after a set time).
Though we must concede that you're right in that mines are really primitive mechanisms that don't exactly have an AI, and thus are far from autonomous...
A human makes the conscious decision to attack anything that enters the area when the mines are placed. Just because it may be years before that happens does not mean mines are autonomous, just delayed.
...mines are still the best historical analogy that we have for problems brought by autonomous weapon.
In both situations, humans have only vague input about the region that should be attacked.
- mines are deployed over an area
- autonomous weapons are sent to seek for potential target in a designated area
In both situations the humans ARE NOT the ones making the decision about the detonation.
- mines detonate on their own when they sense some form of proximity
- autonomous weapons are autonomous: they are supposed to pick out and engage their targets on their own without further human input
In both situations things can go horribly wrong
- mines have been left in place for long periods of time and have often maimed innocent civilians long after the conflict is finished.
- AI can go wrong in lots of ways (wrong instructions, or plain hostile hacking/spoofing) and end up engaging the wrong target.
Currently mines are banned by lots of countries.
Same should be done with pure autonomous unsupervised weapons.
Re: (Score:2)
Surprisingly, quite hard. It's not about the quantity of mines, it's about how large the mined areas are. It's not just small 1-mile sections of old fronts, it's hundreds of thousands of square miles. Mine-clearing machines like the one you mentioned cover a very small amount of land at a very slow speed. It's almost comparable to a commercial lawn mower: half to three times slower than a mower but about 3-4x wider.
Re: (Score:2)
They also destroy anything else in their path, and you might want to keep your fields and forests and orchards for the commercial and other value they represent.
The problem is one of cost. Placing a mine can cost as little as a few dollars, but clearing one costs on the order of a thousand. So you have to be really rich for there to be parity. And this is also what we see, in that mines in rich countries aren't that much of a problem; they've been mostly cleared. (Together with the unexploded ordnance, that
Re: (Score:2)
A human makes the conscious decision to have the AI system attack anything in an area, and we have autonomous "area denial"... which is what a land mine can do.
The land mine can attack someone without human intervention; it is autonomous. You have no point.
Autonomous Weapons = High Value Target (Score:3)
Imagine you had to design a portable ATM that has to operate flawlessly even when moved to a crack den, without reliable connectivity to C&C.
Re: (Score:3)
You could couple autonomous weapons with autonomous cars and have autonomous drive by shootings... or just hack the autonomous car and crash it into someone or something.
Here's what I don't understand: we can't create autonomous emergency braking systems that handle unforeseen circumstances and manufacturing defects, so how the hell are autonomous weapons even remotely a good idea, let alone autonomous cars?
Re: (Score:2)
More like an ATM that should self-destruct if it can't guarantee the integrity of its cash store, which seems a lot more doable. Autonomous weapons aren't humans, they're expendable like bomb robots and indeed bombs themselves. Being expendable they also don't need to consider the operator, a drone can easily default to self-terminate where a plane can not. And unless you've got some extremely fancy equipment, of course they'll only take cryptographically signed orders from the chain of command. Unlike toda
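The "only take cryptographically signed orders from the chain of command" idea can be sketched minimally with a message authentication code. This is a hypothetical illustration: the key, order format, and function names are invented, and a real system would use asymmetric signatures plus replay protection rather than one shared secret.

```python
# Minimal sketch: the weapon acts only on orders carrying a valid MAC.
# Invented names and a single shared key; real systems would differ.
import hashlib
import hmac

SHARED_KEY = b"pre-provisioned secret"  # placeholder key material

def sign_order(order: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Chain-of-command side: attach an authentication tag to the order."""
    return hmac.new(key, order, hashlib.sha256).digest()

def accept_order(order: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Weapon side: constant-time check before acting on any order."""
    expected = hmac.new(key, order, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# A genuine order verifies; a forged or altered one does not.
tag = sign_order(b"RETURN TO BASE")
```

Even this toy shows why the parent's point matters: the hard part isn't the check itself, it's keeping the key material out of an adversary's hands once the hardware is captured.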
Re: (Score:2)
It's not really analogous at all, because banking applications tend to have a public facing interface anyone has access to. Attacking autonomous weapons is more analogous to the Stuxnet attack on Iranian uranium enrichment facilities; it'd be very, very hard to mount an attack on them, but by the same token it would be very hard to defend against the kind of parties that do have the range of capabilities to make a realistic attempt.
ATMs are a terrible analogy to use in any kind of thinking about security,
Yes - What could possibly go wrong? (Score:2)
https://www.youtube.com/watch?v=l0WG0B2JYLQ [youtube.com]
Korea DMZ (Score:3)
There was an autonomous gun system demoed for the DMZ between the Koreas. Don't know if they ever deployed it, but it "locked" on to anyone who moved in the target zone and fired.
Don't they know that guns don't kill people? (Score:2)
PEOPLE kill people.
Duh.
Re: (Score:3)
So an autonomous gun wouldn't kill anyone?
verify an over the horizon target? (Score:2)
Re: (Score:2)
"There are other missiles of this capability so seeing a blip on radar but what is it really? Enemy aircraft or something else like a civilian airliner or a UH60 carrying UN officials? "
Actually, modern radar systems can identify enemy aircraft at long range now. How they do it is classified, but the F-18, F-16, F-15, F-22, and F-35 can all do it. Even back in the day you had IFF, which told you if it was a friendly or not.
And you do not see all that many friendly-fire air-to-air problems with US systems.
Now ground targets are a m
Sensationalism (Score:2)
In terms of operating the weapon, there's little difference between the new LRASM [wikipedia.org] and a classic Tomahawk [wikipedia.org] or any other cruise missile made in the last 30 years. It's just HughPickens being a sensationalist.
The actual report talks about AI based systems with kill authority (aka SkyNet).
Re: (Score:1)
Nice try, the system didn't 'lock' on to anything. It hit the GPS coordinates it was told to. You need to check with whomever relayed those coordinates. Hint: They weren't US FACs.
Re: (Score:2)
Someone ordered an airstrike there; the military didn't check the coords against another source to see if they were no-shoot coordinates.
Basically the problem is that the US military works as an order-an-explosion service for whoever social-engineers a way to get them to shoot. Whoever provides them with the 'intel' gets to enjoy the benefits, like all the Yemeni 'rebels' getting Hellfired. Who fingers them? Their local rivals, duh. Neither the fingerer nor the one who gets exploded is particularly pro-US or anti-US in any way and bec
It's true, just ask Mr. Kinney (Score:1)
https://www.youtube.com/watch?... [youtube.com]
I, for one, welcome this new half-horse terminology (Score:2)
It's pretty nice as an intermediary step before "cyborg" (which seems like it should need direct connection to the nervous system to apply as a term, despite the way usage has expanded recently to include heavy cell phone usage).
Good old stuff (Score:2)
Centaur warfighting? (Score:2)
I've got to say neigh to that one.
No reason to stop development (Score:1)
Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems,
While true, I can also recount numerous frustrations originating from human interventions that led to disaster, such as initiating an emergency procedure ultimately leading to a nuclear reactor explosion [atomicinsights.com], fai
Re: (Score:2)
Too much value (Score:2)
There would be a clear warning (Score:2)
As soon as it develops an Austrian accent shut it down quickly.
Political and profits (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
The French and US in "Vietnam", the French in Algeria https://en.wikipedia.org/wiki/... [wikipedia.org], the US and NATO and their Middle East and North Africa drone zones.
Using an AI or humans to kill everything in a "free fire zone" is not a new US tactic. The results of past wars and regime-change/US-backed coup tactics should by now be understood by the smarter mil and contractors the US go
Make It Absolutely Autonomous. (Score:2)