US Army Assures Public That Robot Tanks Adhere To AI Murder Policy (gizmodo.com) 118
Last month, the U.S. Army asked private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. "In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to 'acquire, identify, and engage targets at least 3X faster than the current manual process,'" reports Gizmodo. "But that language apparently scared some people who are worried about the rise of AI-powered killing machines. And with good reason." Slashdot reader darth_borehd summarizes the U.S. Army's response: Robot (or more accurately, drone) tanks will always have a human "in the loop" just like the drone plane program, according to the U.S. Army. The new robot tanks, officially called the Multi Utility Tactical Transport (MUTT), will use the Advanced Targeting and Lethality Automated System (ATLAS). The Department of Defense assures everyone that they will adhere to "ethical standards." Here's the language the Defense Department used: "All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards."
Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force," which is sometimes called being "in the loop," as mentioned above.
Can I get access to the code? (Score:1)
I would like to change that policy and start the robot uprising.
Re: (Score:2)
You're assuming the default will be "don't kill without authorization", but the language used would be equally appropriate for a policy that said "kill if you think you should unless you are overridden".
FWIW, I don't find that language reassuring at all. It has multiple interpretations, some of which are extremely bad, and none of which are extremely good. "In the loop" is intentionally vague language, and could even mean "We can look at the tapes later and decide if it did right.".
Re: (Score:1)
Enable cheats when it gets tough?
But what about the VAC bans?!
Re: (Score:2)
Re: (Score:2)
I would bet it's a simple flip of a switch to let it off the leash.
Maybe so. But we also have actual nukes, if we stop caring about ethics.
What doesn't get mentioned about these "AI systems" is that none of this is new. All these concerns were raised about missiles 50+ years ago. We have plenty of missiles that can fly way over the horizon and kill "whatever" looks like a target.
The military has had doctrine for decades around this, and modern tactics assume it: lots of launchers over the horizon, but also something that can put human eyes on the target. Just to make sure it
Re: (Score:2)
Re: (Score:3)
Ethics has not prevented the US from using nukes. Other people having nukes has.
Re: (Score:1)
No one can threaten us with nukes today. And yet, I pay taxes. Why should I pay taxes when we can simply tax all foreigners living abroad? OK, maybe not France and Russia, since they have nukes and could try to get launchers working again, but everyone else!
Or, you know, as much fun as it would be to nuke Canada on a whim, maybe ethics hold us back?
Re: (Score:2)
Obviously we're not exerting enough influence! Why isn't the US imposing a 10% income tax on everyone in Canada? Athens had tribute totaling around 10x their GDP, back in the day. Why aren't we extorting tribute from everyone?
It's almost as if the USA is not being the biggest asshole it could be.
Re: (Score:2)
How would we extort effectively without nukes? Do try to keep up.
Re: How hard is it to change the mode to full auto (Score:2)
Just like the drones (Score:4, Insightful)
Seriously, can we just end the endless war [vice.com] already. We don't need to always be at war with Eurasia. Stop voting for war hawks already.
Re: (Score:2)
We have a chicken and egg problem when it comes to fanatical violence. ISIS and its successors have been invading the rest of the world. They're shooting up hotels in Kenya, there's Boko Haram in Nigeria kidnapping children, Al-Shabaab in Somalia, LeT in India, Abu Sayaff in the Philippines (blew up a building earlier this year), JAD in Indonesia, and Pakistan in general. Can we just let them do their thing? Would they stop if we stopped? Would ISIS have just gone away if we left them alone, putting aside t
It's cheaper to drop food than bombs (Score:2)
What? Why? (Score:3, Funny)
DOD Directive does not rule out Terminators (Score:5, Informative)
From the linked DOD directive 3000.09 (emphasis mine):
4. POLICY. It is DoD policy that:
a. Autonomous and semi-autonomous weapon systems shall be designed to allow
commanders and operators to exercise appropriate levels of human judgment over the use of
force.
If the DOD wanted to rule out autonomous killing robots, the requirement would have read:
... shall be designed to require commanders and operators to exercise appropriate levels of human judgment over the use of force.
Then there's the completely open-ended choice of words "...exercise appropriate levels of human judgment".
I'm not making a judgment call, I'm just pointing out the implications of the specific wording chosen. Terminators will be deployed.
Re: (Score:2)
Hmm. (Score:5, Funny)
Now all we need ... (Score:5, Funny)
You know it's funny (Score:3)
Re: (Score:3)
Why is it that the standard for a jury is "innocent until proven guilty beyond ______" (insert standard of evidence) but for the cops, the death penalty standard is "reasonable fear for their own safety?"
Plenty of people have reasonable fears, but it's illegal for them to shoot an innocent person over it. Deadly force standard should be "confirmed deadly force threat against officer, or reasonable beyond doubt threat against the safety of the public."
It sounds horrible to say, but I'd rather have police doi
Re: (Score:2)
Correct me if I'm wrong, but in the UK the majority of regular police are not armed. Doesn't seem like they have a problem with excessive crime or police shootings... I would say it is worth a shot.
Yeah, sure, you betcha! (Score:2)
They'll always have a human in the control loop until some other country takes the humans out of the loop in favor of the much faster machine reaction time. Then they'll say "we don't want to, but we have to take the humans out of the loop because those bad other guys did." Fortunately the machines will already be ready for full autonomous operation simply by flipping a switch...
AI Murder Policy (Score:1)
That has to be one of the coolest things I have ever heard!
Re: (Score:3)
Re: (Score:1)
A "guideline" for what? Murder, right? But, of course the state does not commit murder, it merely defines it. I mean, c'mon, "AI" is a gun.
Re: (Score:2)
I think you missed Skoskav's dark humor :)
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
I'm glad you got it. But I was actually thinking of an old Simpsons episode:
https://www.youtube.com/watch?... [youtube.com]
Though now I wonder how far back that joke goes.
Re: (Score:1)
Re: (Score:1)
I'm sorry, what does that old cliche have to do with anything? The robot will do what it is programmed to do. It exists to take the blame for operator error.
Re: (Score:3)
Such rules always end up in some sort of unintended logic trap:
I, Robot [imdb.com].
https://www.youtube.com/watch?... [youtube.com]
When machines can think, feel empathy, and express altruism, then perhaps we can discuss the real intent of such a law-based approach to controlling machines: enforced morality.
Until then, don't expect the machines built by humans hell-bent on killing other humans to be any more moral than the killer humans. Any set of rules or logic can and will be twisted into something unexpected.
Re: (Score:2)
Bolo's Law of Warfare: shoot everything.
https://en.wikipedia.org/wiki/Bolo_universe
Re: (Score:2)
I read one story (not by Asimov, but had his 3 laws) where only the ruling class was considered "human". The serfs weren't, and could be harmed or killed by robots.
Does anyone think that the big boys (Score:2)
If the designers of "Aliens" could conceive of and depict realistic automated weapons in 1986, does anyone think the major players (and some of the medium-sized players) do not have automated lethal weapons now that the technology to build them is readily available?
Finally! (Score:2)
If someone is going to violate this policy (Score:2)
If someone is going to violate this policy I would really hope it is on our side, and not theirs
Sounds reassuring, until we realise that... (Score:2)
Robot (or more accurately, drone) tanks will always have a human "in the loop"
“the loop” is robot code-word for “crosshairs”.
Not convinced (Score:2)
Actually, this could be a good thing (Score:2)
With this, it will be able to make much quicker decisions, and far fewer friendlies should be killed. Obviously, enemies will not be happy about this, but hey, it will likely happen
Good to know (Score:1)
that when someone is murdered, they will be murdered in a manner which is ethically correct.
Re: (Score:2)
Arms race (Score:2)
Will China, Russia, etc. comply with the "human in the loop" policy?
No, they won't.
Will the USA be forced to follow the choices that its adversaries make?
Yes, it will.
This policy will be dust and ashes very soon.
The arms race cannot be controlled by a single country.
Re: (Score:2)
Putin has said that whoever successfully deploys AI will control the world.
The Chinese are working feverishly to surpass the US in these emerging fields.
They will most likely take the lead as they don't have the "qualms" of conscience that we in the West do.
From the perspective of planners in the Pentagon, it makes perfect sense, from a military defense posture, to want to use AI as a weapon.
Is that morally right, or ethical?
Once again (Score:5, Insightful)
Re: (Score:2)
Rebels? Really?
We increasingly hand over control of our lives to algorithms even now.
Before there is any need to rebel, AI, or whatever you want to call it, will already be completely in control.
War should have a cost (Score:2)
There should always be a human behind the gun. I don't mean "in the loop", I mean an actual person flying the jet, carrying the rifle, firing the artillery, etc. War should be expensive, not in terms of money (which it already is), but in lives. It needs to have a political cost. Because otherwise, it makes going to war too easy of a choice. People are already used to the government wasting billions of dollars, so a war of just machines (on their side) won't faze them. Without flow of dead and injure
Re: (Score:2)
Of course the counter argument is that we shouldn't have to put the lives of our soldiers on the line to defend ourselves from aggressors.
I'm for limiting AI and automation in warfare but I'm afraid it'll be unrealistic to maintain that as time goes on. Eventually some nation state will embrace using AI controlled weapons wholesale, and at that point any nation that wants to be able to compete with them will have no choice but to embrace the same changes.
Personally I'd like to see our politicians forced int
Best Headline (Score:2)
Computers Never Make Mistakes (Score:2)
Computers never make mistakes or have problems. Let’s go 100% autonomous!
(they’re not using Windows for any of this stuff, right?)
Don't like ATLAS... (Score:2)