
US Army Assures Public That Robot Tanks Adhere To AI Murder Policy (gizmodo.com) 118

Last month, the U.S. Army asked private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. "In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to 'acquire, identify, and engage targets at least 3X faster than the current manual process,'" reports Gizmodo. "But that language apparently scared some people who are worried about the rise of AI-powered killing machines. And with good reason." Slashdot reader darth_borehd summarizes the U.S. Army's response: Robot (or more accurately, drone) tanks will always have a human "in the loop" just like the drone plane program, according to the U.S. Army. The new robot tanks, officially called the Multi-Utility Tactical Transport (MUTT), will use the Advanced Targeting and Lethality Automated System (ATLAS). The Department of Defense assures everyone that they will adhere to "ethical standards." Here's the language the Defense Department used: "All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards."

Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force," which is sometimes called being "in the loop," as mentioned above.
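In software terms, that requirement is a human-confirmation gate between automated target identification and weapon release. Here is a minimal sketch of such a gate; every name in it is hypothetical, since ATLAS's actual architecture is not public:

```python
from dataclasses import dataclass

@dataclass
class Target:
    track_id: str
    classification: str  # e.g. "armored vehicle"
    confidence: float    # classifier confidence in [0, 1]

def identify_targets(sensor_frame: list) -> list:
    """Automated steps: the system may acquire and identify on its own.
    Stand-in for the ML pipeline; the real details are not public."""
    return [Target(t["id"], t["class"], t["conf"]) for t in sensor_frame]

def request_engagement(target: Target, operator_approved: bool) -> str:
    """Weapon release is gated on an explicit human decision (the 'loop')."""
    if not operator_approved:
        return f"track {target.track_id}: held, awaiting operator judgment"
    return f"track {target.track_id}: engagement authorized by operator"

if __name__ == "__main__":
    frame = [{"id": "T1", "class": "armored vehicle", "conf": 0.97}]
    for tgt in identify_targets(frame):
        # The automated pipeline only recommends; a person decides.
        print(request_engagement(tgt, operator_approved=False))
```

The "3X faster" goal in the Army's request applies to the automated acquire-and-identify steps; the directive's constraint sits on the final step, where a human must still approve the engagement.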
  • by Anonymous Coward

    I would like to change that policy and start the robot uprising.

  • by rsilvergun ( 571051 ) on Wednesday March 06, 2019 @07:37PM (#58228674)
    Whew, that's a load off my mind [wikipedia.org]

    Seriously, can we just end the endless war [vice.com] already? We don't need to always be at war with Eurasia. Stop voting for war hawks already.
    • We have a chicken-and-egg problem when it comes to fanatical violence. ISIS and its successors have been invading the rest of the world. They're shooting up hotels in Kenya, there's Boko Haram in Nigeria kidnapping children, Al-Shabaab in Somalia, LeT in India, Abu Sayyaf in the Philippines (blew up a building earlier this year), JAD in Indonesia, and Pakistan in general. Can we just let them do their thing? Would they stop if we stopped? Would ISIS have just gone away if we left them alone, putting aside t

  • What? Why? (Score:3, Funny)

    by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Wednesday March 06, 2019 @07:39PM (#58228680) Journal
    Who would want to murder Al [weirdal.com]?
  • by DanDD ( 1857066 ) on Wednesday March 06, 2019 @07:41PM (#58228700)

    From the linked DOD directive 3000.09 (emphasis mine):

    4. POLICY. It is DoD policy that:

    a. Autonomous and semi-autonomous weapon systems shall be designed to *allow* commanders and operators to exercise appropriate levels of human judgment over the use of force.

    If the DOD wanted to rule out autonomous killing robots, the requirement would have read:

    ... shall be designed to *require* commanders and operators to exercise appropriate levels of human judgment over the use of force.

    Then there's the completely open-ended choice of words "...exercise appropriate levels of human judgment".

    I'm not making a judgment call, I'm just pointing out the implications of the specific wording chosen (see the sketch below). Terminators will be deployed.
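    To make the distinction concrete: under the "allow" wording, the human gate can be an operational setting rather than a hard invariant. A toy sketch, with every name invented for illustration (this is not any real fire-control interface):

```python
# Toy illustration of the "allow" vs. "require" distinction.
# Under "require" wording, the human check would be unconditional;
# under "allow" wording, it can be a setting someone may later flip.
REQUIRE_HUMAN_AUTHORIZATION = True  # current policy: a switch, not an invariant

def weapon_release_permitted(operator_approved: bool) -> bool:
    if REQUIRE_HUMAN_AUTHORIZATION:
        return operator_approved  # human judgment is mandatory
    return True                   # fully autonomous engagement allowed

print(weapon_release_permitted(operator_approved=False))  # False -- for now
```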

  • Hmm. (Score:5, Funny)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday March 06, 2019 @07:44PM (#58228712) Journal
    I'm assuming that this story appearing immediately above "Self-Driving Cars May Hit People With Darker Skin More Often, Study Finds" is definitely pure coincidence.
  • by PPH ( 736903 ) on Wednesday March 06, 2019 @07:46PM (#58228730)

    ... is to put a human in the loop for all police-involved shootings.

    • but somehow their cameras are always off when the shooting starts.
    • Why is it that the standard for a jury is "innocent until proven guilty beyond ______" (insert standard of evidence), but for the cops, the death penalty standard is "reasonable fear for their own safety"?

      Plenty of people have reasonable fears, but it's illegal for them to shoot an innocent person over it. The deadly-force standard should be "confirmed deadly force threat against officer, or reasonable-beyond-doubt threat against the safety of the public."

      It sounds horrible to say, but I'd rather have police doi

  • They'll always have a human in the control loop until some other country takes the humans out of the loop in favor of the much faster machine reaction time. Then they'll say "we don't want to, but we have to take the humans out of the loop because those bad other guys did." Fortunately the machines will already be ready for full autonomous operation simply by flipping a switch...

  • That has to be one of the coolest things I have ever heard!

    • Don't let the name throw you. It's more of a guideline.
    • Asimov's 3 laws of robotics:
      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
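      For concreteness, here is a toy encoding of the three laws as a strict priority check. It is purely illustrative (no real system has a "laws" module like this), and the replies below point out exactly where such a flat encoding breaks down:

```python
# Toy encoding of Asimov's Three Laws as a strict priority order.
# Purely illustrative; every name here is invented for this sketch.
def action_permitted(harms_human: bool, allows_human_harm: bool,
                     order_given: bool, obeys_order: bool,
                     endangers_self: bool) -> bool:
    # First Law: no injuring a human, and no harm through inaction.
    # Note that "human" is itself defined by whoever writes this check.
    if harms_human or allows_human_harm:
        return False
    # Second Law: obey human orders (First Law conflicts already ruled out).
    if order_given and not obeys_order:
        return False
    # Third Law: self-preservation, subordinate to the first two Laws.
    if endangers_self and not order_given:
        return False
    return True

# A robot ordered into danger must comply (Second Law outranks Third):
print(action_permitted(harms_human=False, allows_human_harm=False,
                       order_given=True, obeys_order=True,
                       endangers_self=True))  # True
```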
      • I'm sorry, what does that old cliche have to do with anything? The robot will do what it is programmed to do. It exists to take the blame for operator error.

      • by DanDD ( 1857066 )

        Such rules always end up in some sort of unintended logic trap:

        I, Robot [imdb.com].

        https://www.youtube.com/watch?... [youtube.com]

        When machines can think, feel empathy, and express altruism, then perhaps we can discuss the real intent of such a law-based approach to controlling machines: enforced morality.

        Until then, don't expect the machines built by humans hell-bent on killing other humans to be any more moral than the killer humans. Any set of rules or logic can and will be twisted into something unexpected.

      • Bolo's Law of Warfare: shoot everything.

        https://en.wikipedia.org/wiki/Bolo_universe

      • I read one story (not by Asimov, but one that used his 3 laws) where only the ruling class was considered "human". The serfs weren't, and could be harmed or killed by robots.

  • If the designers of "Aliens" could conceive of and depict realistic automated weapons in 1986, does anyone think the major players (and some of the medium-sized players) do not have automated lethal weapons now that the technology to build them is readily available?

  • The fact that optical software can't recognize dark faces turns out to be a real advantage! When Skynet takes over, only the darkies will survive!
  • If someone is going to violate this policy, I would really hope it is on our side, and not theirs.

  • Robot (or more accurately, drone) tanks will always have a human "in the loop"

    “the loop” is robot code-word for “crosshairs”.

  • Adherence implies some sort of operational sentience - these robot tanks will *adhere* to nothing as a matter of high-level principle - they'll only operate as they are programmed, and as we all know, programming is extremely fallible.
  • Seriously, there are multiple issues with current war. The biggest is that innocent civilians are being wounded or killed. The reason is that enemies like to use civilians as shields, or dress to look like civilians so as to infiltrate western troops. Likewise, we have friendlies being killed.
    With this, it will be able to make much quicker decisions and should result in far fewer friendlies being killed. Obviously, enemies will not be happy about this, but hey, it will likely happen.
  • that when someone is murdered, they will be murdered in a manner which is ethically correct.

  • Will China, Russia, etc. comply with the "human loop" policy?
    No, they won't.

    Will the USA be forced to follow the choices that its adversaries make?
    Yes, it will.

    This policy will be dust and ashes very soon.
    The arms race cannot be controlled by a single country.

    • You are exactly right. And I would call it an Arms Race to the bottom.

      Putin has said that whoever successfully deploys AI will control the world.
      The Chinese are working feverishly to surpass the US in these emerging fields.
      They will most likely take the lead, as they don't have the "qualms" of conscience that we in the West do.

      From the perspective of planners in the Pentagon, it makes perfect sense as a military defense posture to want to use AI as a weapon.
      Is that morally right, or ethical?
  • Once again (Score:5, Insightful)

    by gijoel ( 628142 ) on Thursday March 07, 2019 @05:29AM (#58230090)
    • The problem with that assumption is that "AI rebels against human control".
      Rebels? Really?

      We increasingly hand over control of our lives to algorithms even now.

      Before there is any need to rebel, AI, or whatever you want to call it, will already be completely in control.
  • There should always be a human behind the gun. I don't mean "in the loop", I mean an actual person flying the jet, carrying the rifle, firing the artillery, etc. War should be expensive, not in terms of money (which it already is), but in lives. It needs to have a political cost. Because otherwise, it makes going to war too easy a choice. People are already used to the government wasting billions of dollars, so a war of just machines (on their side) won't faze them. Without flow of dead and injure

    • Of course the counter argument is that we shouldn't have to put the lives of our soldiers on the line to defend ourselves from aggressors.

      I'm for limiting AI and automation in warfare but I'm afraid it'll be unrealistic to maintain that as time goes on. Eventually some nation state will embrace using AI controlled weapons wholesale, and at that point any nation that wants to be able to compete with them will have no choice but to embrace the same changes.

      Personally I'd like to see our politicians forced int

  • I am not usually a fan of Gizmodo, but this is the best headline I have ever read.
  • Computers never make mistakes or have problems. Let’s go 100% autonomous!

    (they’re not using Windows for any of this stuff, right?)

  • ... how about Systematic Knowledge-base Yielding Neural Eradication Technologies?
