United States Politics

Andrew Yang Warns Against 'Slaughterbots' and Urges Global Ban on Autonomous Weaponry (venturebeat.com) 99

Ahead of the Democratic presidential primaries that begin Monday with the Iowa caucus, presidential candidate Andrew Yang called for a global ban on the use of autonomous weaponry. In a tweet, Yang called for U.S. leadership to implement a ban on automated killing machines, then shared a link to a Future of Life Institute video titled "Slaughterbots," which offers a cautionary and dystopian vision of the future. From a report: [...] In the video, the fictional CEO promises the ability to target and wipe out "the bad guys" or people with "evil ideology" or even entire cities. The video then imagines the outbreak of partisan political warfare. The drones are used to assassinate 11 U.S. Senators of one political party at the U.S. Capitol building. In the wake of the hypothetical attack, even after assessment by the intelligence community it's unclear what state, group, or individual carried it out, and in the confusion calls for war ratchet up alongside violent crime.

There is some precedent in reality. Russian arms maker Kalashnikov is developing a kamikaze drone, and in 2018 Venezuela saw one of the first targeted political assassination attempts with a drone in history, though that drone was most likely piloted by a human. DARPA is developing ways for swarms of drones to take part in military missions, and the U.S. Department of Defense has developed hardware to guard against weaponized drone attacks.

  • The way to do it (Score:4, Insightful)

    by MikeMo ( 521697 ) on Friday January 31, 2020 @04:50PM (#59676178)
    Right. We should make policy based on a fictional movie.
    • I can't fault dramatizing it with a movie because there is not really any question whether such technologies will become increasingly possible and even easy to implement. They will. The only real question is whether humanity can exercise the collective will to avoid it. Unfortunately I doubt that very much.
      • It's going to be possible right about when I hit the "damn kids, get off my lawn" stage of my life.

        Exercise the will to avoid it? I'm going to be an early adopter.

    • Re:The way to do it (Score:5, Interesting)

      by geekmux ( 1040042 ) on Friday January 31, 2020 @05:04PM (#59676246)

      Right. We should make policy based on a fictional movie.

      Right. Because our society hasn't turned George Orwell into a fucking prophet.

      I feel quite often storytellers are writing this kind of dystopian shit in order to desensitize the masses for when the inevitable happens. I'll let you decide if that was the intent of 1984 or not. Either way, pure ignorance assumes fantasy cannot become reality.

• Orwell was a historian, not a prophet. The setting may have been unique, but the story was a recording of current events.

    • and go read some Science Fiction. Autonomous kill drone are already technically feasible. We just haven't bothered making one outside the lab yet.

      The best time to solve a problem is before it happens. Especially when killing's involved.
      • Have you watched the video? No, that video is not technically feasible. It isn't even possible with current or near-term technology.

• The specifics of what is shown there aren't likely to be technically feasible for a long time.

However, if you went with a bigger drone, used an NVIDIA Jetson and half-assed cameras, and used a 3D-printed gun as the weapon instead of a tiny explosive charge, you could build a weapon that would be effective in limited circumstances. I'm guessing that if you bought all the parts retail on Amazon you'd still be out less than ten grand. So for a million dollars you could build a pretty substantial fleet of

• Except by changing all that, you've dropped the described capabilities massively. On top of making a drone that is nothing like the ones in the video, you also changed it from being smaller than your hand to requiring a camera with a large lens, a gun plus ammo plus targeting system, a radar/ultrasound 360/360 object detection system (plus a nav system to use that data), and a computer with massive storage and processing power capable of storing and searching a massive number of images and video feeds for
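The back-of-the-envelope fleet math in the exchange above can be sketched in a few lines. All figures are the commenter's guesses (roughly $10k per unit built from retail parts, a $1M budget), not real prices:

```python
# Hypothetical fleet-size estimate from the comment above.
# Both numbers are the commenter's assumptions, not actual costs.
UNIT_COST_USD = 10_000   # assumed retail parts cost per drone
BUDGET_USD = 1_000_000   # assumed total budget

fleet_size = BUDGET_USD // UNIT_COST_USD
print(fleet_size)  # 100 drones
```

A hundred units is "substantial" only in the narrow sense the parent allows: effective in limited circumstances, and nothing like the swarm in the video.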

    • by Misagon ( 1135 )

Slaughterbots is not a fictional movie in the normal sense. The short movie was made with the express intent of showing an example of the dangers of autonomous killer robots, with the aim of building public support for a ban.
Your argument therefore lacks validity.

• > Slaughterbots is not a fictional movie in the normal sense. The short movie was made with the express intent to ... build public support for a ban.

        Ah, so propaganda, then. We should DEFINITELY make policy decisions based on propaganda.

• Going to the Moon was once fictional. ( https://www.youtube.com/watch?... [youtube.com] ) Flying in the AIR was fictional once too; the Wright Bros made that happen. Just think: what we are talking about now was fictional only 60 or so years back. Fiction has a bad habit of becoming real in short order...
    • We should make policy based on a fictional movie.

      Do they realize that getting people to call them Slaughterbots will increase the military demand for the technology?

      Anyway, people picture the movie Robocop, when they should really be picturing the movie Heartbeeps [youtube.com].

• Landmines are low-tech slaughterbots, and they are hanging around in Cambodia and many other countries killing and maiming people many years after conflicts have ended. I could see a crisis where these slaughterbots linger even after wars have ended, popping up and killing people for years afterwards because they are fully autonomous.

• I think you would have a power source problem. The slaughterbots would need to find a plug to recharge every now and then.

• I think you would have a power source problem. The slaughterbots would need to find a plug to recharge every now and then.

        Well, I guess we better keep the concept of regenerative power solutions for a 21st Century autonomous design a secre...shit, now look what you made me do. Cat's out of the bag now. We're all gonna die.

      • by account_deleted ( 4530225 ) on Friday January 31, 2020 @05:18PM (#59676296)
        Comment removed based on user account deletion
• Nuclear powered with nano-technology creation engines designed to rebuild the bullets and missiles.
      • by DarkOx ( 621550 )

I think the fears here are way overblown but...

Li-ion batteries don't lose much charge just sitting on a shelf. So it's not hard to imagine a slaughterbot sitting mostly offline in some tree someplace. Suppose it's equipped with a sensor package, microcontroller, and DSP using just a trickle of current to listen for sounds in the frequencies of human speech; that triggers it to wake the killbot from its slumber. At which time it mistakes that troop of Girl Scouts for a column of enemy troops trying to sneak by in
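The dormant-killbot scenario above is at least numerically plausible. A rough standby-life estimate, using purely illustrative numbers (a 10 Ah pack and a 50 µA listening draw are assumptions, not specs for any real device):

```python
# Rough standby-life estimate for the hypothetical dormant bot above.
# All numbers are illustrative assumptions.
capacity_mah = 10_000            # assumed Li-ion pack capacity (10 Ah)
sleep_current_ma = 0.05          # assumed trickle draw of the listening DSP (50 uA)

# Hours until the sleep current alone drains the pack:
hours = capacity_mah / sleep_current_ma
years_ignoring_self_discharge = hours / (24 * 365)
print(round(years_ignoring_self_discharge, 1))  # ~22.8 years
```

In practice Li-ion self-discharge (a few percent per month) would dominate that trickle draw and cut the figure to a handful of years, which is still long enough to make the landmine comparison uncomfortable.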

• Landmines do not track you down and blow up ;) Oh wait, now they can ;)

      Just my 2 cents ;)
    • Re: (Score:3, Interesting)

      by Misagon ( 1135 )

Indeed. This is why most of the world's countries have signed the Ottawa Treaty [wikipedia.org], banning the use of anti-personnel mines.
The U.S.A. has not.

      However, in 2014, U.S. President Obama changed the U.S. anti-personnel landmine policy to follow the requirements of the Ottawa treaty - except for Korea, where the border between North and South Korea continues to be mined.
      Trump reversed Obama's decision today [yahoo.com].
      Just so you know.

      • Trump reversed Obama's decision today.

        It's awfully easy to do when congress doesn't pass actual legislation. Executive orders are intentionally whimsical.

• That is just propaganda, though. The military already didn't think they were useful except in situations like Korea, with a highly militarized, closed border.

        We don't have any places other than Korea where the military wants to use them right now. So there is no effect to the policy other than to scare people.

      • by AHuxley ( 892839 )
If the US wanted a ban, it would have passed the ban.
What any "President" can be seen doing for a while, any later "President" can change again.
It was legal for a "President" to stop for a while. It's legal for any later "President" to start again.
Want better mil laws? Pass something like a law.
  • What a great idea (Score:3, Insightful)

    by roc97007 ( 608802 ) on Friday January 31, 2020 @04:57PM (#59676220) Journal

    Autonomous slaughterbots. AI deciding who the enemy is. And let's make them self-maintaining and self-repairing to save on human casualties in the field. And wouldn't it be cool if a few of them could invade a country, dig in, and make copies of themselves?

    Wait, that sounds familiar [wikipedia.org].

  • I thought they were called killbots. Just throw wave after wave of humans at them.

  • by Alwin Barni ( 5107629 ) on Friday January 31, 2020 @05:02PM (#59676244)
    UK Novichok and Syria chemical attacks show how much some countries care about international law and restrictions.
  • by Ashthon ( 5513156 ) on Friday January 31, 2020 @05:09PM (#59676270)

    So, he's proposing we sit around doing nothing while our enemies develop low cost, highly-effective automated drones. Then, after building millions of them, they send them over to slaughter us. Meanwhile, we'll be left defenceless because Yang was opposed to advanced weaponry. Let me guess, Yang also wants nuclear disarmament so we won't even have the nuclear deterrent?

    It's been shown throughout history that if you can't defend your land, somebody will come along and take it off you. Just ask the Native Americans who tried using bows and arrows to defend against rifles and were promptly slaughtered. Superior technology is key to defence, so it's necessary we at least keep pace with the rest of the world, and that includes developing drones and anti-drone defences.

    • No (Score:4, Insightful)

      by DogDude ( 805747 ) on Friday January 31, 2020 @06:26PM (#59676562)
      No, that's not what he's proposing. If you even read the title of this article, it clearly says "Andrew Yang called for a global ban on the use of autonomous weaponry". It doesn't say anything about not developing anti-drone technology.
    • by AHuxley ( 892839 )
      That would make a fun war movie script?
How to get millions of the "low cost, highly-effective automated" bots into the USA without the US mil going on alert?
Fly them in? The few US mil jets that are ready for work 24/7 along each US coast will prevent that.
Ship? Coast guard and US customs inspections will find the bots being imported during random searches.
Make them in the USA? A few rows of production robots working on a million robots in a huge "cold war" style spy-sat-protected windowless factory.
      A
    • In thinking about it, if I were going to design automated weapons, the first, most important target I'd have for them is other automated weapons.

      Because automated weapons are what are most to be feared. Once the automated weapons have decided the outcome, there won't be much for the humans left to do other than surrender or die.

      --PM

  • by fat man's underwear ( 5713342 ) <tardeaulardeau@protonmail.com> on Friday January 31, 2020 @05:13PM (#59676280)

    Well, an artillery shell is pretty much autonomous after it leaves the muzzle... How about an ICBM? Or a cruise missile?

    Seems he's a bit late to the game here.

    • by Misagon ( 1135 )

      An artillery shell or cruise missile does not order itself to be fired, a human does.

• There are plenty of autonomous weapons systems in use. Most ship-based defense systems are automated, and they have been in use for 40 years (I know: "OK boomer").

        • Re:Autonomous ??? (Score:4, Insightful)

          by Aighearach ( 97333 ) on Friday January 31, 2020 @08:32PM (#59677004)

          They're not automated. Go to Fleet Week and take a tour of a ship.

          They have to turn two keys on the bridge to activate the semi-automated portion of the air defenses. Then they also have to depress a button. If you hold the button down, it shoots everything out of the sky that doesn't have an IFF beacon.

          • Well, uh, that's pretty much automated. It is an automated weapons system that automatically chooses a target and when to shoot and destroys it. What is your point?

            • Perchance learn to read English?

              It is an automated weapons system that automatically chooses a target and when to shoot and destroys it.

              As I said, though: You have to depress the button for it to shoot. It doesn't choose when to shoot. It shoots while the button is held down.

          • by Kjella ( 173770 )

            They're not automated. Go to Fleet Week and take a tour of a ship. They have to turn two keys on the bridge to activate the semi-automated portion of the air defenses. Then they also have to depress a button. If you hold the button down, it shoots everything out of the sky that doesn't have an IFF beacon.

Obviously you've got safeties, but you've given up the control authority whereby a human should decide to pull the trigger on each individual target. I mean, hopefully all robots eventually answer to some human authority and not Skynet, but if you just authorize a drone patrol and let the drone figure out what to shoot at itself, that's exactly the slippery slope and diffusion of responsibility people worry about. And the potential for centralization of power: if one drone takes one pilot you need many but i

            • Obviously you've got safeties, but you've given up the control authority that a human should decide to pull the trigger on each individual target.

              In this particular application, where you have a bunch of anti-ship missiles coming at you, if you want to "pull the trigger on each individual target" you might as well just scuttle the ship when you hear that hostilities have broken out.

              Did you know that a human gunner operating AAA doesn't have verification of individual targets? They spray fire in the direction of the threat. The way to stop shooting random shit by mistake is to instead have the gunner decide when to shoot, and have the computer decide
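The "two keys plus a held button, IFF-gated" mechanism described in the thread above can be sketched as a small decision function. This is a hypothetical illustration of the logic as the commenters describe it, not a model of any real CIWS interface; all names are invented:

```python
# Sketch of semi-automated engagement gating: humans arm (two keys) and
# hold the fire button; the machine only picks targets lacking an IFF
# beacon while the button is held. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Contact:
    track_id: int
    has_iff_beacon: bool

def engage_decisions(key_one: bool, key_two: bool, button_held: bool,
                     contacts: list) -> list:
    """Return the track IDs the system would engage at this instant."""
    if not (key_one and key_two and button_held):
        return []  # human authority gates whether firing is possible at all
    # Machine authority: among visible contacts, shoot anything without IFF.
    return [c.track_id for c in contacts if not c.has_iff_beacon]

contacts = [Contact(1, True), Contact(2, False), Contact(3, False)]
print(engage_decisions(True, True, True, contacts))   # [2, 3]
print(engage_decisions(True, True, False, contacts))  # []
```

Which side of the "automated" argument this supports depends on which line you focus on: the human gate (the early return) or the machine's target selection (the list comprehension).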

• From my point of view, the big risk is not that it is autonomous, but that it can be "blamed". If an "AI" weapon kills innocent people, it will be easy to diffuse the blame very broadly. That will encourage the use of AI weapons in situations where there is concern about bad publicity.

      "Unfortunately the Raptor7 AI drone destroyed a children's playground while targeting a known terrorist. It was an unfortunate tragedy, but the manufacturer has provided assurances that the training set has been update

  • by 110010001000 ( 697113 ) on Friday January 31, 2020 @05:17PM (#59676288) Homepage Journal

    Is this guy for real?

  • by wyattstorch516 ( 2624273 ) on Friday January 31, 2020 @05:25PM (#59676344)
    Who is going to protect us from the zombie hordes?
  • you really need to watch this ~ http://www.youtube.com/watch?v... [youtube.com]

  • It's Kalashnikov. You know, like the inventor of the AK-47?

    E

  • Kill all humans

    • Yes, but here we're discussing Artificial Intelligence.

      Bender is just an over-engineered pipe bending machine. Pass the butter, Bender, you're moving up in the world.

Bender is in the same situation as the humans; he'd need the help of some automated machine to actually kill all the humans.

  • When he talks about autonomous killing machines or slaughterbots, he was really referencing these things [fandom.com]. Go to the ten minute mark [youtube.com].

• Autonomous killbots are pretty far down the list of things that are going to kill humanity. I predict that, just like with all the other things that will kill humanity, we'll just hide our heads in the sand and pretend there isn't anything to worry about.
    • Anything where a few people can sit at a desk and kill anyone on Earth that they choose should be considered a problem. Even if the number of people killed is tiny compared to influenza.

      Humanity probably won't be wiped out by anything, not even global warming. Our society and culture will not survive indefinitely, if history is any indication. We do get to choose if we change and transcend into something new, or if we suffer all the way down.

• Robocop, Terminator, AI Boy, Prometheus/Covenant, and many more. Just saw a resurrected classic with evil synthetics as a theme. The nano ninjas are coming. Humans are innovative killers, derived from survival instincts: kill the competition.
  • Sounds like a fucking awesome video game or game show.
  • Obama and Trump (Score:4, Insightful)

    by OrangeTide ( 124937 ) on Saturday February 01, 2020 @01:20AM (#59677786) Homepage Journal

    Both have demonstrated that they are pro-slaughterbot.

If you're pro-Obama, then you'll need to reconcile yourself to the fact that we perform extra-judicial executions.

    If you're pro-Trump, you're probably really into this sort of thing. Enjoy!

  • Yang is living in a fantasy world. So, when China or N.Korea export "slaughterbots" to our enemies, Yang figures we should be fighting them with our protein-based "boys" [er, and girls of course, pc check]? Or, maybe even better, just allow them to take over and give everyone a basic income, like the N. Koreans do?
