Andrew Yang Warns Against 'Slaughterbots' and Urges Global Ban on Autonomous Weaponry (venturebeat.com) 99
Ahead of the Democratic presidential primaries that begin Monday with the Iowa caucuses, presidential candidate Andrew Yang called for a global ban on the use of autonomous weaponry. In a tweet, Yang called for U.S. leadership to implement a ban on automated killing machines, then shared a link to a Future of Life Institute video titled "Slaughterbots," which offers a cautionary, dystopian vision of the future. From a report: [...] In the video, the fictional CEO promises the ability to target and wipe out "the bad guys," people with "evil ideology," or even entire cities. The video then imagines partisan political warfare breaking out: drones are used to assassinate 11 U.S. Senators of one political party at the U.S. Capitol building. In the wake of the hypothetical attack, even after assessment by the intelligence community it is unclear what state, group, or individual carried it out, and in the confusion calls for war and violent crime ratchet up.
There is some precedent in reality. The Russian company Kalishnakov is developing a kamikaze drone, and, though the craft was most likely piloted by a human, in 2018 in Venezuela the world saw one of the first attempted political assassinations by drone in history. DARPA is developing ways for swarms of drones to take part in military missions, and the U.S. Department of Defense has developed hardware to guard against weaponized drone attacks.
The way to do it (Score:4, Insightful)
Re: (Score:3)
Re: (Score:3)
It's going to be possible right about when I hit the "damn kids, get off my lawn" stage of my life.
Exercise the will to avoid it? I'm going to be an early adopter.
Re:The way to do it (Score:5, Interesting)
Right. We should make policy based on a fictional movie.
Right. Because our society hasn't turned George Orwell into a fucking prophet.
I often feel storytellers write this kind of dystopian shit to desensitize the masses for when the inevitable happens. I'll let you decide whether that was the intent of 1984 or not. Either way, it's pure ignorance to assume fantasy cannot become reality.
Re: (Score:1)
Orwell was a historian, not a prophet. The setting may have been unique, but the story was a recording of current events
You need to stop reading Sci-Fi (Score:2)
The best time to solve a problem is before it happens. Especially when killing's involved.
Re: (Score:2)
Have you watched the video? That video is not technically feasible; it isn't even possible with current or near-term technology.
Re: (Score:2)
People don't really comprehend how few "friendly fire" incidents the US has, or how big a problem those incidents are for most militaries.
The military doesn't want things to be both autonomous and to make targeting decisions. They want something that can be autonomous, and then ask permission.
That's largely what the existing US drones do. The little ones that small countries have are usually more like radio control airplanes.
On land the problem is that the technology isn't good enough to make them able to n
Re: (Score:2)
The cameras that small with that high quality do not exist.
The real-time sensors to perfectly detect the entire 3D environment do not exist.
The flight systems do not exist.
That level of facial recognition capability does not exist.
The online search and identity matching capability does not exist.
The semantic parsing capability does not exist.
The explosives (stable, with less than 1 cubic inch volume capable of blowing a hole more than 8" across through an exterior brick wall) do not exist.
I don't know w
Re: (Score:3)
The specifics of what is shown there isn't likely to be technically feasible for a long time.
However, if you went with a bigger drone, used an NVIDIA Jetson and half-assed cameras, and used a 3D-printed gun as the weapon instead of a tiny explosive charge, you could build a weapon that would be effective in limited circumstances. I'm guessing that if you bought all the parts retail on Amazon you'd still be out less than ten grand. So for a million dollars you could build a pretty substantial fleet of
Re: (Score:2)
Except by changing all that, you've dropped the described capabilities massively. On top of making a drone that is nothing like the ones in the video, you also changed it from being smaller than your hand to requiring a camera with a large lens, a gun plus ammo plus targeting system, a radar/ultrasound 360/360 object detection system (plus a nav system to use that data), and a computer with massive storage and processing power capable of storing and searching a massive number of images and video feeds for
Re: (Score:2)
Slaughterbots is not a fictional movie in the normal sense. The short film was made with the express intent of showing an example of the dangers of autonomous killer robots, with the aim of building support for a ban.
Your argument therefore lacks validity.
So not fiction then, but actually (Score:2)
> Slaughterbots is not a fictional movie in the normal sense. The short film was made with the express intent of ... building support for a ban.
Ah, so propaganda, then. We should DEFINITELY make policy decisions based on propaganda.
Re: (Score:2)
Re: (Score:2)
We should make policy based on a fictional movie.
Do they realize that getting people to call them Slaughterbots will increase the military demand for the technology?
Anyway, people picture the movie Robocop, when they should really be picturing the movie Heartbeeps [youtube.com].
Re: (Score:1)
We would actually probably solve a lot of the world's violent crime problems if we could.
So you're proposing that the reduction of violent crime is reliant upon the suppression of one of the key components necessary to the survival of the species. You drink much?
Re:Not sure "slaughterbots" would be worse (Score:4, Interesting)
Authoritarian, testosterone-doped men kill other people (typically women) all over the world every day.
Would slaughter bots be worse?
You're asking the wrong question.
If they are, what exactly are YOU going to do about it, human?
Fucking KILLS ME (yes I see the irony) that no one is apparently smart enough to understand the REAL problem with autonomous killing machines.
Re: (Score:2)
Fucking KILLS ME (yes I see the irony) that no one is apparently smart enough to understand the REAL problem with autonomous killing machines.
That they run out of fuel fairly quickly?
Re: (Score:2)
"If an AI isn't smart enough to employ a weapon all by itself, you can't trust it very much in battle. If the AI is smart enough to employ a weapon all by itself, you can't trust it at all."
Re: (Score:2)
Isn't the point of an AI that's not trusted to employ a weapon all by itself that you don't need to trust it in battle? You trust the soldier guiding it and just let it do the actual fighting.
Not unlike war-elephants or other such war-beasts of old - you don't have to trust the animal not to attack your own army, because you trust its handlers to steer its destruction appropriately.
Re: (Score:1)
Authoritarian, testosterone-doped men kill other people (typically women) all over the world every day.
Would slaughter bots be worse?
Women kill millions of people every year. Not sure a chemical imbalance is the root cause there either.
Re:Not sure "slaughterbots" would be worse (Score:5, Interesting)
Warfare is always going to be brutal, but technology has allowed it to become a lot more humane in many regards. Sure, it makes us more deadly, but historically the winning army didn't always keep captives, particularly among the rank and file, and it wasn't uncommon for them to intentionally sack a city or two along the way. My only worry is that if we could create that much precision, how much restraint would we have in using it? Right now we hesitate to use a drone strike because it might take out one or two innocent bystanders, and prior to that, when we only had cruise missiles, we were even more hesitant to use them against anything short of a military installation because they could easily kill hundreds of civilians if they were slightly off. But if you have a robot that will only kill its intended target and no one else, will anyone feel bad about using it or complain when it is used?
Re: (Score:2, Funny)
Besides, everyone knows that slaughterbots have a set kill limit. Just send wave after wave of humans at them and they will stop when they reach their maximum kill limit....
Re: (Score:2)
And in actual reality, men typically kill men because the ones fighting wars are mostly men and that is where most of the killing is happening. Not saying this is good in any way, but you cannot build anything good on a lie, so I would advise you to stop using that particular one.
Engaging the enemy (Score:2)
Men fight men in wars because we cannot afford to lose men to engaging the enemy.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:1)
There's no end to what could trigger a cell/organisation/nation/whatever to strike with drones in that fashion.
That is one of the points the video tries to make
Re: (Score:2)
How? People always say these fantastical things but the "bots" we have are like Spot from Boston Dynamics. It runs out of batteries in 12 minutes just walking around. You guys live in a fantasy world.
Re: (Score:1)
Think of an automated "Wall" system around a nation.
Power is a given. Reloading is a truck crew as needed.
Overlapping arcs so no zone is down for reload, cleaning, or repair.
No long-term stress on what were once human tower crews.
The fantasy might be the power supply in a moving bot.
Don't need to move? It's 24/7 detection and support for that nation.
Re: (Score:2)
Landmines are low-tech slaughterbots (Score:1, Interesting)
Landmines are low-tech slaughterbots, and they are still lying around in Cambodia and many other countries, killing and maiming people many years after conflicts have ended. I could see a crisis where these slaughterbots hang around even after wars have ended, popping up and killing people for years afterwards, because they are fully autonomous.
Re: (Score:2)
I think you would have a power source problem. The slaughter bots would need to find a plug to recharge every now and then.
Re: (Score:2)
I think you would have a power source problem. The slaughter bots would need to find a plug to recharge every now and then.
Well, I guess we better keep the concept of regenerative power solutions for a 21st Century autonomous design a secre...shit, now look what you made me do. Cat's out of the bag now. We're all gonna die.
Comment removed (Score:5, Funny)
Re: (Score:1)
If the Brits get their way their phones will have positive ground
Re: (Score:2)
I saw that movie. After we tried to block out the sun to stop them the robots continued to fight even more vigorously as they ran on their battery reserves. The robots enslaved humans to build and maintain the fusion reactors that powered them after the war ended.
Re: (Score:2)
Re: (Score:3)
I think the fears here are way overblown but...
Li-ion batteries don't lose much charge just sitting on a shelf. So it's not hard to imagine a slaughterbot mostly offline, sitting in some tree someplace. Suppose it's equipped with a sensor package, microcontroller, and DSP using just a trickle of current to listen for sounds in the frequencies of human speech that trigger waking the killbot from its slumber. At which time it mistakes that troop of Girl Scouts for a column of enemy troops trying to sneak by in
Re: Landmines are low-tech slaughterbots (Score:1)
Re: (Score:2)
Just my 2 cents
Re: (Score:3, Interesting)
Indeed. This is why most of the world's countries have signed the Ottawa Treaty [wikipedia.org], banning the use of anti-personnel mines.
U.S.A. has not.
However, in 2014, U.S. President Obama changed the U.S. anti-personnel landmine policy to follow the requirements of the Ottawa treaty - except for Korea, where the border between North and South Korea continues to be mined.
Trump reversed Obama's decision today [yahoo.com].
Just so you know.
Re: (Score:2)
Trump reversed Obama's decision today.
It's awfully easy to do when congress doesn't pass actual legislation. Executive orders are intentionally whimsical.
Re: (Score:2)
That is just propaganda, though. The military already didn't think they were useful except in situations like Korea, with a highly militarized, closed border.
We don't have any places other than Korea where the military wants to use them right now. So there is no effect to the policy other than to scare people.
Re: (Score:1)
What any "President" can be seen doing for a while, any later "President" can change again.
It was legal for a "President" to stop for a while. It's legal for any later "President" to start again.
Want better mil laws? Pass something like a law.
What a great idea (Score:3, Insightful)
Autonomous slaughterbots. AI deciding who the enemy is. And let's make them self-maintaining and self-repairing to save on human casualties in the field. And wouldn't it be cool if a few of them could invade a country, dig in, and make copies of themselves?
Wait, that sounds familiar [wikipedia.org].
Pre set kill limit (Score:2)
I thought they were called killbots. Just throw wave after wave of humans at them.
for sure the problem would be solved (Score:5, Insightful)
Re:for sure the problem would be solved (Score:3)
Luckily those are also the countries with the highest rates of friendly fire casualties, so they're unlikely to try to build autonomous killing machines.
How to lose your country by Andrew Yang (Score:3)
So, he's proposing we sit around doing nothing while our enemies develop low cost, highly-effective automated drones. Then, after building millions of them, they send them over to slaughter us. Meanwhile, we'll be left defenceless because Yang was opposed to advanced weaponry. Let me guess, Yang also wants nuclear disarmament so we won't even have the nuclear deterrent?
It's been shown throughout history that if you can't defend your land, somebody will come along and take it off you. Just ask the Native Americans who tried using bows and arrows to defend against rifles and were promptly slaughtered. Superior technology is key to defence, so it's necessary we at least keep pace with the rest of the world, and that includes developing drones and anti-drone defences.
No (Score:4, Insightful)
Re: (Score:1)
How do you get millions of the "low cost, highly-effective automated" bots into the USA without the US mil going on alert?
Fly them in? The few US mil jets that are ready for work 24/7 along each US coast will prevent that.
Ship? Coast guard and US customs inspections will find the bots being imported during random searches.
Make them in the USA? A few rows of production robots working on a million robots in a huge "cold war"-style, spy-sat-protected, windowless factory.
A
Highest priority targets of AW is other AW (Score:2)
In thinking about it, if I were going to design automated weapons, the first, most important target I'd have for them is other automated weapons.
Because automated weapons are what are most to be feared. Once the automated weapons have decided the outcome, there won't be much for the humans left to do other than surrender or die.
--PM
Autonomous ??? (Score:3)
Well, an artillery shell is pretty much autonomous after it leaves the muzzle... How about an ICBM? Or a cruise missile?
Seems he's a bit late to the game here.
Re: (Score:2)
An artillery shell or cruise missile does not order itself to be fired, a human does.
Re: (Score:2)
There are plenty of autonomous weapons systems in use. Most ship based defense systems are automated and they have been in use for 40 years (I know: "OK boomer")
Re:Autonomous ??? (Score:4, Insightful)
They're not automated. Go to Fleet Week and take a tour of a ship.
They have to turn two keys on the bridge to activate the semi-automated portion of the air defenses. Then they also have to depress a button. If you hold the button down, it shoots everything out of the sky that doesn't have an IFF beacon.
Re: (Score:2)
Well, uh, that's pretty much automated. It is an automated weapons system that automatically chooses a target and when to shoot and destroys it. What is your point?
Re: (Score:2)
Perchance learn to read English?
It is an automated weapons system that automatically chooses a target and when to shoot and destroys it.
As I said, though: You have to depress the button for it to shoot. It doesn't choose when to shoot. It shoots while the button is held down.
Re: (Score:2)
They're not automated. Go to Fleet Week and take a tour of a ship. They have to turn two keys on the bridge to activate the semi-automated portion of the air defenses. Then they also have to depress a button. If you hold the button down, it shoots everything out of the sky that doesn't have an IFF beacon.
Obviously you've got safeties, but you've given up the principle that a human should decide to pull the trigger on each individual target. I mean hopefully all robots eventually answer to some human authority and not Skynet, but if you just authorize a drone patrol and let the drone figure out what to shoot at itself, that's exactly the slippery slope and pulverization of responsibility people worry about. And the potential for centralization of power, if one drone takes one pilot you need many but i
Re: (Score:3)
Obviously you've got safeties, but you've given up the principle that a human should decide to pull the trigger on each individual target.
In this particular application, where you have a bunch of anti-ship missiles coming at you, if you want to "pull the trigger on each individual target" you might as well just scuttle the ship when you hear that hostilities have broken out.
Did you know that a human gunner operating AAA doesn't have verification of individual targets? They spray fire in the direction of the threat. The way to stop shooting random shit by mistake is to instead have the gunner decide when to shoot, and have the computer decide
Re: (Score:2)
From my point of view, the big risk is not that it is autonomous, but that it can be "blamed". If an "AI" weapon kills innocent people it will be easy to diffuse the blame very broadly. That will encourage the use of AI weapons in situations where there is a concern about bad publicity.
"Unfortunately the Raptor7 AI drone destroyed a children's playground while targeting a known terrorist. It was an unfortunate tragedy, but the manufacturer has provided assurances that the training set has been update
"Future of Life Institute"? (Score:3)
Is this guy for real?
We need slaughterbots (Score:3)
Re:Andrew Yang suffers from excellent foresight (Score:5, Insightful)
And yet automation and AI (which doesn't exist) hasn't led to a devaluing of human jobs. More tech-bro scifi fantasy.
Re: The role of jobs in the economy (Score:2)
McDonald's Annual Number of Employees
2018 210,000
2017 235,000
2016 375,000
2015 420,000
2014 420,000
2013 440,000
McDonald's Annual Gross Profit
(Millions of US $)
2018 $10,786
2017 $10,621
2016 $10,205
2015 $9,789
2014 $10,456
2013 $10,903
Source:
https://www.statista.com/statistics/819966/mcdonald-s-number-of-employees/
https://www.macrotrends.net/stocks/charts/MCD/mcdonalds/gross-profit
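A quick sanity check on the figures above (numbers copied from this comment, nothing else assumed): headcount roughly halved between 2013 and 2018 while gross profit stayed essentially flat.

```python
# Change in McDonald's headcount vs. gross profit, 2013 -> 2018
# (figures copied from the comment above; profit in millions USD)
employees = {2013: 440_000, 2018: 210_000}
gross_profit_m = {2013: 10_903, 2018: 10_786}

emp_change = (employees[2018] - employees[2013]) / employees[2013]
gp_change = (gross_profit_m[2018] - gross_profit_m[2013]) / gross_profit_m[2013]

print(f"employees: {emp_change:+.0%}")    # roughly -52%
print(f"gross profit: {gp_change:+.1%}")  # roughly -1%
```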
Re: (Score:2)
That's nice. So a McDonald's job has been devalued or something? So confusing. We have record-low unemployment.
Re: (Score:2)
Hey, when you're dumb, math is hard. And yes, we have record underemployment. Let's celebrate this amazing achievement!
Re: (Score:2)
Amazon Annual Number of Employees
2018 647,500
2017 566,000
2016 341,400
2015 230,800
2014 154,100
2013 117,300
Amazon Annual Net Income
(Millions of US $)
2018 $10,073
2017 $3,033
2016 $2,371
2015 $596
2014 $-241
2013 $274
Looks OK, right?
Except profit per Amazon employee is now 6.65 times higher than in 2013.
Think about what that means for the hollowing out of the rest of the retail and distribution sector.
And this is BEFORE greater warehouse automation (https://www.youtube.com/watch?v=r
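The 6.65x figure in the parent comment can be checked directly (numbers copied from the comment; this is plain arithmetic, nothing else assumed):

```python
# Amazon net income per employee, 2013 vs. 2018
# (figures copied from the comment above; income in millions USD)
net_income_m = {2013: 274, 2018: 10_073}
employees = {2013: 117_300, 2018: 647_500}

per_employee = {y: net_income_m[y] * 1e6 / employees[y] for y in (2013, 2018)}
ratio = per_employee[2018] / per_employee[2013]

print(f"2013: ${per_employee[2013]:,.0f} per employee")
print(f"2018: ${per_employee[2018]:,.0f} per employee")
print(f"ratio: {ratio:.2f}x")  # about 6.66x, in line with the parent's 6.65
```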
Re: (Score:2)
That's nice. What does that have to do with anything? Jesus. Amazon didn't win through AI and automation.
Re: (Score:2)
I think that the problem is that Yang is ahead of his time, and the gravity of his message isn't sinking in yet. He's not going to get as many votes from predicting that self driving AI will put truck drivers out of work, as someone else will 8 years from now when they promise to somehow "bring those jobs back" once they're already gone.
Re: (Score:1)
Tech-bros already told me self driving exists now. So now it will be 8 years from now? Make up your mind.
Re: Andrew Yang suffers from excellent foresight (Score:1)
Re: (Score:2)
All tech bros are ahead of their time. If he didn't know how to give that speech, he wouldn't be where he is today.
It doesn't imply that his ideas are any good. Some of them are. But they're all "ahead of their time," good ideas and bad ideas alike.
Re: (Score:2)
In the US unrestricted flow of illegal aliens "devalues human jobs" on the bottom end of the employment market. This is well established with studies. And yet, all of the current candidates, including Yang, are in favor of completely removing the Southern border.
Obligatory slaughter bots video (Score:2)
you really need to watch this ~ http://www.youtube.com/watch?v... [youtube.com]
Re: Autonomous Weaponry is IMMENSELY BENEFICIAL!!! (Score:1)
Re: Autonomous Weaponry is IMMENSELY BENEFICIAL!! (Score:1)
Russian company "Kalishnakov" (Score:1)
It's Kalashnikov. You know, like the inventor of the AK-47?
E
Re: (Score:2)
It's Kalashnikov
No, it's a bunch of Cyrillic letters that we can't print on /.
What would Bender do? (Score:1)
Kill all humans
Re: (Score:2)
Yes, but here we're discussing Artificial Intelligence.
Bender is just an over-engineered pipe bending machine. Pass the butter, Bender, you're moving up in the world.
Bender is in same situation as the humans; he'd need the help of some automated machine to actually kill all the humans.
He's been watching Gundam (Score:2)
When he talks about autonomous killing machines or slaughterbots, he was really referencing these things [fandom.com]. Go to the ten minute mark [youtube.com].
Pretty Far Down The List (Score:2)
Re: (Score:1)
Anything where a few people can sit at a desk and kill anyone on Earth that they choose should be considered a problem. Even if the number of people killed is tiny compared to influenza.
Humanity probably won't be wiped out by anything, not even global warming. Our society and culture will not survive indefinitely, if history is any indication. We do get to choose if we change and transcend into something new, or if we suffer all the way down.
Synthetic fear n loathing (Score:2)
Slaughterbots!!! (Score:1)
Obama and Trump (Score:4, Insightful)
Both have demonstrated that they are pro-slaughterbot.
If you're pro-Obama, then you'll need to reconcile that we perform extra-judicial executions.
If you're pro-Trump, you're probably really into this sort of thing. Enjoy!
Nice Fantasy Life (Score:1)