EFF: Google Should Not Help the US Military Build Unaccountable AI Systems (eff.org)
The Electronic Frontier Foundation's Peter Eckersley writes: Yesterday, The New York Times reported that there is widespread unrest amongst Google's employees about the company's work on a U.S. military project called "Project Maven." Google has claimed that its work on Maven is for "non-offensive uses only," but it seems that the company is building computer vision systems to flag objects and people seen by military drones for human review. This may in some cases lead to subsequent targeting by missile strikes. EFF has been mulling the ethical implications of such contracts, and we have some advice for Google and other tech companies that are considering building military AI systems.
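Nothing public describes Maven's actual pipeline, but the pattern the summary names (detections flagged for a human analyst rather than acted on automatically) is easy to sketch. Everything below is invented for illustration: the stub detector, the class names, and the 0.5 threshold are assumptions, not known Maven details.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    frame_id: int

def detect_objects(frame_id: int) -> list[Detection]:
    # Stand-in for a real computer-vision model; returns canned output.
    return [Detection("vehicle", 0.91, frame_id),
            Detection("person", 0.47, frame_id)]

REVIEW_THRESHOLD = 0.5  # assumed cutoff, not a known Maven parameter

def flag_for_review(frame_id: int) -> list[Detection]:
    # Confident detections go to a *human* review queue; the system
    # itself takes no action. The queue is the end of this pipeline.
    return [d for d in detect_objects(frame_id)
            if d.confidence >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    for frame in range(3):
        for d in flag_for_review(frame):
            print(f"frame {d.frame_id}: {d.label} ({d.confidence:.2f}) -> human review")
```

The ethical questions that follow are largely about what happens after that queue: who reviews the flags, under what oversight, and what the review feeds into.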
The EFF lists several "starting points" that any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking:
1. Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
2. Is there a robust process for studying and mitigating the safety and geopolitical stability problems that could result from the deployment of military AI? Does this process apply before work commences, along the development pathway, and after deployment? Does it incorporate sufficient expertise to address subtle and complex technical problems? And would those leading the process have sufficient independence and authority to ensure that it can check companies' and military agencies' decisions?
3. Are the contracting agencies willing to commit to not using AI for autonomous offensive weapons? Or to ensuring that any defensive autonomous systems are carefully engineered to avoid risks of accidental harm or conflict escalation? Are present testing and formal verification methods adequate for that task?
4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate? For example, while Alphabet's AI-focused subsidiary DeepMind has committed to independent ethics review, we are not aware of similar commitments from Google itself. Given this letter, we are concerned that the internal transparency, review, and discussion of Project Maven inside Google was inadequate. Any project review process must be transparent, informed, and independent. While it remains difficult to ensure that that is the case, without such independent oversight, a project runs real risk of harm.
Screw EFF (Score:5, Funny)
Re: (Score:2, Insightful)
Well if you can get China and Russia to stop improving their own militaries, then by all means don't let the US do it. There is a big fallacy today that says if the US didn't build advanced weapons then there would be no reason for others to create their own. The world is heading towards a meltdown and I want the US to have the weapons needed to come out on top. The US already has to put up with people clamoring for total transparency in its intelligence and counter intelligence agencies. These are the same m
Re: (Score:2, Insightful)
It's not that the US develops those weapons, as much as they US gets involved in a lot of other countries. Reducing that would be a good start, but unfortunately it looks unlikely under the current administration.
Advanced weapons don't make a huge difference really. The US still has enough nukes to maintain MAD. No missile shield is reliable enough to defend against that arsenal, and the same goes for current Russian ICBMs. All this stuff about hypersonic nuclear cruise missiles and torpedo drones is largely posturing, adding nuclear warheads to technologies developed for other kinds of warfare.
Re: (Score:2)
Advanced weapons don't make a huge difference really.
Yet! At some point they will, but that day lies unforeseeably far off in our 'sci-fi future'.
The US still has enough nukes to maintain MAD. No missile shield is reliable enough to defend against that arsenal, and the same goes for current Russian ICBMs. All this stuff about hypersonic nuclear cruise missiles and torpedo drones is largely posturing, adding nuclear warheads to technologies developed for other kinds of warfare.
The US has roughly 4,000 nukes and Russia maybe a few hundred more, so in a pure numbers sense I'd agree with you: it's definitely MAD.
It's not just about absolute numbers though, as, roughly a decade ago, the US started upgrading the fuses in their W76s (a set of 8 independently targeted 100 kilotonne warheads launched on a Trident missile from a submarine) to increase their accuracy. This improvement doesn't violate the te
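For a rough sense of why fuzing and accuracy matter more than raw yield, here is a toy calculation using the textbook single-shot kill probability model, SSPK = 1 - 2^(-(LR/CEP)^2), where the blast lethal radius LR scales with the cube root of yield. The 250 m lethal radius assumed for a 100 kt warhead is an invented illustrative figure, not a sourced one.

```python
def p_within(radius_m: float, cep_m: float) -> float:
    # By definition of CEP (50% of shots land inside it, assuming
    # circular-normal errors): P(impact within r) = 1 - 2**(-(r/CEP)**2).
    return 1.0 - 2.0 ** (-((radius_m / cep_m) ** 2))

def sspk(yield_kt: float, cep_m: float, lr_100kt_m: float = 250.0) -> float:
    # Blast lethal radius scales as yield**(1/3); the 250 m reference
    # radius for 100 kt is an assumption made up for this illustration.
    lethal_radius = lr_100kt_m * (yield_kt / 100.0) ** (1.0 / 3.0)
    return p_within(lethal_radius, cep_m)

# Same 100 kt warhead; halving the effective miss distance (better fuzing
# or guidance) takes it from roughly two-in-three to near-certain:
for cep in (200.0, 100.0):
    print(f"CEP {cep:4.0f} m -> SSPK {sspk(100.0, cep):.2f}")
```

The point of the toy model: against hardened targets, accuracy improvements compound much faster than yield increases, which is why fuze upgrades are strategically significant even when warhead counts stay flat.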
Re: (Score:2)
Advanced weapons can be very useful in a non-nuclear war. Some of the stories I've heard about what US forces were capable of in the 1991 Gulf War were quite impressive. They're better now.
Re: (Score:2)
On, I believe, September 11, 1941, the USN was fully at war with Germany in the Atlantic. We didn't do well in that undeclared war, but we fought it. For much of the rest of 1941, we'd been busy violating the rights and responsibilities of neutrals, in favor of Britain. The entry of the US into the war actually disrupted the fighting, because the US stopped exporting as much war materiel.
Re: (Score:1)
The world is heading towards a meltdown and I want the US to have the weapons needed to come out on top.
Those two statements are self-contradicting. I always liked Carl Sagan's quote about the US and Soviet Union when we were worried about who had the bigger weapons:
Re: (Score:1)
A more realistic portrayal of conservatives:
"We're all working really hard to bring on rapture. How dare that pack of libtard yellow-belly commie snowflakes get in the way of our Armageddon. Let NRA bring us the Great RaptureBots like The Lord Jesus Christ wants! Stop keeping us from the grand shooting-range in the sky. And those are not harps, silly effeminate libs, but horseshoes; God's favorite game."
Hey not all of us are stupid religious zealots. Some of us just like the guns and confederate flags!
Re: (Score:2)
Such are usually libertarians.
My comments in this thread are entirely scurrilous. I have no use for the confederate flag but a couple of neighbors display them and I don’t care. We keep a gun handy in our store but I don’t carry one around. I’ve called one or two people “my special little snowflake” but they are special and unique people that I like. I am actually fairly progressive in most matters but there are some libtards out there who really take some things too far. The religious right have lost
Re: (Score:2)
You should probably start a new party, then, because the GOP was taken over by religious nutjobs in the 80s.
Not really being serious in this thread. The Republicans have been paying lip service to the religious right since the 80s. Lately it's become time to pay the piper. I have some problems with the Democrats too. If ever a suitable third party arose, I'd be right there with them.
Gov does whatever it can get away with (Score:1)
You can't effectively limit government with laws, policies, or agreements over the long term. All of western legal history, not to mention the total disfiguration of the U.S. Constitution over centuries, should make that clear. If you build destructive technology for the government, someone or some group in the government will be working overtime to make sure it gets used in the worst possible way.
Don't help the government do anything, ever. Engage your local communities. Help your neighbors. Start a busine
lost the plot (Score:2)
The entire post boils down to "if we can't do it perfectly, we should just let other countries do it".
What an insanely naive position.
Re: (Score:2)
Honestly I'm waiting for Tencent killbots. Bonus points if they're actually remotely piloted by Chinese gamers a la 'Toys'.
Re: lost the plot (Score:2)
They both suck, really, and the latter is morally dubious
By that estimation pretty much all military (and much non-military) R&D is morally dubious. All kinds of technology and knowledge have the potential to be abused.
Morality is all well and good, but pragmatism matters too. I'd rather be "morally dubious" and alive than morally virtuous and dead.
Re: (Score:1)
Moderated -1, The Truth Hurts.
Re: (Score:2)
Awww, I noticed you got a little butthurt and decided to stalk me.
That's so sweet, APK is the only other person who stalked me before!
Re: (Score:1)
I'll admit that was one of Family Guy's better, more self-aware non-sequiturs.
Let me answer those four questions (Score:3)
1. no
2. no
3. no
4. no
Just look at current international agreements and how often they are ignored.
Re:Let me answer those four questions (Score:5, Insightful)
Can these questions be answered in the affirmative for any advanced weapons system? Seems sort of an impossibly high bar they've set.
Re: (Score:2)
Correct, unilateral disarmament is the goal, because, if we stopped being such a threat, everyone would love us.
Re: (Score:2)
Can these questions be answered in the affirmative for any advanced weapons system?
Yes. Mutual restrictions on weapons work when the weapons are big, or require lots of infrastructure, and are easy to monitor.
There have been two reasonably successful examples:
1. Nukes.
2. Battleships
Battleships were restricted in the Washington Naval Treaty [wikipedia.org]. There was some cheating, but it mostly worked pretty well. But not well enough to prevent WW2.
Re: (Score:2)
AI projects can be physically small, have a computing cluster no more conspicuous than a typical cryptocoin mining or HPC operation, and can be carried out inside a mountain that's off-grid and air-gapped. Multiple legal authorities haven't managed to keep the Pirate Bay down for more than a year or so, what chance is there for a completely offline project?
Once the software is written it can be copied to thousands of different secure locations. Sure, eventually they'd need to test it on actual robots, but i
Re: (Score:1)
Nukes still exist and recently Trump has been making it very clear that they are still easily within reach should some crazy nutjob get (re-)elected.
He may not have acted on his threats yet, but it's clear that he could, and he is stupid enough that he might actually do so.
So... no... mutual restrictions even on these kinds of weapons have not worked.
Re: (Score:2)
It should also be noted that the Treaty limits on BBs resulted in the rise of the CV as the capital ship....
IOW, all that really happened as a result of the Washington Naval Treaties is that WW2 was fought with different weapons/tactics than WW1....
No (Score:2)
Because this infringes my freedom? (Score:1)
Why is the EFF involved here?
Which of my online freedoms is being infringed???
When is it acceptable to help the military? (Score:5, Interesting)
When is it acceptable to help the military? There are a lot of applications that could be used for surveillance and non-offensive purposes, but could also be used to attack or kill people. As a civilian researcher developing technology with military funding, it's not clear how the work will eventually be used.
I was involved with a project that was funded by a US military office. To remain anonymous, I won't say exactly where my funding came from or what project I was working on, but I've seen calls to fund this research from the Air Force Office of Scientific Research and also from the Army Research Laboratory. Atmospheric wind shear can be exploited by aircraft to conserve power through dynamic soaring. During the day, when the surface is being heated by solar radiation, the aircraft can fly in thermals and other areas of ascent in the planetary boundary layer, usually in the lowest 1-2 km, and exploit static soaring. Autonomous systems such as drones can use this information in planning their flight path and conserve power, which allows them to stay in flight longer and extend the missions they can carry out.
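As a toy illustration of that planning idea (not the actual funded work), the sketch below routes a drone across a small grid while minimizing net energy loss instead of distance, so the path detours through thermals when that pays off. The updraft field, the sink rate, and the cell-crossing time are all invented numbers.

```python
import heapq

# Hypothetical updraft field (m/s of rising air) over a coarse grid;
# positive cells are thermals usable for static soaring.
W = [
    [0.0, 0.2, 1.5, 0.1],
    [0.1, 1.2, 2.0, 0.0],
    [0.0, 0.3, 0.8, 0.2],
    [0.0, 0.0, 0.1, 0.0],
]
SINK = 0.7   # still-air sink rate in m/s (assumed)
STEP = 30.0  # seconds to cross one cell (assumed)

def energy_cost(cell_w: float) -> float:
    # Net altitude (stored energy) lost crossing a cell; lift offsets sink.
    # Clamped at a small positive floor so path costs stay well-defined.
    return max(0.01, (SINK - cell_w) * STEP)

def plan(start, goal):
    # Dijkstra over grid cells, minimizing total energy loss rather than
    # distance, so the route detours through updrafts when that pays off.
    rows, cols = len(W), len(W[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + energy_cost(W[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

path, cost = plan((3, 0), (0, 3))
print("path:", path, "energy lost (m of altitude):", round(cost, 1))
```

The same cost-function swap (energy for distance) is what lets a drone loiter in lift and extend its endurance, whoever ends up flying it.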
Although there are civilian uses for this technology, my work was funded by a grant from the US military. I had no role in designing the project or soliciting funding, but I was employed with funds from the grant. There are non-violent uses for this technology, even in military applications. But they can also be used to attack people.
Drones could be used to deliver supplies including food or medical supplies. Drones could be used to locate people in search-and-rescue missions. Drones could be used by the Coast Guard to intercept smugglers bringing contraband and drugs into the US. Drones could also be used to patrol the southern border of the US and would probably be quite a bit more useful than a wall. They could be used to gather surveillance of enemy combatants who may pose a risk both to US troops and civilians, to allow people time to evacuate or find shelter. None of these are violent, and many of these applications are not controversial at all. However, drones can also carry weapons and be used to attack and kill people.
As a researcher, I have no control over how my research is used by the military. I can use the results in other projects for civilian use to benefit people. A meteorologist might use drones to collect data around severe thunderstorms to improve weather forecasting and provide better warnings to people. This technology could be used to extend the flight of those drones and help gather data that can save lives. However, the research is funded by the military, and the military could use it to kill people.
Is it wrong to accept the funding and conduct research that can benefit civilians but can also be used for harm? Most technology can be used for non-violent purposes that are overwhelmingly beneficial to people. Even nuclear weapons could be used to benefit humanity if, for example, they were used to destroy a large near-Earth asteroid that might collide with Earth. As a researcher, I have no control over how the military would use the results of my work. But that work could be used for both beneficial and harmful purposes. Is it wrong to accept that funding and do research for the military? When is it acceptable to do research with military funding and when is it not? Where do you draw the line?
Re: (Score:3, Funny)
>Where do you draw the line?
Obviously the line should have been drawn in the 1960s when the precursor network to the Internet, ARPANET, was funded by the military's ARPA by diverting a million dollars from a ballistic missile defense program for its development. If only back then the military hadn't paid universities and companies to develop the technologies commonly used for the internet today, cyberwar, cyberterrorism, and cybercrime wouldn't be things now.
Also, defending yourself and your family is good (Score:4, Insightful)
You mentioned a lot of non-violent uses of technology that has been funded by the military, and military resources being used to deliver food, medical supplies, and other relief. That's all true and good. Versus violent uses, you say, which are bad.
ALSO there are countries who want to wipe us out. There are countries with the ability to kill millions of Americans. What has happened before will happen again - there will be a country who *wants* to attack us and *can*. The US response to the Japanese surprise attack at Pearl Harbor was very much violent - as it needed to be. They were bombing us - by surprise, pretending to negotiate trade agreements with us while their ships were underway to attack us. Swift and violent action to protect ourselves was the right action, and the only option.
I most certainly don't agree with every use of the US military. I AM very glad for its primary use - being a massive deterrent to anyone who might think about attacking us. You may think "no military would ever attack the United States". That's true, at the moment. But why? Why wouldn't North Korea, or Iran, Russia, or China*, send bombers to the US? Because we would crush them, that's why. The REASON we don't have to fight off an attack today is precisely because of our military capability.
That's the main use of a superpower military - making an attack on us inconceivable by simply having the *capability* to win decisively and quickly if we were attacked. That's a good thing. I don't want our country to be defenseless, a tempting target. Our capacity for overwhelming violence is a large part of why other countries don't initiate violence against us or our friends.
* The situation with China specifically is a bit more complex at the moment. Trade is important to them, and they have some significant military power. They have also noticed that they can attack us via cyber warfare and we don't treat it as an attack, we let them get away with that.
Re: (Score:1)
The US ability to wage war is a contradiction in and of itself. The US _IS_ engaged in conflicts all over the world. You're confusing things a bit though. It's called the arms race and there absolutely are rules, research or not. One core component you're overlooking is the separation of military and civilian. There are a _lot_ of companies that exist purely to service the military (Boston Dynamics for example). And yes, export controls are a thing. The fact that Google, Facebook or any other company would
Re: (Score:1)
Disclaimer - I absolutely do not agree that cyberwar is in any way equivalent to traditional war. No one dies in a DDoS. I do however agree with controls on technology, as I do with the use of military force domestically.
Well, there's cyberwar, and there's espionage. Most of what has been done has been espionage, usually for economic gain, not military gain. But a cyberwar would be, say, electronically attacking infrastructure, such that it would be more difficult to coordinate and attack in a real war, and that WOULD result in casualties that can be linked to the cyberwar.
Re:When is it acceptable to help the military? (Score:4, Insightful)
Most technology can be used for non-violent purposes that are overwhelmingly beneficial to people.
In addition, violence itself is merely another tool, one which can be put to good purpose. Military forces are important tools of public policy. They can be used to end horrific suffering and they can be used to maintain peace, by explicitly threatening anyone who would break the peace with violent consequences.
The underlying assumption of your post seems to be that military capability is an unalloyed evil. I'll grant that in an ideal world it would be completely unnecessary, but that is not the world in which we live. If we're concerned about misuse of military power, it seems to me that the armed forces already have more than enough capability to have us shaking in our boots, and it's not clear to me that adding AI to the mix (assuming the AI doesn't get out of control) significantly changes anything.
To make military forces "safe", we need to (a) ensure that they remain subject to civilian control and (b) ensure that civilian control acts responsibly. I'll grant that we seriously undermined (b) in the 2016 election, but that's a repairable problem.
The government ALREADY builds unaccountable AIs (Score:1)
We call them soldiers and policemen.
What an asinine position (Score:1)
And exactly WHO thinks China isn't working on this crap every minute of the day to undermine our Republic?
Re: (Score:1)
China doesn't need to do anything to undermine the US. They are watching a nation where one half hates the other half more than they hate any external threat. They look at the self-declared exemplar of democracy and a free press, and think, "not for us." With 1.5 billion people, they literally just need to keep manufacturing, trading, and not getting involved in multi-trillion dollar foreign adventures. The inevitable outcome is just an exercise in mathematics.
Well they better just close down the business (Score:5, Interesting)
The internet itself is based on intellectual property paid for and developed by the US military. GPS, which is at the core of many computer applications we all love and use, runs off a system originally developed for the US military... there are hundreds more examples.
Technology is just that. It can be used for multiple purposes, and very often it ends up being used in ways completely different from the original intent, meaning that technology intended for military use can end up becoming something like the world wide web, and technology not intended for military use can end up being used to take lives, e.g. chlorine gas is used widely within industry for thousands of purposes... other than gassing people in Idlib.
It's not impossible (Score:1)
Re: (Score:2)
Nonsense, some grid transformers *might* be lost is all. And the whining about the length of replacement time is based on the stupidity of scaling normal manufacturing times, which would not be the case in an emergency. Chicken Little screaming is all that is
Re: (Score:2)
That is speculation with some facts of science mixed in.
There is a positive side to this (Score:1)
At least UBER is not doing the research on target acquisition.
Can there be what? (Score:2)
4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate?
What in the world are they smoking (and can I get some, it must be goooood shiiiit)? In what reality do they believe that the design of military systems is subject to veto from a non-democratically-accountable entity? From where does this board derive any mandate to be making public policy?
I'm not against the goal here of having some ethics review. But there's a large gap between 'there should be an ethics board' and 'some dudes in Silicon Valley self-appointed themselves to veto the decisions of our elect
Without These Systems--We End. But.. (Score:3, Insightful)
AI driven defensive and offensive weapons systems are crucial to the survival of any power in the future world. We need to redouble efforts into making them more efficient. It's as simple as this: if we don't, they will, and we will be lost in what little time is left for mankind to be written into any history books.
That said, we could focus hard on solving the problems of differentiating between legitimate and illegitimate targets. We could focus on systems to save lives and win the hearts and minds of local populations. The only way an enemy is truly defeated is if you either kill them all (which is possible) or win over their hearts and minds (which is harder).
Above all, it would be extremely beneficial to focus on non-lethal weapons systems. For example, small drones with tranquilizer darts, or slime bombs that make an area so slick that enemy troops cannot traverse it, enabling a win by maneuver. Catch enemy soldiers with nets... Whatever it is--war technologies require extreme innovation and creativity, be they lethal or not. The non-lethal approaches add the advantages of:
1. Capturing provides people to interrogate, leading to information that is key to more wins.
2. Non-harm is far more effective at winning the hearts and minds of an enemy.
3. Non-harm is far better for Public Relations.
4. Non-harm is morally superior, when and where it is reasonably possible.
I disagree with EFF for once (Score:2)
The US government is going to find someone to help them with AI object recognition and target assistance. If it's not a new tech titan, it will be an established defense contractor. A better implementation is safer for everyone. Both accuracy and speed of response are important in weapons systems.
As the AI becomes less effective, the risk of bad outcomes increases: collateral damage, misidentified innocents, and missed opportunities on real targets.
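That trade-off is just threshold-setting on an imperfect classifier. A tiny sketch with fabricated scores and ground-truth labels makes it concrete: lowering the threshold misidentifies innocents (false positives), while raising it misses real targets (false negatives).

```python
# Hypothetical detector scores with ground truth: True = real target,
# False = civilian object. All numbers are fabricated for illustration.
scored = [
    (0.95, True), (0.90, True), (0.82, False), (0.75, True),
    (0.60, False), (0.55, True), (0.40, False), (0.20, False),
]

def confusion(threshold: float):
    # Count false positives (flagged civilians) and false negatives
    # (unflagged real targets) at a given decision threshold.
    fp = sum(1 for s, real in scored if s >= threshold and not real)
    fn = sum(1 for s, real in scored if s < threshold and real)
    return fp, fn

for t in (0.3, 0.5, 0.7, 0.9):
    fp, fn = confusion(t)
    print(f"threshold {t:.1f}: {fp} misidentified innocents, {fn} missed targets")
```

A better model shrinks both error counts at once; a worse one forces an uglier choice between them, which is the parent's point about why implementation quality matters.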
While I firmly believe that automating kill authority is ve
Unaccountable AI Systems? (Score:2)
The military still hasn't solved the problem of handing weapons to people like William Calley Jr. Robots are a ways down on my list of concerns.
Congratulations, EFF! (Score:2)
EFF used to be about protecting technological freedom. Now they're worried that users of technology have too much freedom. This means that at some point in the past, EFF won! (Slashdot, why didn't you report on this earlier?)
It's going to happen regardless (Score:2)
Look, this is the future of warfare. Drag your heels on that one as much as you like and find yourself in the same position as the old fleet admirals that felt big battleships were the way to go.
The airplane and the carrier killed the battleship. It's done. It's an inferior weapons platform. If you had a choice going into war between having a bunch of battleships or a bunch of carriers with planes, trained pilots etc... you're going for the carriers or you're going to lose horribly.
Same deal with the AI systems. If
Re: (Score:2)
After our past disagreements, it feels odd to see something you wrote that I agree with so much.
Re: (Score:2)
You can't be wrong all the time. ;)
Break off from Google (Score:1)
Google has way too much power over global civilization. But everyone would rather complain about it than actually do what needs to be done to help solve that, namely:
Stop using Google products.
In other words, switch to a privacy-friendly e-mail host. Block Google trackers and scripts. Don't use Google Drive. Use a privacy-friendly search provider like DuckDuckGo. Don't use Chrome. If you absolutely need to log into YouTube (if you're a producer, for example), keep that account separate from everything else.