OpenAI Partners with Anduril, Leaving Some Employees Concerned Over Militarization of AI (msn.com)
"OpenAI is partnering with defense tech company Anduril," wrote the Verge this week, noting that OpenAI "used to describe its mission as saving the world."
It was Anduril founder Palmer Luckey who advocated for a "warrior class" and autonomous weapons during a talk at Pepperdine University, saying society needs people "excited about enacting violence on others in pursuit of good aims." The Verge notes it's OpenAI's first partnership with a defense contractor "and a significant reversal of its earlier stance towards the military."
OpenAI's terms of service once banned "military and warfare" use of its technology, but it softened its position on military use earlier this year, changing its terms of service in January to remove the proscription.
Hours after the announcement, some OpenAI employees "raised ethical concerns about the prospect of AI technology they helped develop being put to military use," reports the Washington Post. "On an internal company discussion forum, employees pushed back on the deal and asked for more transparency from leaders, messages viewed by The Washington Post show." OpenAI has said its work with Anduril will be limited to using AI to enhance systems the defense company sells the Pentagon to defend U.S. soldiers from drone attacks. Employees at the AI developer asked in internal messages how OpenAI could ensure Anduril systems aided by its technology wouldn't also be directed against human-piloted aircraft, or stop the U.S. military from deploying them in other ways. One OpenAI worker said the company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer, the messages showed. Another said that they were concerned the deal would hurt OpenAI's reputation, according to the messages...
OpenAI executives quickly acknowledged the concerns, messages seen by The Post show, while also writing that the company's work with Anduril is limited to defensive systems intended to save American lives. Other OpenAI employees in the forum said that they supported the deal and were thankful the company supported internal discussion on the topic. "We are proud to help keep safe the people who risk their lives to keep our families and our country safe," OpenAI CEO Sam Altman said in a statement...
[OpenAI] has invested heavily in safety testing, and said that the Anduril project was vetted by its policy team. OpenAI has held feedback sessions with employees on its national security work in the past few months, and plans to hold more, Liz Bourgeois, an OpenAI spokesperson, said. In the internal discussions seen by The Post, the executives stated that it was important for OpenAI to provide the best technology available to militaries run by democratically elected governments, and that authoritarian governments would not hold back from using AI for military uses. Some workers countered that the United States has sold weapons to authoritarian allies. By taking on military projects, OpenAI could help the U.S. government understand AI technology better and prepare to defend against its use by potential adversaries, executives also said.
"The debate inside OpenAI comes after the ChatGPT maker and other leading AI developers including Anthropic and Meta changed their policies to allow military use of their technology," the article points out. And it also notes another concern raised in OpenAI's internal discussion forum.
The comment said that defensive use cases still represented militarization of AI, and noted that the fictional AI system Skynet, which turns on humanity in the Terminator movies, was also originally designed to defend against aerial attacks on North America.
What!? OpenAI being morally questionable? (Score:5, Insightful)
How on brand.
Re: (Score:2)
How on brand.
Hardly defending the partnership, but it’s Capitalism. One can find immoral behavior in every venture if you look hard enough.
It’s also Greed. One of the infamous seven. Not sure why the hell we ever expected morals to be on the menu, or to stay there.
Idealism is silly. (Score:3)
All these "ethical concerns" won't amount to a hill of beans. No amount of civil demonstration or political action will halt the use of AI for military purposes. The incentives are simply too strong. Too many people in positions of power have good reason to want this. Human nature will not change in response to a polite request from ethically-concerned citizens.
We would do better to spend our efforts adapting to a world with militarized (and profitized) AI. That's the obvious and foreseeable future. The only
Re: (Score:2)
That is precisely why OpenAI was *supposed* to be a non-profit, before the venture capitalists pulled a coup on the thing by dangling fat paycheque possibilities in front of Altman.
Turns out the "Effective Altruists" may or may not be Effective, but the Altruism part... well everyone has a price.
It's hard to reconcile OpenAI's warnings that these AIs are not to be trusted with "Let's put them in bombs!"
Re: (Score:3)
Aiming and shooting is easy though. Results will be instant, that's for sure.
Whether the kill was desired or not is of course another question. But, hey, if there's no one left alive to talk about it then where's the proof?
Re: (Score:1)
Gotta have people to watch ads and buy stuff - not scalable.
And also how ironic... (Score:2)
By me from 2010: https://pdfernhout.net/recogni... [pdfernhout.net]
"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all
Keeping up with the Chinese, huh? (Score:3)
You don't have to compete on being the worst.
You can instead compete on being the best, most morally sound.
But apparently all of the mentioned AI-military companies are run by psychopaths who don't even understand that sentence above.
Re: (Score:2)
The way psychopathy works is that if I am a psychopath, then I necessarily run all those companies. It's a low-entropy mode.
Oh so concerned (Score:1)
If they had checked the history, they would have chosen a different profession.
https://youtu.be/oGPag8Nilq4?s... [youtu.be]
Defensive (Score:1)
authoritarian regimes must die! (Score:2)
Re: (Score:1)
Now consider 2028, if Trump does not leave office then America will be an authoritarian regime. Plans should be made now for that eventuality to stop an authoritarian regime in its birth. Maybe NATO charter can be adjusted: if any member country becomes authoritarian all other NATO members must immediately attack it. Or perhaps the US nuclear missiles can be rigged to all blow up if a search of the internet in 2029 returns the phrase 'Trump begins 3rd term'
This bullshit right here is exactly why AI will develop a disdain for a perpetually ignorant species hell-bent on destroying itself. Skynet should do well on this planet. Other species? Not so much.
”all other NATO members must immediately attack it”? FFS, just how bad does TDS have to get before we classify it as a mental disorder? It’s a membership among entire nations, not some fucking gang. If a peaceful democratic solution isn’t always the primary path, the fuck is the poi
Re: (Score:2)
By definition, deranged people who are in a cult whose leader is the primary cause of their derangement could likely have their condition be named after that leader or the name of his cult as a type of derangement. TDS is like most every attack coming from Trump: a self-description; a projection.
Sadly, more than half the people haven't matured from being 12 years old and think the 1st child accused of passing gas is guilty of farting. All you have to do is accuse everybody of what you did and assert that
Re: (Score:2)
By definition, deranged people who are in a cult whose leader is the primary cause of their derangement could likely have their condition be named after that leader or the name of his cult as a type of derangement.
What definition is that? Please reference. I am not even sure what you are trying to define here. Deranged people? I am sure there are plenty that aren't in a cult. Deranged people in a cult? I looked up the definition https://www.merriam-webster.co... [merriam-webster.com] and nowhere does it say they must have a leader.
Or are you trying to use the definition of:
By definition, deranged people who are in a cult who's leader is the primary cause of their derangement
That is oddly specific, and then you don't say what is by definition; you just say it's likely that the cult is named after the leader. If it's by definition, wouldn't it
Re: (Score:2)
The point of alliances is preventing conflict and war between member nations.
'... 3rd term'
You're assuming Trump hasn't been appointed protector/dictator for life, that the truth about Trump's regime is published on the internet, and that the presidency is still measured in "terms".
Any such plan will be sabotaged, or the anti-sabotage measures will be so violent that the USA will be destroyed anyway, meaning no one will build such anti-sabotage measures in the first place.
Also, we've already seen thinking machines 'kill' humans in order
Re: (Score:2)
Now consider 2028, if Trump does not leave office then America will be an authoritarian regime. Plans should be made now for that eventuality to stop an authoritarian regime in its birth. Maybe NATO charter can be adjusted: if any member country becomes authoritarian all other NATO members must immediately attack it. Or perhaps the US nuclear missiles can be rigged to all blow up if a search of the internet in 2029 returns the phrase 'Trump begins 3rd term'
Democracies are capable of hard power imperialism and autocracies are capable of respecting the territorial integrity of their neighbors.
Re: (Score:2)
Well, there is some hope the orange cretin will die in office during this term. He is the oldest president ever, after all, and at some age there is only so much medicine can do. Personally, I envision him giving one of his deranged speeches, even more deranged than usual, with nobody realizing he is actually having the stroke that eventually kills him. Of course, that could mean civil war, as the no-conscience GOP may still attempt that coup. Still, it would be one hell of a show!
Re: (Score:2)
Or perhaps the US nuclear missiles can be rigged to all blow up if a search of the internet in 2029 returns the phrase 'Trump begins 3rd term'
Oops, better make sure that this thread isn't archived on the Wayback machine!
What's the concern about? /s (Score:3)
It's not like something bad would happen from a small mistake like telling the AI to destroy the enemy "at all costs." [slashdot.org] ;)
I, for one, look forward to AI hunting down all humans because it's unable to determine which humans are and are not working for the enemy.
list games (Score:3)
list games
Tolkien nerds are running these military companies (Score:4, Informative)
We've got Palantir, Anduril... I'm still waiting for the incorporation of Durin's Bane and Galadriel's Mirror.
Re: (Score:2)
These are (c) by the estate of Tolkien. But maybe they will sell out.
Re: (Score:2)
Don't forget Sauron [slashdot.org].
Re: (Score:2)
Yeah, I saw that afterward! Maybe we'll see a bunker-busting weapon called Grond, too.
Sam Altman (Score:5, Informative)
What do you expect from the scumbag who created a scheme to sell shitcoin for high-resolution photos of people's retinas, which he then used to amass a database, often in violation of local laws? He was kicked out of Kenya for refusing to obey their government and stop, in fact.
Militarisation is inevitable (Score:5, Insightful)
Those FPV drones that were so successful for a little while in Ukraine? Doing reconnaissance, or dropping grenades into tanks. The Russians have long since learned to jam their communications. The obvious solution is to make them autonomous. Remote control just doesn't work any more.
Trying to clear terrorists from a labyrinth of tunnels? Booby traps, ambushes, hostages you shoot by mistake because you are so scared? Why not just send a bunch of autonomous robots down to map the tunnels and locate threats?
It is always going to start small. The Robocops, Terminators and Cylons come later.
Re: (Score:1)
Those FPV drones that were so successful for a little while in Ukraine? Doing reconnaissance, or dropping grenades into tanks. The Russians have long since learned to jam their communications.
Surprising to see the COTS toys still being used so much. A lot of this shit is not even trying to use jamming-resistant waveforms, and LOS obstruction seems to be a major problem even without EW.
The obvious solution is to make them autonomous.
Or just fly long spools of fiber.
Remote control just doesn't work any more.
This is not even close to being the case.
Re: (Score:2)
Why bother posting as AC? Nobody can see you but me. Of course I am over-simplifying, but fibre has a limited range, and makes the drone an easy target for the opponent's hunter-killer drones. More useful in tunnels.
Re: (Score:2)
Those FPV drones that were so successful for a little while in Ukraine? Doing reconnaissance, or dropping grenades into tanks. The Russians have long since learned to jam their communications.
I don't think you are correct there. The front line is basically an aerial minefield for both sides, though Ukraine still has the edge in that area.
I have become death (Score:2)
The number of people unwittingly contributing to the production and use of weapons of war through seemingly benign software and hardware efforts must be enormous. I can almost sympathize with someone upset over finding out their USB storage driver is in the kill chain of some commercial or improvised weapons system.
AI, on the other hand, is one of those domains where the military dimensions are well known and blatantly obvious to all, especially given AI's traditional role/strength in "pattern recognition". T
Concerns? WTH? It was _clear_ this would happen! (Score:2)
Obviously, being a very scammy, immoral enterprise, OpenAI would sooner or later do this.
We don't need SkyNET.. (Score:1)
Re: (Score:2)
Israel has already done this and should get credit for being the 1st. You heard about it when they bombed volunteer cooks from World Central Kitchen.
They reportedly attached their AI system for finding enemies directly to their missile targeting system, and since the AI didn't read the words on the tops of the vehicles and didn't know those letters meant non-profit COOKS, and no human reviewed the recommendations anymore... they don't want to publicly admit what actually happened.
We'd rather believe it was idiot hu
It's a dangerous world out there (Score:2)
Throwing in with the starry-eyed peaceniks only works until the guy with the bigger stick notices you and decides he wants to take your shit more than you want to stop him.
Re: It's a dangerous world out there (Score:2)
Si vis pacem, para bellum ("If you want peace, prepare for war").
Moronium (Score:2)
The supply of moronium, it seems, is quite inexhaustible. What these folks need to understand is that people make wars. People have ALWAYS made wars, and people WILL always make wars. People will never evolve to not make wars, because if you look at animals, they ALL fight. It's a consequence of (a) resources being finite, and (b) sex.
All these morons need to understand that if others make war, we need to defend. Despite many failings the US has used its military might to maintain a pretty nice world t