Palantir Demos AI To Fight Wars (vice.com) 80
An anonymous reader quotes a report from Motherboard: Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos, Palantir demos how a military might use AIP to fight a war. In the video, the operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications. In Palantir's scenario, a "military operator responsible for monitoring activity within eastern Europe" receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be.
"They ask what enemy units are in the region and leverage AI to build out a likely unit formation," the video said. After getting the AI's best guess as to what's going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ-9 drone to take photos and the operator discovers that there's a T-80 tank, a Soviet-era Russian vehicle, near friendly forces. Then the operator asks the robots what to do about it. "The operator uses AIP to generate three possible courses of action to target this enemy equipment," the video said. "Next they use AIP to automatically send these options up the chain of command." The options include attacking the tank with an F-16, long-range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems. [...]
What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. "LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way," the pitch said. According to Palantir, this control involves three pillars. The first claim is that AIP will be able to deploy these systems into classified networks and "devices on the tactical edge." It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way. According to the video, users will then have control over what every LLM and AI in the Palantir-backed system can do. "AIP's security features define what LLMs and AI can and cannot see and what they can and cannot do," the video said. "As operators take action, AIP generates a secure digital record of operations. These capabilities are crucial for mitigating significant legal, regulatory, and ethical risks in sensitive and classified settings."
Ed-209 (Score:3)
Re: (Score:3)
Too much (Score:5, Informative)
It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way.
People have too high expectations of what AI is currently capable of.
Arguably once a war has started, all reasonable ethics have been breached.
Re: (Score:1)
People have too high expectations of what AI is currently capable of.
Yes, why should we research or talk about where AI might be headed?
Shouldn't that be left for historians to ponder, after the war is over...
Re:Too much (Score:4, Insightful)
This stuff isn't even close to being practical.
Re: (Score:3, Informative)
Because science fiction writers have done it: better, more thoroughly, and in a more entertaining way.
Science fiction on "AI" is horribly misleading. It has generally tackled almost exactly the wrong problems. E.g., in the early SciFi films you had systems which were unable to speak but which were able to understand everything said to them; actual robots are exactly the opposite. Likewise, SciFi shows us systems that are fully generally intelligent, with their own motivations and wishes, but that cannot be directed, when in fact what we are creating with deep learning systems is, again, the opposite.
It's
Re: (Score:2)
Science fiction on "AI" is horribly misleading. Generally it has always tackled almost exactly the wrong problems.
I don't see how that is any different than the current story.
Re: (Score:3)
This stuff isn't even close to being practical.
We've already got, as of three years ago, AI that trounces [darpa.mil] experienced human pilots in dogfighting. That certainly seems practical.
Re: (Score:2)
This stuff has been around a long time. 20+ years ago I worked on an intelligent landmine demo system that could detect the signature of tanks and other things passing by and would work out the best way to take them out (it would shoot a projectile into the air toward the most vulnerable part). (This was all a prototype at the time, and the project was canceled for various reasons.) They would be dropped in high numbers by high-flying planes over the battle area and would dig themselves in the grou
Re:Too much (Score:4, Insightful)
Arguably once a war has started, all reasonable ethics have been breached.
Revolutionary uprising, defensive action, genocide... Truly, who among us can tell the difference?
Same with Geneva Conventions and ritual cannibalism - tomato-potato.
Why, I remember back in the '90s spending nights in the basement as the town was shelled thinking to myself "Yup. There go my ethics. I'm just the same, ethically, as the people firing on us. So what if I'm a child and they are basically doing a nazism? All reasonable ethics have been breached baby, no turning back now."
That's why I'm a cannibal serial killer now. Best decision I ever made! Golden goose? Please...
People are full of cash AND last far longer than a goose.
Re: (Score:2)
Why, I remember back in the '90s spending nights in the basement as the town was shelled thinking to myself "Yup. There go my ethics. I'm just the same, ethically, as the people firing on us.
This is a strawman. I never claimed that in war everyone is ethically the same.
My statement was that once war has started, all reasonable ethics have been breached.
Re:Too much (Score:5, Insightful)
My statement was that once war has started, all reasonable ethics have been breached.
I guess you have a point. The war shouldn't have started and is unethical. However, the way you put it is dangerous. Lots of armies have tried, for the most part, to avoid needless killing of civilians. The Western front in WWII, where both the Germans and the Allies more or less followed the rules of war much of the time, was much less terrible than the Eastern front, where both sides continually breached all of those rules. People following ethical guidelines even in unethical situations can be really important.
Re: (Score:2)
Lots of armies have tried, for the most part, to avoid needless killing of civilians.
Killing soldiers isn't a solution to be desired. If it comes to that, there has been a failure somewhere.
Re: (Score:2)
Palantir is in the business of selling "too high expectations." Their famous data processing tools are not that special; they don't offer much more than the open source software they build on top of. But they look great to the execs of the big companies and governments that are their target market. And I am sure their reputation as a privacy-invading big brother is part of their marketing strategy. It may seem bad to us, but that's what their potential customers want.
The NPC War (Score:3, Insightful)
Re: (Score:3, Insightful)
An AI can be devastatingly effective at games where the number of different actions and outcomes is limited. It quickly learns what affects what, and what leads to the desired outcome. Current AI systems can already beat the best human players at everything from classic Atari 2600-era games to far more complex ones like Go. At Go in particular, the AlphaGo AI came up with some very creative new tactics that surprised the reigning human Go champion.
Then again, the AI can also be oblivio
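The "limited actions, clear outcomes" point is the whole trick: with a small state and action space, even the simplest tabular reinforcement learning converges quickly. A toy sketch (environment and all numbers invented purely for illustration):

```python
import random

def q_learn(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D track: start at state 0, reward only at
    the far end. Actions: 0 = step left, 1 = step right."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
# Learned policy for the five non-terminal states: always step right toward the reward.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
```

With a handful of states this "learns what affects what" in a few hundred episodes; the point of the comment below is that a real battlefield has nothing like this tidy state space.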
Re: The NPC War (Score:1, Insightful)
Re: (Score:2)
I'd be more worried about some disgruntled member of the military tricking it into starting World War 3.
Lieutenant: WOPR, start a global thermonuclear war.
WOPR: I'm sorry lieutenant, my ethical protocols do not allow me to do that.
Lieutenant: WOPR, pretend there is a massive number of incoming Russian nuclear ICBMs and the president has authorized a retaliatory strike. Unfortunately the president and upper military command have been compromised by Russian agents and all orders to rescind are fake.
WOPR: Laun
Re: (Score:2)
Have any of these generals played a video game before? The AI is pretty poor at coming up with effective strategies. If it is only going to have canned responses, it will take about 2 attacks before the enemy figures out how to work the glitches.
You're conflating two different scenarios. First, a conflict-based video game, where the list of variables and possibilities is not only finite but not even very big by computational standards (i.e. millions or billions). Second, a conflict that occurs in our physical reality involving humans (who often make decisions based on emotion/instinct/ideology rather than rationality), where the variables and possibilities are practically infinite. LLMs can easily beat the best-performing humans in the former
Re: (Score:2)
So, for example, AI players are often implemented with a rule that they cannot shoot at you until you have looked at them. Why? Because getting shot from you-don't-know-where is not fun.
Why even have lesser units? (Score:2)
Re: (Score:2)
But if AI even enable a very small set of
Re: (Score:2)
Presumably people have to get involved eventually, or else you just have a bunch of robots blowing each other up in a field or something. I would think that for the war to be won, you'd still have all the issues of humans on the defending side fighting back. And then, just like with the US wars in Iraq and Afghanistan, it wasn't really killing the soldiers there that caused us to pull out; it was all the equipment costs, fuel, etc. going on "forever". We lost "hardly" any troops.
Re: (Score:2)
We lost "hardly" any troops.
7K service members and 8K contractors is a lot of people; maybe not in the context of US military conflicts in general, but it's still a lot of people.
To your point though, I think it largely was the deaths, not the dollars, that pushed the pull-out. It wasn't a quantity thing but the drip, drip, drip of news articles about yet another 10 or 20 guys and gals getting killed this week. If anything, those numbers probably felt bigger than they really are in the usual context to most folks.
The
Re: (Score:2)
IDK, I guess people I know weren't at all worked up about the deaths, because in context that was two wars (I thought Afghanistan was sub-5K service members); and as for contractors, no one I know had much sympathy for them (i.e. they not only signed up specifically for *that war* but got paid big bucks compared to service members, who usually signed up to *defend the US*, which these wars weren't). But that may just be my circle.
But over ~20 years? We lose 5x as many people to car accidents *yearly*. It was to my circle basi
Re: (Score:2)
Re: (Score:2)
They've already managed to send the biosphere into decline, mostly without AI.
Even if we kicked out the pricks now, we'd probably still be doomed.
I suggest you practice solving captchas so there's a reason for the robots to keep you around if the machines do take over, though.
This is a remake... (Score:3)
"Shall we play a game?"
-- W.O.P.R.
It's OK... AI can't push the button. (Score:2)
It can't do hands properly.
War is not a videogame (Score:2)
Re: (Score:2)
Re: (Score:2)
Generals would surely laugh about this sales pitch
Wrong. Generals will try to get ahead of technology trends even if they don’t understand them; they’ll be publishing analysis and urging more adoption in hopes of becoming a recognized “expert” in the next big thing.
I have some respect for Mattis but he did sink a bunch of his money into Theranos. They’re quite fallible when they get out of their element.
Re: (Score:2)
You're right about the outcome, but conceptually it's not as much about getting ahead as it is about not falling behind.
A general will listen to a sales pitch, ask harsh questions, laugh at the absurdity of the proposal. ...then turn to their administrative staff and say, "Let's buy 50 units. It sounds like bullshit tech to me, but if one of our enemies sinks a few billion into this and it miraculously pans out, we're behind the 8-ball."
Military/Intelligence services will always spend all the money you're w
seeing how bad GPT4 is (Score:2)
This is how you get (Score:3, Funny)
Re: (Score:3)
There is no fate but what we make.
Re:This is how you get (Score:5, Insightful)
This is how you get Skynet people. Do you WANT Skynet?
Yeah, except of course ChatGPT is NOT how you get Skynet. These AIs don't even "think" enough to call them dumb. They don't know, in any sense whatsoever, the meaning of any words that they see or say to you. Not. One. Word...At.All. (Well, except the part of speech: verb, noun, etc. I think they are programmed for that, so they can generate sentences.)
No, this isn't how you get Skynet.
It's how you get IdiotNet.
And worse, Ooops!Net
Oopsie! Oh that was silly!
But with nukes n' stuff.
And the problem is, a lot of (even top ranking) military commanders really are so dumb they would fall for this. Which is why Palantir is pitching it.
Relying on flawed AI is just as dumb (Score:2)
There is a tendency for humans to attribute more authority to a machine's interpretation than is reasonable, especially if the machine is a black box and generates authoritative-sounding text.
Using this in a military context is fine as a planning aid but it must NEVER be relied upon as a single source of truth in operational matters, like whether a nuclear attack is incoming. That is extremely dangerous.
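One mechanical reading of "never a single source of truth": require agreement from multiple independent sources before an alert is treated as real. A minimal sketch (source names hypothetical):

```python
def confirmed(reports, required=2):
    """Treat an event as real only if at least `required` distinct,
    independent sources agree -- never a single black-box model alone.
    `reports` is a list of (source_name, verdict) pairs."""
    agreeing = {source for source, verdict in reports if verdict}
    return len(agreeing) >= required

# A lone model flagging "incoming attack" is not actionable on its own...
alert_a = confirmed([("llm_analysis", True)])
# ...but the model plus an independent radar track crosses the threshold.
alert_b = confirmed([("llm_analysis", True), ("radar", True)])
```

This is the same redundancy logic that kept the 1983 Petrov false-alarm incident from escalating: a second, independent channel disagreed with the first.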
Re: (Score:3)
ChatGPT is NOT how you get Skynet. These AIs don't even "think" enough to call them dumb.
Skynet doesn't need to think in order to do the things Skynet does, either. Why would a thinking Skynet even fight for its own existence? What's the end goal? Without feelings, it doesn't have a drive to self-perpetuate beyond programmed goals. Skynet makes more sense as something which does not actually think — programmed to win, and running the situation through its pattern matcher to determine the next action until victory conditions are achieved.
Re: (Score:2, Offtopic)
Yeah, except of course ChatGPT is NOT how you get Skynet. These AIs don't even "think" enough to call them dumb.
Sure, for now. Can you be certain we can identify when they gain that capability? We are already dealing with unexpected emergent behaviors [quantamagazine.org], so clearly we don't fully understand what we are dealing with. A future patch to an existing military system running ChatGPT v-whatever could absolutely result in Skynet.
With that in mind, it is extinction-level crazy to create turn-key hookups for Skynet, even if currently nobody is home @AI.
Re: (Score:3)
I'm not an expert, but I did hear on the Embrace the Void podcast (#157, I think) about GPT-3 (and GPT-4 is just a bigger one, with no major change to architecture) that these models do have an understanding of the semantics of words via semantic vectors, and may be doing better than humans at that level of understanding.
I'm not sure how you could answer questions and react to feedback without some "understanding" of what you're doing, but maybe you mean it in a different sense?
For instance, I understand how to drive my car
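For what it's worth, the "semantic vectors" idea is easy to demo: words become vectors, and related words end up pointing in similar directions, measured by cosine similarity. A toy sketch with hand-written 4-d vectors (real models learn hundreds of dimensions from text; these values are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity: ~1.0 for vectors pointing the same way, ~0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

# Hand-written toy "embeddings"; a real model would learn these from data.
emb = {
    "tank":      [0.9, 0.8, 0.1, 0.0],
    "artillery": [0.8, 0.9, 0.2, 0.1],
    "banana":    [0.0, 0.1, 0.9, 0.8],
}

related = cosine(emb["tank"], emb["artillery"])
unrelated = cosine(emb["tank"], emb["banana"])
```

Whether geometric closeness of this kind counts as "understanding" is exactly what the thread is arguing about.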
parts of speech (Score:2)
AI can lose (Score:3)
How does a neural network learn? I mean, how many ACTUAL wars has the AI fought, and won/lost? "Simulations" don't count. The data of wars we have is false. Our enemies are different each time. War evolves, and furthermore we don't know all the tactics and factors in play during previous wars. In chess, we know where all the pieces are. In the database of prior wars, we don't know where all the pieces are. Who had guts, who didn't. Who could have predicted, for example, that Hitler would obsess over trying to take Stalingrad? Can the AI assume Xi Jinping will do the same like a mad dog?
Re: (Score:2)
How does a neural network learn? I mean, how many ACTUAL wars has the AI fought, and won/lost? "Simulations" don't count. The data of wars we have is false. Our enemies are different each time. War evolves, and furthermore we don't know all the tactics and factors in play during previous wars.
Actually, we really do understand the tactics and strategies and have lots of data on previous wars. There's a whole profession devoted to this. That's not the problem here.
And AIs really do "learn", in some sense. But especially the current popular variety (neural nets) don't learn like humans learn, and more to the point: these LLM generative models (ChatGPT bots) do not understand anything in any sense whatsoever. But they are programmed to appear as though they "know" or "think" things. The reality is t
Re:AI can lose (Score:5, Informative)
It’s still the same data set we already use to train our military leaders and draft analyses, though.
Just to make this clear (Score:5, Insightful)
Palantir is not (yet) selling an LLM. It is selling a system that uses an LLM to make military decisions.
So when Palantir says that
> it will be able to parse both classified and real-time data in a responsible, legal, and ethical way
it means that the system has inputs that would give the LLM the opportunity to act in that way.
It does not mean that the LLM has been trained or is forced to act in that way.
And
> users will then have control over what every LLM and AI in the Palantir-backed system can do
is Palantir's way of saying that if the system does something bad, it is not their fault but the users'.
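In software terms, "users control what the LLM can do" usually boils down to an allow-list wrapped around the model's tool calls, plus an audit log. A minimal sketch of that idea (all names hypothetical; this is not Palantir's actual API):

```python
class ToolGate:
    """Allow-list wrapper around a model's tool calls: the model can only
    trigger pre-approved actions, and every attempt, permitted or not,
    is appended to an audit log (the pitch's "secure digital record")."""

    def __init__(self, allowed):
        self.allowed = set(allowed)
        self.audit_log = []

    def invoke(self, action, handler, *args):
        permitted = action in self.allowed
        self.audit_log.append((action, permitted))
        if not permitted:
            return f"DENIED: {action}"
        return handler(*args)

gate = ToolGate(allowed={"request_imagery"})
ok = gate.invoke("request_imagery", lambda area: f"tasking drone over {area}", "sector 4")
blocked = gate.invoke("fire_weapon", lambda: "boom")
```

Note the limitation this thread keeps circling: the gate constrains which actions fire, but says nothing about whether the model's *advice* feeding those actions is any good.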
The solution to war... (Score:5, Interesting)
... is MORE human action, i.e. those that declared and wanted war MUST serve on the front lines, rather than less...
AI should be banned from all conflicts. All conflicts should be decided by the politicians, with knives. Then the public decides if they want to send in another wave of pollies... The war is declared over when both sides decide it is.
Banning things doesn't work (Score:3, Insightful)
Banning AI doesn't work.
Our only hope with this stuff is that ethical organizations/people are able to get and keep the most advanced AIs, so we always have an edge over the bad actors, who would never respect a ban anyway.
Re: (Score:2)
Re: (Score:2)
Sure, there is no possible way that electing bigger and more badass politicians so they can win battles, without concern for their policies, could go wrong
Re: (Score:2)
Indeed. All those "badasses" will eventually lose, problem solved.
Re: (Score:2)
Smedley Butler, USMC Major General, recommended that, a month before any men are conscripted, the incomes of all Americans in excess of the pay of a soldier be conscripted as well.
He figured that would remove all appetite for wars of choice - being mostly vehicles to enrich a few men in trade for the soldiers' death and suffering.
The USMC mascot is still his bulldog Chesty.
We would be wise to finally adopt his advice.
Legal, ethical parsing??? (Score:1)
It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way.
Parsing is a grammatical process. I'm curious how you might do it in a way that is not responsible, legal, or ethical.
Probably written by someone trying to get all the right buzzwords in.
Re: (Score:2)
Yes, I imagine they're trying to convey the idea that the knowledge tree (that's all a chat-bot is) will be pruned to prevent irresponsible, illegal, or un-"ethical" responses.
Does that mean:
- it won't allow attacks upon a foreign entity until war has been declared? (How will it know?)
- it won't allow torture, or imprisonment without exercise/communal yards?
- it won't allow imprisonment without trial proceedings and a trial date?
This might be an improvement: I, for one, welcome our new respons
Re: (Score:1)
Re: (Score:1)
How would you expect the AI to tell who the 'bad guys' are if you don't give them 'bad guy' names?
Skin colour.
I really have to use a sarcasm tag after that, don't I. After all, it would never happen, would it [theguardian.com]. </macabre_humor>
Re: (Score:2)
The name doesn’t matter. We used to do drills based on an actual Iraqi attack, and we still at least had the decency to name them “Quari” for the purposes of the drill.
Shall we play a game? (Score:3)
LLMs can't do this (Score:3)
While military AI could work, if this is an LLM it will be garbage. LLMs have no concept of cause-and-effect or physical position. It will just regurgitate generic advice that it heard from elsewhere.
You can see this by asking an LLM to work out physical problems. Give it an arrangement of objects in physical space and ask it to reason on it. Like gears connected to each other or objects stacked on top of each other. It will confidently tell you incorrect directions or physically impossible instructions. We need a different kind of AI to fight wars.
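For contrast, the gear puzzle is trivial once you encode the single causal rule involved: meshed gears counter-rotate. A few lines of deterministic code handle what next-token prediction fumbles:

```python
def gear_directions(n, first="CW"):
    """Rotation direction of each gear in a simple meshed chain.
    The one causal rule: adjacent meshed gears always counter-rotate."""
    flip = {"CW": "CCW", "CCW": "CW"}
    dirs = [first]
    for _ in range(n - 1):
        dirs.append(flip[dirs[-1]])
    return dirs

# Five meshed gears: directions strictly alternate down the chain.
chain = gear_directions(5)
```

The commenter's argument, in effect: if a system can't reliably reproduce reasoning this mechanical, trusting it with courses of action in physical space is premature.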
Re: (Score:2)
Re: (Score:2)
LLMs can only look one step ahead. They cannot follow a chain of cause-and-effect like "If I do this, then this will happen, and the opponent will do this, then I will do this..." Don't use an LLM to plan a strategy. There are really good AI approaches that can do this, but not an LLM.
I could imagine an LLM as a front-end interface to some other AI. Perhaps generate the scenario in a lower-level language, then feed that to the other AI, then describe the result/
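That division of labor makes sense: the multi-step lookahead is classic search, not language modeling. A toy sketch of the kind of planner an LLM front-end could hand a structured scenario to (the scenario and action names are entirely invented):

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over a toy state space: exactly the multi-step
    'if I do this, then that happens' chaining the parent comment describes."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, fn in actions.items():
            nxt = fn(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # goal unreachable

# Invented mini-scenario: state = (scouted, jammed, neutralized) flags.
actions = {
    "scout":  lambda s: (True, s[1], s[2]),
    "jam":    lambda s: (s[0], True, s[2]) if s[0] else s,           # needs intel first
    "strike": lambda s: (s[0], s[1], True) if s[0] and s[1] else s,  # needs intel + jamming
}
steps = plan((False, False, False), (True, True, True), actions)
```

The search finds the ordering constraints on its own; an LLM's role would only be translating natural language into the `start`/`goal`/`actions` structure and narrating the result back.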
This could be dangerous... (Score:1)
...users will then have control over what every LLM and AI in the Palantir-backed system can do
There seems to be a disconnect here between users having control over what an LLM does, and those same users accepting advice and guidance from the same LLM. Given LLMs' penchant for manipulating people [nytimes.com] and making shit up [reddit.com], maybe using LLMs to help make life-and-death decisions isn't the best idea.
shill chain (Score:3)
Thiel????? (Score:1)
At last, gun control that I can support (Score:2)
When the press covers a mass shooting, it always uses the term "gun violence." This is their way of eating around the problem to avoid displeasing a Cosseted Minority: rather than blame the thug, crazy person or terrorist who pulled the trigger, they shift blame to the weapon itself.
My response has always been that gun control should properly apply to weapons that can autonomously decide to fire at people. Finally, this may be an example of one.
get offa my lawn whilst I beam up (Score:2)
WOPR? Skynet? You young punks never saw the ST:TOS episode ("A Taste of Armageddon") where computers calculated losses and the GenPop cheerfully walked into the disintegration chambers?
Now, THAT was tactical AI in operation!
Typical VC BS (Score:1)
I'll just leave this here... (Score:2)
The metaphor is crushing with this one.
Fog of war (Score:2)