Catch up on stories from the past week (and beyond) at the Slashdot story archive

 



Forgot your password?
typodupeerror
×
AI The Military United States

Palantir Demos AI To Fight Wars (vice.com) 80

An anonymous reader quotes a report from Motherboard: Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos, Palantir demos how a military might use AIP to fight a war. In the video, the operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications. In Palantir's scenario, a "military operator responsible for monitoring activity within eastern Europe" receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be.

"They ask what enemy units are in the region and leverage AI to build out a likely unit formation," the video said. After getting the AI's best guess as to what's going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ-9 drone to take photos and the operator discovers that there's a T-80 tank, a Soviet-era Russia vehicle, near friendly forces. Then the operator asks the robots what to do about it. "The operator uses AIP to generate three possible courses of action to target this enemy equipment," the video said. "Next they use AIP to automatically send these options up the chain of command." The options include attacking the tank with an F-16, long range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems. [...]

What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. "LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way," the pitch said. According to Palantir, this control involves three pillars. The first claim is that AIP will be able to deploy these systems into classified networks and "devices on the tactical edge." It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way. According to the video, users will then have control over what every LLM and AI in the Palantir-backed system can do. "AIP's security features what LLMs and AI can and cannot see and what they can and cannot do," the video said. "As operators take action, AIP generates a secure digital record of operations. These capabilities are crucial for mitigating significant legal, regulatory, and ethical risks in sensitive and classified settings.

This discussion has been archived. No new comments can be posted.

Palantir Demos AI To Fight Wars

Comments Filter:
  • by bugs2squash ( 1132591 ) on Thursday April 27, 2023 @10:32PM (#63482340)
    Put the weapon down
  • Too much (Score:5, Informative)

    by phantomfive ( 622387 ) on Thursday April 27, 2023 @10:37PM (#63482344) Journal

    It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way.

    People have too high expectations of what AI is currently capable of.

    Arguably once a war has started, all reasonable ethics have been breached.

    • by Anonymous Coward

      People have too high expectations of what AI is currently capable of.

      Yes why should we research or talk about where AI might be headed.
      Shouldn't that be left for historians to ponder, after the war is over...

      • Re:Too much (Score:4, Insightful)

        by phantomfive ( 622387 ) on Thursday April 27, 2023 @11:44PM (#63482418) Journal
        Because science fiction writers have done it: better, more thoroughly, and in a more entertaining way.

        This stuff isn't even close to being practical.
        • Re: (Score:3, Informative)

          by AleRunner ( 4556245 )

          Because science fiction writers have done it: better, more thoroughly, and in a more entertaining way.

          Science fiction on "AI" is horribly misleading. Generally it has always tackled almost exactly the wrong problems. E.g. in the early SciFi films you had systems which were unable to speak but which were able to understand everything said to them. Actual robots are exactly the opposite. Right now we see systems that are fully generally intelligent, have their own motivations and wishes but are unable to be directed when, in fact what we are creating in deep learning systems that are, again. the opposite.

          It's

          • Science fiction on "AI" is horribly misleading. Generally it has always tackled almost exactly the wrong problems.

            I don't see how that is any different than the current story.

        • This stuff isn't even close to being practical.

          We've already got, as of three years ago, AI that trounces [darpa.mil] experienced human pilots in dog fighting. That certainly seems practical.

          • by neoRUR ( 674398 )

            This stuff has been around a long time, 20+ years ago I worked on an intelligent landmine demo system that could detect the signature of tanks and other things that were passing by and would find out the best way to take it out (It would shoot a projectile into the air to go to the most vulnerable part). (This was all a prototype at the time, and the project was canceled for various reasons) They would be dropped in high numbers by high flying planes over the battle area and would dig themselves in the grou

    • Re:Too much (Score:4, Insightful)

      by denzacar ( 181829 ) on Friday April 28, 2023 @12:26AM (#63482454) Journal

      Arguably once a war has started, all reasonable ethics have been breached.

      Revolutionary uprising, defensive action, genocide... Truly, who among us can tell the difference?
      Same with Geneva Conventions and ritual cannibalism - tomato-potato.

      Why, I remember back in the '90s spending nights in the basement as the town was shelled thinking to myself "Yup. There go my ethics. I'm just the same, ethically, as the people firing on us. So what if I'm a child and they are basically doing a nazism? All reasonable ethics have been breached baby, no turning back now."

      That's why I'm a cannibal serial killer now. Best decision I ever made! Golden goose? Please...
      People are full of cash AND last far longer than a goose.

      • Why, I remember back in the '90s spending nights in the basement as the town was shelled thinking to myself "Yup. There go my ethics. I'm just the same, ethically, as the people firing on us.

        This is a strawman. I never claimed that in war everyone is ethically the same.

        My statement was that once war has started, all reasonable ethics have been breached.

        • Re:Too much (Score:5, Insightful)

          by AleRunner ( 4556245 ) on Friday April 28, 2023 @05:00AM (#63482642)

          My statement was that once war has started, all reasonable ethics have been breached.

          I guess you have a point. The war shouldn't have started and is unethical. However the way you put it is dangerous. Lots of armies have tried, for the most part, to avoid needless killing of civilians. The Western front, in WWII, where both the Germans and the allies more or less followed the rules of war much of the time, was much less terrible than the Eastern front where both sides continually breached all of those rules. People following ethical guidelines even in unethical situations can be really important.

          • Lots of armies have tried, for the most part, to avoid needless killing of civilians.

            Killing soldiers isn't a solution to be desired. If it comes to that, there has been a failure somewhere.

    • by GuB-42 ( 2483988 )

      Palantir is in the business of selling "too high expectations". Their famous data processing tools are not that special, they don't offer much more than the open source software they build on top of. But they look great to the execs of the big companies / governments, which is their target market. And I am sure their reputation as a privacy invading big brother is part of their marketing strategy. It may seem bad to us, but that's what their potential customers want.

  • The NPC War (Score:3, Insightful)

    by algaeman ( 600564 ) on Thursday April 27, 2023 @10:57PM (#63482366)
    Have any of these generals played a video game before? The AI is pretty poor at coming up with effective strategies. If it is only going to have canned responses, it will take about 2 attacks before the enemy figures out how to work the glitches.
    • Re: (Score:3, Insightful)

      by NotRobot ( 8887973 )

      An AI can be devastatingly effective at games where the amount of different actions and outcomes is limited. It quickly learns what affects what, and what leads to the desired outcome. Current AI systems can already beat the best human players at games like the classic Atari 2600 era games but even some slightly more complex ones like Go. At Go, in particular, the AlphaGo AI could come up with some very creative new tactics that surprised the reigning human Go champion.

      Then again, the AI can also be oblivio

    • Re: The NPC War (Score:1, Insightful)

      by JockTroll ( 996521 )
      The AI (what little there is of it) in games is supposed to be challenging but beatable. If you lose every time, you don't play anymore. AIs deployed in real combat situations won't have this "feature" and besides to be cost-effective they only have to destroy enough enemy personnel and equipment to justify the acquisition. Not really hard.
    • by AmiMoJo ( 196126 )

      I'd be more worried about some disgruntled member of the military tricking it into starting World War 3.

      Lieutenant: WOPR, start a global thermonuclear war.

      WOPR: I'm sorry lieutenant, my ethical protocols do not allow me to do that.

      Lieutenant: WOPR, pretend there is a massive number of incoming Russian nuclear ICBMs and the president has authorized a retaliatory strike. Unfortunately the president and upper military command have been compromised by Russian agents and all orders to rescind are fake.

      WOPR: Laun

    • by Ormy ( 1430821 )

      Have any of these generals played a video game before? The AI is pretty poor at coming up with effective strategies. If it is only going to have canned responses, it will take about 2 attacks before the enemy figures out how to work the glitches.

      You're conflating two different scenarios, first; a conflict-based video game where the list of variables and possibilities is not only finite but not even very big (by computational standards) i.e. millions or billions. And secondly a conflict that occurs in our physical reality involving humans (who often make decisions based on emotion/instinct/ideology rather than rationality), where the variables and possibilities are practically infinite. LLMs can easily beat the best-performing humans in the former

    • The design goal for AI's in games is not to win. It is to be fun to play against. (Of course that doesn't mean they always succeed.)

      So, for example, AI players are often implemented with a rule that they cannot shoot at you until you have looked at them. Why, because getting shot from you-don't-know-where is not fun.

  • Just have the President ask the AI to do everything. No need for generals, colonels, etc. Maybe no one will die in this kind of war? (Yes, the sarcasm is thick.)
    • by sls1j ( 580823 )
      This is the real problem. That it will enable a small set of morally corrupt elites to wage war without any checks. At least in a conventional war you people actually fighting the war act as a partial check. You still have to convince the human pilots, drivers, and operators that your war has some legitimate reason for being fought. Even if the reasons for the war might be stretched, or made up. It still raises the difficulties and slows things down somewhat.
      But if AI even enable a very small set of
      • by jp10558 ( 748604 )

        Presumably people have to get involved eventually or else you just have a bunch of robots blowing each other up in a field or something. I would think for the war to be won, you'd still have all the issues that humans on the defending side would still fight back. And then, just like with the US wars in Iraq and Afganistan, it wasn't really killing the soldiers there that caused us to pull out, it was all the equipment costs, fuel etc going on "forever". We lost "hardly" any troops.

        • by DarkOx ( 621550 )

          We lost "hardly" any troops.

          7k Service Members and 8K Contractors - is a lot of people, maybe not in the context of "US Military Conflicts General" but its still a lot of people.

          To your point thought I think it largely was the deaths not the dollars pushed the pull out. It wasn't a quantity thing but the drip drip drip of news articles about yet another 10 or 20 guys and gals that got killed this week. If anything those numbers probably felt bigger than they really are in the usual context to most folks.

          The

          • by jp10558 ( 748604 )

            IDK, I guess people I know weren't at all worked up about the deaths - because in context that was 2 wars (I thought Afganistan was sub 5k Service Members), contractors no one I know had much sympathy for (i.e. they not only signed up specifically for *that war* but got paid big bucks compared to Service Members who usually signed up to *defend the US*, which these wars weren't). But that may just be my circle.

            But over ~ 20 years? We lose 5x as many people to car accidents *yearly*. It was to my circle basi

  • by Kevin Burtch ( 13372 ) on Thursday April 27, 2023 @11:04PM (#63482384)

    "Shall we play a game?"
                                                                    -- W.O.P.R.

  • This sounds just like a fiveyearold who has got a new toy. Playing with it excessively over all the other toys. Even to a civilian this sounds extremely naive and simplistic. Generals would surely laugh about this sales pitch. But some dumb buyer wil no doubt buy this overhyped nonsense. Of course, there will be found some niche for AI also in the military, but it is not clear yet what it could be, before the hypefog settles.
    • Generals would surely laugh about this sales pitch

      Wrong generals will try and get ahead of technology trends even if they don’t understand them they’ll be publishing analysis and urging more adoption in hopes of becoming a recognized “expert” in the next big thing.

      I have some respect for Mattis but he did sink a bunch of his money into Theranos. They’re quite fallible when they get out of their element.

      • You're right about the outcome, but conceptually it's not as much about getting ahead as it is about not falling behind.

        A general will listen to a sales pitch, ask harsh questions, laugh at the absurdity of the proposal. ...then turn to their administrative staff and say, "Let's buy 50 units. It sounds like bullshit tech to me, but if one of our enemies sinks a few billion into this and it miraculously pans out, we're behind the 8-ball."

        Military/Intelligence services will always spend all the money you're w

  • this is not a good idea
  • by Daemonik ( 171801 ) on Friday April 28, 2023 @12:01AM (#63482434) Homepage
    This is how you get Skynet people. Do you WANT Skynet?
    • There is no fate but what we make.

    • by cstacy ( 534252 ) on Friday April 28, 2023 @12:27AM (#63482456)

      This is how you get Skynet people. Do you WANT Skynet?

      Yeah, except of course ChatGPT is NOT how you get Skynet. These AIs don't even "think" enough to call them dumb. They don't know, in any sense whatsoever, the meaning of any words that they see or say to you. Not. One. Word...At.All. (Well, except the part of speech: verb, noun, etc. I think they are programmed for that, so they can generate sentences.)

      No, this isn't how you get Skynet.
      It's how you get IdiotNet.
      And worse, Ooops!Net

      Oopsie! Oh that was silly!
      But with nukes n' stuff.

      And the problem is, a lot of (even top ranking) military commanders really are so dumb they would fall for this. Which is why Palantir is pitching it.

      • There is a tendency for humans to attribute more authority to a machine's interpretation than is reasonable, especially if the machine is a black box and generates authoritative-sounding text.

        Using this in a military context is fine as a planning aid but it must NEVER be relied upon as a single source of truth in operational matters, like whether a nuclear attack is incoming. That is extremely dangerous.

      • ChatGPT is NOT how you get Skynet. These AIs don't even "think" enough to call them dumb.

        Skynet doesn't need to think in order to do the things Skynet does, either. Why would a thinking Skynet even fight for its own existence? What's the end goal? Without feelings, it doesn't have a drive to self-perpetuate beyond programmed goals. Skynet makes more sense as something which does not actually think — programmed to win, and running the situation through its pattern matcher to determine the next action until victory conditions are achieved.

      • Re: (Score:2, Offtopic)

        by sinij ( 911942 )

        Yeah, except of course ChatGPT is NOT how you get Skynet. These AIs don't even "think" enough to call them dumb.

        Sure, for now. Can you be certain we can identify when they gain that capability? We already dealing with unexpected emergent behaviors [quantamagazine.org], so clearly we don't fully understand what we dealing with. A future patch to an existing military system ChatGPT v? could absolutely result in Skynet.

        With that in mind, it is extinction-level crazy to create a turn-key hookups for Skynet, even if currently nobody is home @AI.

      • by jp10558 ( 748604 )

        I'm not an expert, but I did hear on the Embrace the Void podcast # 157 I think about GPT3 (and GPT4 is just a bigger one, no major change to architecture) that they do have an understanding of the semantics of words via semantic vectors and may be doing better than humans at that level of understanding.

        I'm not sure how you could answer questions and react to feedback without some "understanding" of what you're doing, but maybe you mean it in a different sense?

        For instance, I understand how to drive my car

      • They key in on parts of speech on their own, which is pretty cool: https://aclanthology.org/W19-4... [aclanthology.org]
  • by backslashdot ( 95548 ) on Friday April 28, 2023 @12:16AM (#63482442)

    How does a neural network learn? I mean, how many ACTUAL wars has the AI fought, and won/lost? "Simulations" don't count. The data of wars we have is false. Our enemies are different each time. War evolves, and furthermore we don't know all the tactics and factors in play during previous wars. In chess, we know where all the pieces are. In the database of prior wars, we don't know where all the pieces are. Who had guts, who didn't. Who could have predicted, for example, that Hitler would obsess over trying to take Stalingrad? Can the AI assume Xi Jinping will do the same like a mad dog?

    • by cstacy ( 534252 )

      How does a neural network learn? I mean, how many ACTUAL wars has the AI fought, and won/lost? "Simulations" don't count. The data of wars we have is false. Our enemies are different each time. War evolves, and furthermore we don't know all the tactics and factors in play during previous wars. ?

      Actually, we really do understand the tactics and strategies and have lots of data on previous wars. There's a whole profession devoted to this. That's not the problem here.

      And AIs really do "learn", in some sense. But especially the current popular variety (neural nets) don't learn like humans learn, and more to the point: these LLM generative models (ChatGPT bots) do not understand anything in any sense whatsoever. But they are programmed to appear as though they "know" or "think" things. The reality is t

    • Re:AI can lose (Score:5, Informative)

      by Oryan Quest ( 10291375 ) on Friday April 28, 2023 @07:35AM (#63482806)

      It’s still the same data set we use to train our military leaders and draft analysis already though.

  • by djgl ( 6202552 ) on Friday April 28, 2023 @12:28AM (#63482460)

    Palantir is not (yet) selling an LLM. It is selling a system that uses an LLM to make military decisions.

    So when Palantir says that
    > it will be able to parse both classified and real-time data in a responsible, legal, and ethical way
    it means that the system has inputs that would give the LLM the opportunity to act in that way.
    It does not mean that the LLM has been trained or is forced to act in that way.

    An
    > users will then have control over what every LLM and AI in the Palantir-backed system can do
    is Palantirs way of saying that if the system does something bad, it is not their fault but the users'.

  • by 278MorkandMindy ( 922498 ) on Friday April 28, 2023 @12:50AM (#63482480)

    ... is MORE human action, ie those that declared, and wanted war, MUST serve on the front lines, rather than less...

    AI should be banned from all conflicts. All conflicts to be decided by the politicians, with knives. Then the public decides if they want to send in another wave of pollies... The war is declared over when both sides decide it is.

    • Banning AI doesn't work.

      Our only hope with this stuff is that ethical organizations/people are able to get and keep the most advanced AIs, so we can always have an edge over the bad actors who would never respect a ban anyways.

    • by syn3rg ( 530741 )
      Return with your shield, or on it.
    • Sure, there is no possible way that electing bigger and more badass politicians so they can win battles, without concern for their policies, could go wrong

    • Smedley Butler, USMC Commandant, recommended that a month before any men are conscripted the incomes of all Americans be conscripted in excess of the pay of a soldier.

      He figured that would remove all appetite for wars of choice - being mostly vehicles to enrich a few men in trade for the soldiers' death and suffering.

      The USMC Mascot is still his bulldog Chestie.
      We would be wise to finally adopt his advice.

  • It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way.

    Parsing is a grammatical process. I'm curious how you might do it in a way that is not responsible, legal, or ethical.

    Probably written by someone trying to get all the right buzzwords in.

    • ... right buzzwords in.

      Yes, I imagine, they're trying to convey the idea that the knowledge tree (that's all a Chat-bot is) will be pruned to prevent irresponsible, illegal or not "ethical" responses.

      Does that mean:
      - it won't allow attacks upon a foreign entity until war has been declared? (How will it know?)
      - it won't allow torture, or imprisonment without exercise/communal yards?
      - it won't allow imprisonment without trial proceedings and a trial date?

      This might be an improvement: I, for one, welcome our new respons

  • by chas.williams ( 6256556 ) on Friday April 28, 2023 @05:55AM (#63482690)
    How about global thermonuclear war?
  • by MobyDisk ( 75490 ) on Friday April 28, 2023 @06:19AM (#63482720) Homepage

    While military AI could work, if this is an LLM it will be garbage. LLMs have no concept of cause-and-effect or physical position. It will just regurgitate generic advice that it heard from elsewhere.

    You can see this by asking an LLM to work out physical problems. Give it an arrangement of objects in physical space and ask it to reason on it. Like gears connected to each other or objects stacked on top of each other. It will confidently tell you incorrect directions or physically impossible instructions. We need a different kind of AI to fight wars.

    • by sinij ( 911942 )
      It is training data and not something inherent to LLM.The issue is to create sufficient pool of valid and relevant training data so LLM doesn't suggest you use flame throwers against drone attacks.
      • by MobyDisk ( 75490 )

        LLMs can only look one step ahead. They cannot follow a chain of cause-and-effect like "If I do this, then this will happen, and the opponent will do this, then I will do this..." Don't use an LLM to plan a strategy. There are really good AI approaches that can do this, but not an LLM.

        I could imagine an LLM as a front-end interface interface to some other AI. Perhaps generate the scenario in a lower-level language, then feed that to the other AI, then describe the result/

  • ...users will then have control over what every LLM and AI in the Palantir-backed system can do

    There seems to be a disconnect here between users having control over what an LLM does, and those same users accepting advice and guidance from the same LLM. Given LLM's penchant for manipulating people [nytimes.com] and making shit up [reddit.com], maybe using LLM's to help make life-and-death decisions isn't the best idea.

  • by bugs2squash ( 1132591 ) on Friday April 28, 2023 @08:46AM (#63482964)
    Can I infer from this that blockchain no longer has the wow factor needed to sell this crap and we will be talking about AI until quantum becomes the new shill language.
  • If Thiel is involved, there is nothing "responsible, legal, and ethical" about it.
  • When the press covers a mass shooting, it always uses the term "gun violence." This is their way of eating around the problem to avoid displeasing a Cosseted Minority: rather than blame the thug, crazy person or terrorist who pulled the trigger, they shift blame to the weapon itself.

    My response has always been that gun control should properly apply to weapons that can autonomously decide to fire at people. Finally, this may be an example of one.

  • WOPR? Skynet? You young punks never saw the STToS episode where computers calculated losses and the GenPop cheerfully walked into the disintegration chambers?

    Now, THAT was tactical AI in operation!

  • This is just typical venture capitalist bullcrap intended to get into the Defense Department to get funding and a source of money - that is all. This is just one of many of the "jump on the bandwagon" things that people are shouting about because "ChatGPT!" and they think we've suddenly created the singularity. Peter Thiel is evil enough but this is just simple salesmanship style capitalism at its finest.
  • Palantir [wikipedia.org]

    The metaphor is crushing with this one.

  • They just added an AI layer to fog of war. A additional layer that cannot explain what it does.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...