
The Pentagon Says AI is Speeding Up Its 'Kill Chain'

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb. The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.
  • Just need to replace the men with the brass keys to speed things up when the kill order comes down.

  • OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans.

    Why not? Surely the kill chain is only as strong as its weakest link!

    https://www.youtube.com/watch?... [youtube.com]

  • and will happily abuse the military assets.

  • It won't be long before they put a drinking bird on the kill button and it wipes out some of your family and friends. They won't apologize, because no one will specifically be at fault.
    • You have the providers of the tech saying "Our AI doesn't kill people", and the guys pulling the trigger saying "The AI told us to".

      Not only does it seem tailor-made for this purpose, it's already been deployed in Palestine. Tellingly, the IDF chose to name its AI terrorist-designation software "the Gospel". Can't argue with Yahweh.

      What these systems do is automate the production of "faulty intelligence" that gets used to justify the unjustifiable. In other words, AI proves useful in generating industrial quantities of faulty intelligence.

  • Great, now to complete the cycle, all we need are national battle computers that work out the progress of the war (we've always been at war with Eurasia) and calculate the casualties who should report to the absorption chambers. So much cleaner than what we've been doing up till now.

  • There are even articles going way back (Popular Mechanics?) reporting that the US gov't has been experimenting with AI -- that topic disappeared quickly, and now here we are with OpenAI. It could be that OpenAI has an advantage.

    But it's the narrative that I find irritating -- this narrative that AI is going to kill everyone, which is really a dumb assertion meant to influence public opinion. My hammer here can build a house, or crush your skull -- but the fact that I have the hammer, does that make me a killer?

  • Instead of identifying and eliminating targets, why not use the AI to help the target solve the problems in their life which made them a threat in the first place?

    While it is true that some people are genuinely evil, for most people, doing evil is a matter of their inability to do good, rather than a genuine preference for making themselves and everyone else miserable. If someone has become disgruntled with their lot in life to the extent that they're willing to threaten others, wouldn't it be far better to help them solve those problems instead?

    • Certainly AI is being used to help people find jobs. It's just framed as the opposite -- AI being used to pass people over for jobs. Neither is really wrong; it's a glass-half-full thing.
    • Very soon robots will be doing all the work and you'll get your wish, because if robots do all the work, surely TPB must help people and start up UBI. Never underestimate the power of greed. Read 'Manna: Two Views of Humanity's Future' if you have not already; it is not sci-fi, it is a warning: https://marshallbrain.com/mann... [marshallbrain.com]
    • Instead of identifying and eliminating targets, why not use the AI to help the target solve the problems in their life which made them a threat in the first place?

      The honest truth is that we cannot eliminate all the major risk factors involved in our present understanding of what makes someone likely to become a terrorist.

      Major risk factors include:
      * high intelligence (regardless of education)
      * lack of fulfilling love life
      * lack of fulfilling employment
      * dissatisfaction with own government

      The more of these things that are true about them, the more likely they are to become a terrorist. So basically, you can fix most of the world's problems and still not prevent people from becoming terrorists.

  • People are worried about ChatGPT (an awesome autocomplete) and want to add all sorts of laws to stop it.
    A huge distraction from the real problem: the militarized robotics that are making killing more and more automated and deadly.

  • I hope they made the AI play tic-tac-toe against itself before putting it into production. And also use at least 2FA for missile launch orders.
  • i.e., the RoboCop screwball robot
  • The problem is that you need real wars in order to test and refine these weapons, the same way you ultimately can only test driverless cars on real streets. So you need an Iraq, Afghanistan, Gaza, Sudan, Serbia, Libya, Syria or... Ukraine, and ultimately a Taiwan. The venture capitalists can't do it alone.
