
The Pentagon Says AI is Speeding Up Its 'Kill Chain'

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb.

The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.


Comments Filter:
  • just need to replace the men with the brass keys to speed things up when the kill order comes down.

    • That is already done in practice. For example, drone pilots get their target from the AI and have about half a second to decide whether that is the right target. In theory, there is a human in the loop; in practice, the human can do nothing but accept the proposal made by the AI.
    • Everyone knows, and has always known, that this would happen, right? I mean, nobody actually thought that we wouldn't weaponize AI, right?

      We weaponize everything that we can. It's in our nature. We achieve peace through mutually-assured destruction. The capacity for violence is the ultimate determinant of authority, so anything that can increase that capacity is an attractive target.

      This isn't something that's ever going to change.

  • OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans.

    Why not? Surely the kill chain is only as strong as its weakest link!

    https://www.youtube.com/watch?... [youtube.com]

  • and will happily abuse the military assets.

  • It won't be long before they put a drinking bird on the kill button and it wipes out some of your family and friends. They won't apologize, because no one will specifically be at fault.
    • You have the providers of the tech saying "Our AI doesn't kill people", and the guys pulling the trigger saying "The AI told us to".

      Not only does it seem tailor-made for this purpose, it's already been deployed in Palestine. Tellingly, the IDF chose to name its AI terrorist-designation software "the Gospel". Can't argue with Yahweh.

      What these systems do is automate the production of "faulty intelligence" that gets used to justify the unjustifiable. In other words, AI proves useful in generating industrial qu

  • Great, now to complete the cycle all we need are national battle computers that work out the progress of the war (we've always been at war with Eurasia) and calculate casualties, who should then report to the absorption chambers. So much cleaner than what we've been doing up till now.

  • Instead of identifying and eliminating targets, why not use the AI to help the target solve the problems in their life which made them a threat in the first place?

    While it is true that some people are genuinely evil, for most people, doing evil is a matter of their inability to do good, rather than a genuine preference for making themselves and everyone else miserable. If someone has become disgruntled with their lot in life to the extent that they're willing to threaten others, wouldn't it be far bette
