Military Wants AI Decision-Makers To Replace Humans
DARPA seeks to model 'highly trusted humans'
by Jason Ditz
When lives are on the line and a fast decision has to be made, leaving the matter in human hands is, in the Pentagon's view, sub-optimal. In the future, such decisions could be left in the hands of artificial intelligence.
That’s a goal for the Pentagon in general and DARPA in particular, with the underlying assumption that the AI won’t have human biases and may be able to optimize decisions to save lives.
For the military, saving lives tends to be a secondary objective, at best, and artificial intelligence designed by and for the military is almost certain to focus on the priorities of top officials.
Naturally, the AI will echo the ideologies of the highest-ranking officials, and with the military long focused on combat success and efficiency, there is no reason to believe the AI would be designed around maximizing lives saved.
AI decision-making and the cost of human lives is such a clichéd topic that it has become an entire genre of dark science fiction. The ways things could go very wrong are plain to pretty much everyone, so why is the military still going down this road?
Honestly, they never stopped. Efficiency concerns pushed them into AI research during the Cold War, and things stalled when that research brushed up against the technological limits of the 1980s. Faster computers have given this research track a new lease on life, and a new shot at deciding lives.
The military isn’t totally oblivious to the complaints this research faces. Its narrative ignores the most obvious military decisions, like pulling triggers and ordering attacks, while spinning the AI as able to prioritize medical care during mass casualty events overseas.
Just saying that the AI is being used overseas deflects some concern by itself, since computers making these kinds of decisions in Afghanistan are less relatable to the public than they would be in small-town America.
The AI is good at counting and balancing numbers, and it’s not totally out of the realm of possibility that it could help maximize access to certain limited resources, like medical aid. If that’s all this was, it would be less of a concern.
When the Pentagon has a research project it’s really into, it never stops with one thing.
The plan is to try to model “highly trusted humans,” though how trustworthy those humans are remains to be seen, since the military is openly looking for those who “won’t have human biases.”
Jason Ditz is senior editor of Antiwar.com.
https://original.antiwar.com/jason/2022/04/06/military-wants-ai-decision-makers-to-replace-humans/