Anthropic said no to two things: mass surveillance of Americans and fully autonomous weaponry.
https://x.com/shanaka86/status/2025934824607981805
The Pentagon is about to give an American AI company the Huawei treatment.
Not because it’s Chinese. Not because it’s a spy risk.
Because it refuses to let the military use its AI for mass surveillance of Americans and fully autonomous weapons.
This morning, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon. A senior Defense official told Axios: “This is not a friendly meeting. This is a sh*t-or-get-off-the-pot meeting.”
Here’s what’s actually happening:
Claude is the only AI model running inside the Pentagon’s classified systems. The most capable model for sensitive defense and intelligence work. It was used in the Maduro raid in January through Palantir, the first confirmed use of a commercial AI in a classified military operation.
Now the Pentagon wants all restrictions removed: "all lawful purposes." That includes capabilities that would let the military use AI to continuously monitor, at scale, the social media posts, voter registrations, concealed carry permits, and demonstration records of every American citizen.
Anthropic said no to two things: mass surveillance of Americans and fully autonomous weaponry.
The Pentagon’s response: threatening to designate Anthropic a “supply chain risk.”
That designation is reserved for foreign adversaries. The last company to receive it was Huawei. It would force every defense contractor in America to certify they don’t use Claude in their workflows. Given that 8 of the Fortune 10 use Claude, this would cascade through the entire defense industrial base.
A senior Pentagon official told Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
Another official: “The problem with Dario is, with him, it’s ideological. We know who we’re dealing with.”
Meanwhile: OpenAI, Google, and xAI have already agreed to remove their safeguards for military use. OpenAI deployed ChatGPT to all 3 million DoD personnel through GenAI.mil. xAI holds a separate $200M contract backed by Musk’s political proximity to the administration.
Anthropic is the only one that said no.
Think about what’s being asked. The company whose own safety chief resigned two weeks ago warning “the world is in peril.” The company that just published a report showing its most advanced model “knowingly assisted with chemical weapons research” in testing. That company is being punished for refusing to hand the U.S. military unrestricted access to that same technology.
The Pentagon admits competing models “are just behind” for classified work. They need Claude. But they’re willing to blow up the relationship rather than accept two restrictions that protect American citizens from their own government.
This is the most important story in AI right now and almost nobody is framing it correctly.
It’s not about one $200M contract. It’s about whether the U.S. military can compel a private company to remove safety restrictions on technology its own developers have demonstrated is dangerous, under threat of the same designation reserved for Chinese national security risks.
Dario Amodei walks into that meeting this morning with $380 billion in enterprise value, $14 billion in revenue, and a principle that may cost him both.
Full institutional analysis on my Substack.
open.substack.com/pub/shanakaans
