This topic has gotten too much attention to ignore.
I’ve got a lot of blog post ideas, but not enough time to turn them into posts (guess I should be using AI). While those ideas percolate, here’s the rundown on the Anthropic vs. Department of War situation since so many of you have asked me about it.
The story so far…
Who’s Anthropic?
Anthropic is an AI company that creates LLMs just like OpenAI, and their chatbot (equivalent to ChatGPT) is named Claude. Anthropic is smaller than OpenAI, and is known for their desire to build safe AI. In fact, Anthropic was founded by ex-OpenAI employees who felt that OpenAI wasn’t taking a safe enough approach to their AI.
Although most people associate AI with ChatGPT, Claude is the model most used for coding, and the model most used by businesses. ChatGPT has the lead among consumers. You may have seen one of Anthropic’s Super Bowl ads, which poked fun at OpenAI’s decision to use ads in generative AI, something Anthropic said they would never do. (Note: Sam Altman, CEO of OpenAI, once said that incorporating ads into ChatGPT would be a “last resort.”)
What’s the beef, exactly?
The issue is that Anthropic’s existing contract (signed in July 2025) imposed two limitations on the DoW’s use of their AI: the AI could not be used for fully autonomous lethal weapons or for mass domestic surveillance of citizens. The weapons restriction exists in part because the AI isn’t yet ready – it’s not good enough – for autonomous weapons (and Anthropic promised to work to improve this).
The DoW and Pete Hegseth demanded that Anthropic remove those restrictions.
What happened?
In February (after months of negotiations) this blew up publicly, with the DoW insisting that the two limitations be removed from the contract by February 27th at 5:01 pm.
On February 26th Dario Amodei, the CEO of Anthropic, issued a statement that they would not back down. Sam Altman, the CEO of OpenAI, publicly supported Anthropic’s stance.
On February 27th, Sam Altman signed a contract with the DoW to use OpenAI’s models. That’s right. Mere hours after saying they had the same redlines.
Trump and Hegseth blasted Anthropic, calling them “left-wing nutjobs” and labeling them a “supply chain risk.”

My take on why it matters, particularly for generative AI in the workplace
We don’t know the full story
Remember, we’re dealing with the U.S. military. They’re not telling us the full story. They can’t and they shouldn’t. What we’re seeing is likely only a small snippet of what all the issues are.
It never should have gone public
The negotiations and their outcome should have remained private. There was no need to broadcast the disagreements. Just cancel the contract and court the other vendors. This didn’t need to become political and it didn’t need to become a battle of egos.
What changed since last year?
It was barely eight months ago that the government signed the contract with Anthropic in which it accepted the two limitations. Why were the conditions acceptable then, but not now?
All we know is that in January Pete Hegseth said the DoW wanted AI with no constraints beyond the law, insisting they would only accept AI “without ideological constraints.” Somewhere along the way, they changed their mind.
The DoW doesn’t do citizen surveillance
At least, it’s not supposed to. To the extent that domestic surveillance happens, it should be happening at the FBI. So I’m not really sure why this clause would be a dealbreaker for the DoW – only as a matter of principle, the ideological constraint.
So it comes down to the autonomous weapons clause. And as I said before, Anthropic said no, we won’t budge (they also pointed out that Hegseth’s position was inherently contradictory).
Donald Trump’s response was childish
Like the boy who decides to take his ball and go home.

But behind the rant, he has a point
He is right that the Commander in Chief controls the military. And he’s right that the military needs to control their arsenal and how it is used. AI is part of that arsenal. The military can’t be effective if something they’re using could, at a critical moment, suddenly stop working because of some limit.
Pete Hegseth’s threat is ridiculous
To punish Anthropic for not bending the knee, Hegseth has labeled them a “supply chain risk” for the military – a designation typically reserved for our enemies – and is trying to get them banned from all companies that do business with the government. That’s unreasonable because it’s too harsh and too broad. It’s kind of like saying Boeing can’t hire an accountant because they’re a foreign national; after all, they might sneak into a restricted area and pour water into the fuel tank of an F-18.
If this designation holds, Anthropic promises to take the issue to court.
But there’s more going on
Remember how I said we don’t know the full story? Although I disagree with the vindictiveness of the response, I understand the concern. There are rumors that Anthropic was unhappy that their software was used in the raid in Venezuela that captured Maduro (which Anthropic denies). Maybe these two prohibitions aren’t the whole story. That could explain why the military wasn’t satisfied with Anthropic’s promise to improve the AI so that it could safely control autonomous weapons.
Anthropic’s stance is unprecedented
I could find no instance of a company restricting the buyer’s use of its product based solely on the contract between buyer and seller. It simply doesn’t happen. So it’s pretty haughty of Anthropic to take this position.
Anthropic’s stance is unfounded
Who do they think they are, to think they can limit use? Especially with the government, where national defense needs are paramount and can’t always be foreseen. The DoW needs complete control; they don’t have time to pause in the middle of an operation because someone says, “hey, wait a sec – is this a permitted use?”
“You can’t lead tactical ops by exception.”
– a Pentagon official
On the other hand…
Then there’s OpenAI. A company that originally claimed basically the same thing, with the goal of building AI that was “open” to everyone because it was too powerful, and therefore too dangerous, for any one entity to control.
Needless to say, with OpenAI that ship sailed a long time ago.
OpenAI’s prohibitions are a strawman
OpenAI published a statement explaining their position with the Department of War. They’ve since updated that statement, but the original claimed the same prohibitions as Anthropic’s. They aren’t the same. If they were, the DoW wouldn’t have signed the contract!
The difference is that Anthropic’s prohibitions were unconditional. OpenAI makes the “same” prohibitions but with one important addition: the DoW can use their AI for “all lawful purposes.”
Anthropic says “you can’t use it for autonomous weapons.”
OpenAI says “you can’t use it for autonomous weapons prohibited by law.”
Sam Altman can’t be trusted
Not only is Mr. Altman using word gymnastics to claim that they uphold the same prohibitions as Anthropic, but how do you publicly support Anthropic’s redlines, only to sneak in and steal their contract a few hours later?
Fallout
There was immediate fallout after the DoW decided to go nuclear.
For consumers, OpenAI falters
A grassroots campaign called QuitGPT was started back when OpenAI said they’d put ads in their chatbot. After these events, the number of people canceling subscriptions and uninstalling the app has risen dramatically. It will take a lot more to impact OpenAI financially, but it’s a clear loss for sentiment and brand.
Anthropic surges
Anthropic’s Claude is now #1 on the Apple App Store (up from somewhere around #180) – from virtually unknown to the talk of the town! Their consumer usage is still tiny compared to ChatGPT’s, but they are now a known entity, and their stance on safe AI is attractive. If the trend holds, it could become significant.

OpenAI gets the contract
Sam seized the opportunity and stole the government business. It’s not huge dollars but it could grow into something much bigger.
What’s next?
Anthropic lost the contract. But their primary source of revenue is not the government or consumers; it’s businesses. So while OpenAI is gasping for revenue (that’s why they have to introduce ads), Anthropic is in a much better financial position. Although they’re not projecting profitability until 2028 (vs. OpenAI’s 2030), they don’t need the government’s money to get there.
That said, they are (not surprisingly) still seeking a compromise.
Anthropic is also expected to IPO later this year. These events have massively increased awareness of their brand – and of its commitment to safe AI. When the time comes, that awareness could mean a lot more interest in their stock.
Perhaps the DoW’s OpenAI mushroom cloud has a silver lining for Anthropic, in setting them up for a much stronger IPO.
NOTE: The original March 2 post was updated on March 3 with the news that Anthropic is seeking to re-negotiate, along with a few wording changes for clarity.


