Claude AI: The day Dario Amodei said no to the Pentagon (and why it changes everything)
There are moments in a career when you feel the tectonic plates shift beneath your feet. This Friday, February 27, 2026, will go down as one of those earthquakes. I've spent the week talking to sources in Silicon Valley, poring over statements on Truth Social, and watching the markets swing. And let me tell you: what's happening to Claude AI isn't just a story about a lost contract. It's the end of an era.
The man who said no to war
Picture the scene. Dario Amodei, the head of Anthropic, a former OpenAI employee with the calm gaze of a philosopher rather than a startup founder, is facing Pete Hegseth, Trump's Secretary of Defence. The stakes? A US$200 million contract, but more importantly, access to the Pentagon's classified networks for Claude AI. Hegseth is clear: lift all restrictions, or get out. No quarter given. What Washington wants is use for "lawful purposes" — read, unfettered access for mass surveillance or integration into lethal autonomous weapons systems. The ultimatum expires at 5:01 pm local time. Amodei doesn't budge. His position? "In a limited number of cases, we believe AI can harm democratic values, rather than defend them." He reiterates his two non-negotiable red lines: no domestic surveillance of American citizens, and no autonomous weapons deciding to kill without human oversight. It's a firm, polite, but unwavering "no". For what it's worth, some whisper that this tension was heightened after the alleged use of Claude AI during an operation targeting Nicolás Maduro in January, a scenario that sent chills through the Anthropic teams.
Trump's thunderbolt and the 'banishment'
The response wasn't long in coming. And it bears the unmistakable stamp of the Trump era. On Truth Social, the US president posts a vengeful message: "We don't need it, we don't want it, and we won't work with them anymore." He accuses the company of being "radical left and woke" and wanting to "dictate to our great army how to fight and win wars." But the most devastating part isn't the insult. It's the Pentagon's decision to designate Anthropic as a "supply chain risk." Translation: any company — from Lockheed Martin to the smallest Defence startup — that uses Claude AI will be automatically excluded from government contracts. It's commercial death. Pete Hegseth, for his part, talks outright of "betrayal." Meanwhile, and it's no small irony, Sam Altman announced on X that OpenAI was taking Anthropic's place in the classified networks, while swearing blind that it would respect the same "red lines." The timing is, shall we say... interesting.
The 'SaaSpocalypse' and the billion-dollar waltz
But make no mistake. If Washington is turning its back on Claude AI, Wall Street, on the other hand, is absolutely crazy about it. In four weeks, Anthropic triggered five seismic shocks in the markets, a phenomenon traders have dubbed the "SaaSpocalypse."
- Early February: The launch of legal tools sends Thomson Reuters plunging 16% and LegalZoom 20% in a single day. Fear is palpable: what if Claude AI replaces lawyers?
- Mid-February: Claude Opus 4.6 takes down financial data giants like FactSet.
- The final blow: Claude Code Security and its announcement of modernising the COBOL language costs IBM 13.2% in one session, a single-day drop unseen since the dot-com bubble burst. IBM, the dinosaur, gets its ankle bitten by a virtual coder.
Simply put, the startup valued at US$380 billion after a recent US$30 billion raise is redrawing the map of global tech, whether Washington likes it or not.
OpenAI, the awkward winner and the killer T-shirt
While Dario Amodei plays the lone crusader, Sam Altman attempts a balancing act. He signs with the Devil, but assures everyone he wants to "defuse tensions" and asks the department to offer the same conditions to all AI companies. A bit like borrowing your neighbour's car after reporting them to the tax office. On the communications front, it's a disaster for OpenAI. On Saturday, the Claude AI app overtook ChatGPT on the US App Store. A powerful symbol.
And this is where popular culture gets involved. In Silicon Valley, black hoodies and T-shirts are the new battlefields. You already see developers proudly sporting the famous "You are absolutely correct" Claude AI T-shirt, an ironic nod to the AI's overly agreeable responses. The boxy Anthropic Claude AI tee is becoming the uniform of those who refuse to "sell their soul to the military-industrial complex." It's a movement. It's bigger than just a product.
The shadow of Jean-Claude, Jacinda and the culture war
For us, watching from New Zealand, this psychodrama takes on a particular resonance. We watch with a mix of fascination and dread. On one hand, you have a philosophical debate worthy of a human rights commission: how far can technology serve the state without threatening it? When I hear Trump call Anthropic "woke," I can't help but think of certain figures in our own cultural landscape. Imagine Jean-Claude Van Damme in a political sci-fi film, playing the general who absolutely wants to control AI. Or, closer to home, imagine someone like Jacinda Ardern taking up the cause of AI ethics to protect the most vulnerable. These archetypes cross the Pacific. New Zealand, with its own Defence Force and startups, watches this American precedent with anxiety: what if tomorrow, we're asked to choose between values and contracts?
The business of conscience
So, what lesson should we draw from this chaos? Just one, but it's crucial for investors and decision-makers. The era when ethics was just a PR department is over. Today, Anthropic's "Constitution," the document guiding Claude AI, has market value. Refusing to create erotic "AI companions", refusing ads, refusing autonomous weapons... all of this builds invaluable brand capital. Yes, Anthropic has had to loosen some of its safety rules in the face of competition; that's the reality of the market. But on the essentials, they hold firm. And this "conscience of Silicon Valley" positioning attracts talent, retains clients (8 of the 10 largest US companies use Claude AI), and ultimately justifies a US$380 billion valuation. It's a risky bet, but a devilishly profitable one.
Meanwhile, the Pentagon has to manage a costly transition to other models, and OpenAI has to prove it can be both the government's favourite and a guardian of liberties. Good luck, Sam. You're going to need it.
I'm keeping an eye on those engineers signing open letters, on those ironic T-shirts, and on that guy, Dario Amodei, who preferred to lose a US$200 million contract rather than lose his soul. In the temple of technology, that's what you call, I believe, a prophetic gesture.