
AI War

On Saturday, a joint US-Israeli operation killed Supreme Leader Khamenei, the IRGC Commander, the Defense Minister, and at least 40 others in simultaneous strikes on three separate gatherings of senior officials.

According to the WSJ, US Central Command used Anthropic's Claude during the operation for intelligence assessments, target identification, and simulation of combat scenarios. Not AI as a logistics tool. AI as a thinking participant in kinetic military action at the highest level of lethality. The same models were reportedly used in January to help capture Nicolás Maduro.

Our AI overlords have been positioning themselves as mere tools: “We’re but peasants, similar to utility providers. We provide the electricity, but you decide how to use it.”

If Claude was used as extensively as reports indicate, it marks an evolution from data provider to thinking contributor. AI was always going to have a significant military role: it was a matter of when, not if. But it sure sounds like Claude played a pretty active role in the planning and execution of the missions.

Like so many other things related to AI, this is the least we will ever see AI used in military actions; it only increases from here. And the pace of acceleration is staggering.

Before the actual strikes on Saturday, there was a metaphorical war being waged between the US government, Anthropic, and OpenAI.

Trump Cancels Claude

Just hours before the strikes, Trump cancelled the Claude government contract over Anthropic’s refusal to remove two carve-outs from it:

  1. No mass domestic surveillance of Americans
  2. No fully autonomous weapons targeting

Here are the two sides in a nutshell.

Pentagon: “Those uses are already illegal, so there's no need to write explicit carve-outs into the contract. Just agree to ‘all lawful purposes’ and existing law protects you.”

Anthropic: “We don't trust that. ‘Lawful’ is subject to interpretation, and we want the prohibitions explicitly written in, not assumed from existing law. If you're insisting on language that lets you disregard those safeguards at will, you're signaling what you actually want.”

The Pentagon's argument was essentially "trust the law." Anthropic's was "we want it in writing." The government said no. Anthropic’s morals were tested and they didn’t blink.

Secretary of Defense/Pull-up Machine Pete Hegseth declared Anthropic a supply chain risk and instructed all government departments and vendors to immediately cease use.

The logic is reasonable. If Anthropic can decide the military can't use Claude for certain operations, then a private company has effective veto power over sovereign decisions.

Imagine Salesforce, LoanBoss, Yardi, DealPath, or vts unilaterally deciding to lock you out mid-quarter. Your data, your workflows, your operation…held hostage to a vendor's institutional values. The Pentagon just lived that scenario at national scale.

Anthropic's refusal was the first time a major technology vendor asserted moral agency over the downstream use of its product at scale, in a national security context, and refused to back down under federal pressure. Electricity doesn't refuse to power a weapons factory. Lockheed doesn't get to veto what the F-35 targets.

But Anthropic said: we've thought about this, we have a position, and we won't make concessions on our morals. I can respect that.

You know who doesn’t have that problem? Sam Altman.

The same guy who says with a straight face that OpenAI’s primary purpose is to benefit humanity swooped in and signed a contract with the government just hours after Anthropic refused. Obviously, Altman didn’t mind removing the language that gave the federal government heartburn with Claude. Never mind that on CNBC that morning he said he shared Anthropic’s red lines.

The government may ultimately keep Claude in the mix; Saturday's results speak for themselves. But the deeper issue is vendor concentration. If OpenAI's next CEO has stronger convictions than Altman, access changes overnight. That's a dangerous single point of failure when the stakes are national security. Running multiple AI systems in parallel isn't redundancy; it's risk management.

It also raises the same question for all of us: should a handful of people get to decide whether their tools have moral opinions about our business?