
Pentagon Signs AI Deals With 8 Firms — But Not Anthropic

Key takeaways:

  • The Pentagon signed classified AI deployment deals with eight companies: SpaceX, OpenAI, Google, NVIDIA, Reflection AI, Microsoft, AWS, and Oracle.
  • Anthropic was excluded — and officially labeled a “supply-chain risk” — after refusing to remove safety guardrails covering autonomous weapons and domestic surveillance.
  • All eight signing companies agreed to “any lawful use” of their technology inside Pentagon classified networks.
  • Anthropic sued the Trump administration in March to reverse its blacklisting; the case is ongoing.
  • Despite the DoD ban, the NSA is reportedly still using Anthropic’s Mythos model for cybersecurity work, per Axios.

The Pentagon has signed formal agreements with eight AI companies — SpaceX, OpenAI, Google, NVIDIA, Reflection AI, Microsoft, Amazon Web Services, and Oracle — to deploy frontier AI on its most sensitive classified networks. Anthropic was not included. The Claude maker was officially blacklisted as a supply-chain risk — the first US AI company to receive that designation — after it refused to allow the DoD to use its models without safety constraints on autonomous weapons and mass surveillance.

What Did Anthropic Actually Refuse?

The dispute turns on contract language. According to TechCrunch, Anthropic’s standard government contracts included clauses restricting use of its AI for two specific applications: domestic mass surveillance and autonomous weapons systems. The Trump administration wanted those clauses removed. Anthropic declined.

The Pentagon responded by labeling Anthropic a supply-chain risk — effectively barring all DoD contractors from using its products. Anthropic filed suit against the Trump administration in March to reverse the designation. That litigation is ongoing.

The eight companies that did sign agreed to “any lawful use” of their technology. That’s a materially broader authorization than Anthropic was willing to grant.

What Are IL6 and IL7 Networks?

The new agreements cover the DoD’s IL6 and IL7 classification tiers — the two highest security levels for cloud-based Defense Department systems. IL6 is required to process classified defense data; IL7 handles top-secret and sensitive compartmented information (SCI) workloads.

The intended use cases span “data synthesis, situational understanding, and augmenting warfighter decision-making in complex operational environments,” per the DoD. In plainer terms: battlefield intelligence, logistics support, and command assistance running on the same networks that carry classified communications and targeting data.

What Does This Mean for Enterprise AI Buyers?

The business question this raises is sharper than it appears: when AI vendors negotiate with governments, do your enterprise contract terms hold?

Anthropic built a market position partly on its safety-first stance. Many enterprise buyers chose Claude specifically because Anthropic maintained stricter usage policies than competitors. The Pentagon dispute is the first time that position has cost Anthropic a major customer segment — publicly, with a formal blacklisting, not just a lost RFP.

The eight companies that said yes now hold first-mover access to one of the most significant classified AI deployment opportunities in US history. The financial and strategic upside is substantial. For every AI vendor with government ambitions, that competitive pressure doesn't disappear; it intensifies.

For enterprise procurement teams evaluating AI vendors, the right question isn’t which vendor has the best stated safety policy. It’s whether those policies are contractually binding, specific, and enforceable — or aspirational language that dissolves under pressure from a large enough buyer.

The NSA wrinkle is worth noting: despite the DoD-wide ban, Axios reported that the NSA is still running Anthropic’s Mythos model for cybersecurity work. Pentagon tech chief Emil Michael acknowledged the situation but called Mythos “a separate issue.” The blacklist is being applied inconsistently across agencies. This is not a clean story of principled enforcement — the details are messier, and worth watching.

For a deeper look at how enterprise AI governance frameworks are being stress-tested right now, see our analysis of the agentic stack problem.


Frequently Asked Questions

Why did the Pentagon exclude Anthropic from its AI deals?

The Pentagon excluded Anthropic because the company refused to remove contractual guardrails barring its AI from use in autonomous weapons and domestic mass surveillance. The Trump administration wanted unrestricted “any lawful use” terms. Anthropic declined, and the DoD labeled it a supply-chain risk — the first such designation for a US AI company — barring Pentagon contractors from using its products.

Which AI companies signed deals with the Pentagon in May 2026?

Eight companies signed classified AI deployment agreements with the Pentagon: SpaceX, OpenAI, Google, NVIDIA, Reflection AI, Microsoft, Amazon Web Services, and Oracle. All agreed to “any lawful use” of their technology on IL6 and IL7 classified networks.

What are IL6 and IL7 military networks?

IL6 and IL7 are the Defense Department’s two highest cloud security classification tiers. IL6 is required for processing classified defense data; IL7 handles top-secret and sensitive compartmented information (SCI) workloads. Deploying AI on these networks means handling some of the most sensitive data in the US government.

Does the Pentagon blacklist affect Anthropic’s commercial enterprise business?

The blacklist bars Pentagon contractors from using Anthropic products, but does not directly restrict commercial sales. However, the reputational effect is real: the company has lost access to a major potential buyer class, and the eight signing firms now have a significant head start in classified government AI deployment.

What did Anthropic do after being blacklisted?

Anthropic filed a federal lawsuit against the Trump administration in March 2026 seeking to reverse the supply-chain risk designation. As of May 2026, that litigation is ongoing. Separately, the NSA reportedly continues using Anthropic’s Mythos model for cybersecurity despite the broader DoD ban, suggesting the blacklist is applied unevenly across federal agencies.


Last updated: May 3, 2026.

Advanced AI covers AI strategy and policy for executives navigating enterprise adoption.