
The Anthropic Ban: What the Numbers Actually Say

February 28, 2026 · Mary Womack · 13 min read

Deep analytical foundation for this week's issue. Hard data, market penetration, enterprise exposure, legal precedent, and the double standard the Pentagon created in a single evening.

1. The Business Model Gap: Why This Is Not Apples to Apples

The Pentagon's decision treats AI companies as interchangeable. They are not. Anthropic and OpenAI have fundamentally different business architectures, and those differences determine who actually gets hurt by this ban.

Revenue Architecture Comparison

| Metric | Anthropic (Claude) | OpenAI (ChatGPT/GPT) |
| --- | --- | --- |
| Revenue (Feb 2026 ARR) | $14 billion | $20 billion |
| API/Enterprise Revenue % | 70-80% from B2B API | ~25-30% from enterprise/API |
| Consumer Revenue % | 20-25% | ~55-60% consumer subscriptions |
| Business Customers | 300,000+ | 3 million paying business users |
| $100K+ Annual Customers | 7x growth YoY | Not disclosed at this tier |
| $1M+ Annual Customers | 500+ | Not disclosed |
| Fortune 10 Penetration | 8 of 10 | Not disclosed at this level |
| API Revenue (2025) | $3.8 billion | $1.8 billion |
| Growth Rate | 10x annual, 3 consecutive years | ~3x annual |
| Enterprise LLM Spending Share | 40% | 27% |
| Gross Margin | ~40-50% (improving to 77% by 2028) | ~33% |
| Cash Flow Positive Target | 2027 | 2030 |

The core distinction: Anthropic makes 70-80% of its money from enterprises embedding Claude into their products and workflows via API. OpenAI makes 55-60% of its money from individual consumers paying $20/month for ChatGPT.

When the Pentagon bans Anthropic, it is banning the company that powers enterprise software. When it accepts OpenAI, it is accepting the company that powers a consumer chatbot. The enterprise exposure profiles could not be more different.

Why Anthropic Leads Enterprise API Revenue 2:1

In 2025, Anthropic generated $3.8 billion from API sales alone, more than double OpenAI's $1.8 billion, according to The Information.

This gap matters because API revenue represents deep integration. These are not casual users. These are companies that have built Claude into their products, their internal tools, their customer-facing systems. Switching costs are enormous. Token-based pricing means one customer's product launch can 10x their usage overnight.
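The token-pricing dynamic described above can be sketched with a few lines of arithmetic. The per-million-token rates and usage volumes below are hypothetical illustrations, not Anthropic's actual pricing or any real customer's numbers:

```python
# Illustrative sketch of token-based API billing (hypothetical rates).
# Per-token pricing means spend scales directly with usage, which is why
# a single customer's product launch can multiply their bill overnight.

def monthly_api_cost(input_tokens: int, output_tokens: int,
                     usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Cost in USD for one month of usage at per-million-token rates."""
    return ((input_tokens / 1e6) * usd_per_m_input
            + (output_tokens / 1e6) * usd_per_m_output)

# Hypothetical enterprise customer: 2B input / 500M output tokens per month,
# at assumed rates of $3 per million input and $15 per million output tokens.
baseline = monthly_api_cost(2_000_000_000, 500_000_000, 3.00, 15.00)

# A product launch that 10x's traffic 10x's token volume -- and the bill.
post_launch = monthly_api_cost(20_000_000_000, 5_000_000_000, 3.00, 15.00)

print(f"baseline:    ${baseline:,.0f}/month")
print(f"post-launch: ${post_launch:,.0f}/month")
print(f"multiplier:  {post_launch / baseline:.0f}x")
```

The point of the sketch: revenue tied to token volume moves with the customer's own product success, which is what makes API revenue both deeply integrated and highly elastic.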

Anthropic commands 40% of enterprise LLM spending, surpassing OpenAI's 27% and Google's 21%. The company that the Pentagon just banned is the market leader in the exact segment that matters for defense contractors: enterprise software infrastructure.

2. Claude's Enterprise Software Penetration: The Invisible Infrastructure

Claude is not a standalone product most people interact with directly. It is infrastructure that runs inside other products. This is what makes the Pentagon's ban unprecedented in its collateral damage.

Where Claude Is Embedded (Verified)

| Platform | Integration Type | Status | Default? |
| --- | --- | --- | --- |
| Microsoft 365 Copilot | Subprocessor powering Researcher, Agent Mode, Copilot Studio, Word/Excel/PPT agents | Enabled Jan 7, 2026 | YES - default ON for most commercial tenants |
| ServiceNow | Default model for Build Agent (agentic workflow/app development) | Announced Jan 28, 2026 | YES - default model |
| AWS Bedrock | Foundation model available on FedRAMP High, IL4/5 authorized cloud | Active | Available (customer selects) |
| Google Cloud Vertex AI | Foundation model on GCP enterprise platform | Active | Available (customer selects) |
| Microsoft Azure AI Foundry | Claude Sonnet, Opus, Haiku available via serverless deployment | Active/Preview | Available (customer selects) |
| Slack (via MCP) | Claude integrates with Slack through Model Context Protocol | Active | User-initiated |
| Salesforce Agentforce | Claude available as reasoning engine for AI agents | Active | Available (customer selects) |
| GitHub (via Claude Code) | 4% of all public commits authored by Claude Code as of Feb 2026 | Active | Developer tool |
| Palantir Maven Smart System | Only frontier AI on DoD classified networks | Active (6-month wind-down) | Primary AI model |

Critical detail on Microsoft 365 Copilot: As of January 7, 2026, Anthropic models are enabled by default for most commercial M365 tenants worldwide (except EU/EFTA/UK, where default is off; and government clouds GCC/GCC High/DoD, where unavailable). Anthropic operates as a Microsoft subprocessor under Microsoft's Product Terms and DPA. Unless a tenant administrator specifically opted out before January 7, Claude is running inside their M365 environment right now.

This means any defense contractor using M365 Copilot that did not specifically disable the Anthropic toggle now has Claude processing data in their environment. They may not know it.

Critical detail on ServiceNow: Claude is the default model for ServiceNow Build Agent. ServiceNow processes 80 billion workflows annually. ServiceNow has deployed Claude to 29,000+ of its own employees. Their COO called Claude's coding capabilities "definitely the market leading." Usage of Build Agent is expected to quadruple over the next 12 months.

3. Claude Code and Agentic AI: Where Anthropic Pulled Away

The agentic AI category is where Claude's lead is widest and most consequential. This is not a close race.

  • $2.5 billion — Claude Code Annualized Revenue (Feb 2026). Doubled since January 1, 2026. Hit $1B ARR in ~6 months after May 2025 launch. Faster to $1B than ChatGPT, Slack, or any enterprise software product in history.
  • 4% — GitHub Public Commits Authored by Claude Code. SemiAnalysis, February 2026. 135,000+ commits per day. 42,896x growth in 13 months. Projected 20%+ of daily commits by end of 2026.
  • Enterprise users = majority of Claude Code revenue. Business subscriptions quadrupled since January 1, 2026. Accenture training 30,000 professionals on Claude (largest enterprise deployment). 70-90% of Anthropic's internal engineering code written by Claude Code.

Agentic AI Competitive Landscape

| Company | Agentic Product | Revenue Signal | Enterprise Penetration | Coding Market |
| --- | --- | --- | --- | --- |
| Anthropic | Claude Code, Cowork, Agent Teams (Opus 4.6) | $2.5B ARR (Claude Code alone) | 40% enterprise LLM spend; 300K business customers | 4% of GitHub commits, growing to 20%+ |
| OpenAI | GPT-5.2 Codex, Custom GPTs, Assistants API | $1B ARR from API (total); Codex revenue not broken out | 27% enterprise LLM spend; 3M paying business users | GitHub Copilot (Microsoft) had a 1-year headstart, "barely made inroads" per SemiAnalysis |
| Google | Gemini 2.5, Project Astra agents | Gemini revenue through Alphabet; ~$1.9B (DeepMind) | 21% enterprise LLM spend | Limited standalone coding presence |
| xAI (Grok) | Grok integrated in X platform | ~$428M ARR | 1.5% enterprise adoption | Minimal |
| Meta (Llama) | Open-source Llama models | No direct LLM revenue (drives ecosystem) | Open-source adoption; no direct enterprise sales | Used in third-party tools |

SemiAnalysis (the industry's most cited semiconductor/AI research firm) published in February 2026: GitHub Copilot and Office Copilot "had a year headstart and barely made any inroads as a product" compared to Claude Code. Microsoft's CEO Satya Nadella has "literally stepped in as the product manager of Microsoft AI." Microsoft itself has widely adopted Claude Code internally, with even non-developers reportedly encouraged to use it.

The Pentagon's chosen replacement, Grok, has 1.5% enterprise adoption and $428M in revenue. A Pentagon official told Axios that replacing Claude with Grok is "not like-for-like," and a former senior official called it "inferior."

4. Defense Industrial Base Impact: The Compliance Nightmare by the Numbers

  • 100,000+ DIB Companies Affected. CISA defines the DIB as 100,000+ companies and subcontractors performing under DoD contract. European Parliament research puts it at 60,000+ companies and 1.17 million employees.
  • ~70% Small Business Percentage of DIB. Congressional Research Service: nearly 70% of companies doing business with DoD are small businesses. Small business participation has already fallen 40%+ over recent decades.
  • 79% Companies Using Both Anthropic and OpenAI. Ramp data: 79% of OpenAI customers also pay for Anthropic. 1 in 5 businesses on Ramp pay for Anthropic, up from 1 in 25 a year ago.

The Invisible Software Problem

Hegseth's order states: "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

The problem: Claude is not a box you can find on a shelf. It is software running inside other software. Here is what a defense contractor's compliance team must now audit:

| Software Platform | Claude Exposure | Contractor Impact |
| --- | --- | --- |
| Microsoft 365 Copilot | Claude enabled by default since Jan 7 for commercial tenants; unless an admin opted out, Claude is active | Any contractor using M365 Copilot |
| ServiceNow | Claude is the default model for Build Agent; processes 80B workflows/year | Any contractor using ServiceNow for IT/workflow management |
| AWS Bedrock | Claude available on FedRAMP High, IL4/5; Amazon invested $8B in Anthropic | Contractors on AWS GovCloud face ambiguity |
| Salesforce | Claude powers Agentforce AI agents, available in Slack via MCP | Any contractor using Salesforce with AI features |
| Developer Tools | Claude Code: 4% of all GitHub commits; used by engineers at Microsoft, Google, and enterprises globally | Any contractor whose developers use Claude Code |

Unlike Section 889 (Huawei/ZTE), which banned specific hardware you could physically locate and remove, this ban targets software embedded at the platform level. A contractor cannot look inside Microsoft 365 Copilot's routing logic to determine which AI model processed their query at 2:47pm on a Tuesday.
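The audit problem can be made concrete with a minimal inventory-screening sketch. The exposed-platform list is drawn from the table above; the contractor's software stack and the matching logic are hypothetical examples, not a real compliance tool:

```python
# Minimal sketch of the compliance problem: screen a software inventory
# against platforms with known embedded-Claude exposure. The platform
# list comes from the table above; the inventory is hypothetical.

CLAUDE_EXPOSED = {
    "microsoft 365 copilot": "Claude enabled by default since Jan 7, 2026",
    "servicenow": "Claude is the default model for Build Agent",
    "aws bedrock": "Claude available on FedRAMP High / IL4-IL5",
    "salesforce agentforce": "Claude available as agent reasoning engine",
    "claude code": "Direct Anthropic developer tool",
}

def screen_inventory(inventory: list[str]) -> list[tuple[str, str]]:
    """Return (item, exposure note) for every inventory entry matching a
    known Claude-embedded platform. Name matching is the best a contractor
    can do from the outside -- the platform's internal model-routing logic
    is invisible to its customers."""
    flags = []
    for item in inventory:
        for platform, note in CLAUDE_EXPOSED.items():
            if platform in item.lower():
                flags.append((item, note))
    return flags

# Hypothetical contractor stack:
stack = ["Microsoft 365 Copilot (E5)", "ServiceNow ITSM", "Jira", "AWS Bedrock"]
for item, note in screen_inventory(stack):
    print(f"FLAG: {item} -> {note}")
```

Even this toy version surfaces the core asymmetry: the screen can tell a contractor *where* Claude might be running, but never whether a given query was actually routed to it.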

Section 889 vs. Anthropic Designation: A Comparison

| Factor | Section 889 (Huawei/ZTE) | Anthropic Designation |
| --- | --- | --- |
| Target | Foreign adversary companies (China) | Domestic American company |
| Basis | Evidence of espionage/sabotage risk | Contract negotiation dispute |
| Product Type | Hardware (identifiable, removable) | Software (embedded, often invisible) |
| Transition Period | Multi-year compliance timeline | 6 months for agencies; none stated for contractors |
| Prior Precedent | Multiple foreign entities (Kaspersky, Acronis) | None; first domestic company ever |
| Risk Assessment Required | Yes (statutory requirement) | Unclear if conducted |
| Congressional Notification | Required before action | Does not appear to have occurred |

5. Legal Analysis: Changing Agreed-Upon Contract Terms

The Pentagon awarded Anthropic a contract worth up to $200 million in July 2025. That contract included Anthropic's acceptable use policy, which contained the two restrictions at the center of this dispute. Those restrictions were known, agreed to, and the contract was executed.

What followed was an attempt to retroactively alter the terms. The Pentagon's demand that Anthropic make Claude available for "all lawful purposes" is a unilateral renegotiation of terms both parties had already accepted.

Anthropic's Legal Arguments (From Public Statements)

The designation is "legally unsound." Supply chain risk authority under 10 USC 3252 / FASCSA is designed for cases involving sabotage, subversion, or manipulation by adversaries. No evidence of any such risk has been presented.

The statute requires a formal risk assessment and Congressional notification before designation. Legal experts say neither appears to have occurred.

Hegseth's statement that "no contractor may conduct any commercial activity with Anthropic" exceeds statutory authority. Anthropic argues the law can only restrict use of Claude as part of DoD contracts, not contractors' broader commercial activity with other customers.

The designation sets "a dangerous precedent for any American company that negotiates with the government."

Legal Expert Assessment

Charlie Bullock, senior research fellow at the Institute for Law and AI, told Wired: the government cannot make the designation without completing a risk assessment and notifying Congress. Neither appears to have occurred.

Amos Toh, senior counsel at the Brennan Center for Justice (NYU): the supply chain risk designation requires proof of risk from sabotage, subversion, or manipulation by an adversary. "It is not at all clear how adversaries could exploit Anthropic's usage restrictions on Claude to sabotage military systems."

The statute also requires that the Pentagon demonstrate "good faith effort" to pursue less intrusive measures. Toh questioned whether this standard was met given how quickly the dispute escalated.

Independent analyst Shenaka Anslem Perera: "It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?"

The Precedent Problem

Every prior use of supply chain risk designation has targeted foreign entities: Huawei, ZTE, Kaspersky, Acronis. All involved evidence or reasonable suspicion of adversarial government influence.

This is the first time the United States government has designated a domestic American company a supply chain risk. The trigger was not espionage. Not sabotage. Not data exfiltration. The trigger was a contract negotiation disagreement over two usage restrictions that, per Anthropic, "have not affected a single government mission to date."

6. The Double Standard: OpenAI Got the Same Deal Hours Later

The timeline matters. On the evening of February 27, 2026, within hours of the Anthropic designation:

OpenAI CEO Sam Altman announced his company reached a deal with the Pentagon to deploy on classified networks. The deal includes the same two restrictions Anthropic was punished for demanding.

What OpenAI's Agreement Contains

| Restriction | Anthropic's Position (Rejected) | OpenAI's Agreement (Accepted) |
| --- | --- | --- |
| Mass Domestic Surveillance | Prohibited in acceptable use policy | Prohibited. Altman: "prohibitions on domestic mass surveillance" |
| Autonomous Weapons | Prohibited in acceptable use policy | Prohibited. Altman: "human responsibility for the use of force, including for autonomous weapon systems" |
| Additional Restrictions | None beyond the two above | Cloud-only deployment, forward-deployed OpenAI engineers, layered safety stack, model can refuse tasks without government override |

Key quotes from OpenAI's announcement:

  • Altman to employees (per BBC): "We share with Anthropic" the two red lines.
  • Altman publicly: "Asking [the Department of War] to offer these same terms to all AI companies."
  • Fortune: "Those limitations are essentially the same ones that Anthropic sought."
  • CNN: "It is not clear what is different about OpenAI's deal versus what Anthropic wanted."
  • Ilya Sutskever (OpenAI co-founder, now at SSI): "It is extremely good that Anthropic has not backed down, and it is significant that OpenAI has taken a similar stance."

The OpenAI deal actually contains more restrictions than Anthropic ever demanded. Cloud-only deployment. Forward-deployed OpenAI engineers monitoring usage. A safety stack the model enforces that the government cannot override. Anthropic asked for two restrictions. OpenAI embedded at least six.

7. The AWS Complication

Amazon has invested $8 billion in Anthropic. Claude runs on AWS Bedrock, including FedRAMP High and IL4/5 authorized environments. AWS is the single largest cloud provider to the U.S. government.

A strict reading of "no contractor may conduct any commercial activity with Anthropic" could implicate Amazon's government cloud business. If hosting Claude constitutes "commercial activity with Anthropic," does that extend to AWS itself? The designation's overbreadth creates absurd results that expose its true nature: this is a punitive action dressed up as a security measure.

Anthropic argues the Secretary lacks authority to extend the ban beyond DoD-contract usage of Claude. The courts will decide. In the meantime, AWS's government cloud customers are left in compliance limbo.

8. Key Statistics for Tuesday's Newsletter

Revenue and Scale

  • $14B ARR (Anthropic, Feb 2026) vs $20B ARR (OpenAI). Anthropic grew from $1B to $14B in 14 months. No precedent in B2B software history.
  • API revenue: Anthropic $3.8B vs OpenAI $1.8B. Anthropic earns 2x more from the enterprise integrations the ban directly threatens.
  • Claude Code: $2.5B ARR, doubled since Jan 1. Hit $1B in 6 months. Now 4% of all GitHub commits, projected 20%+ by EOY 2026.
  • Enterprise LLM spending: Anthropic 40%, OpenAI 27%, Google 21%. The banned company leads.
  • 79% of OpenAI enterprise customers also pay for Anthropic. These companies cannot simply switch.
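The "$1B to $14B in 14 months" figure above implies an extraordinary compound growth rate. A quick arithmetic sanity check, using only the round numbers from the bullet (illustrative, not a financial model):

```python
# Implied compound monthly growth for "$1B to $14B ARR in 14 months".
start_arr, end_arr, months = 1.0, 14.0, 14  # $B, $B, months

# Solve start * (1 + g)^months = end for the monthly rate g.
monthly_growth = (end_arr / start_arr) ** (1 / months) - 1

# Annualize the monthly rate for comparison with typical SaaS growth.
annualized = (1 + monthly_growth) ** 12 - 1

print(f"implied monthly growth:  {monthly_growth:.1%}")
print(f"implied annualized rate: {annualized:.1%}")
```

Roughly 20% compound growth every month, sustained for over a year, is the scale behind the "no precedent" claim.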

Defense Industrial Base

  • 100,000+ DIB companies affected. ~70% are small businesses. Small business participation already down 40%+ over recent decades.
  • Claude enabled by default in M365 Copilot (Jan 7, 2026). Claude is default model in ServiceNow Build Agent (Jan 28, 2026). Both platforms are ubiquitous in defense contracting.
  • Contractors get 6 months for agency wind-down. No stated compliance timeline for the "no commercial activity" mandate.

The Double Standard

  • Same two red lines accepted from OpenAI on the same evening they were rejected from Anthropic.
  • OpenAI's deal actually contains MORE restrictions (cloud-only, engineers on-site, safety stack, model can refuse tasks).
  • First time supply chain risk designation used against a domestic company. All prior uses: foreign adversaries.
  • Legal experts: risk assessment and Congressional notification required but do not appear to have occurred.

Classified Network Gap

  • Claude is the only frontier AI on DoD classified networks. Used in the operation to capture Nicolas Maduro.
  • Pentagon official: replacing with Grok would be a "huge pain in the ass." Grok has 1.5% enterprise adoption, $428M revenue.
  • Former senior official: "near 100% certainty of unexpected, bad consequences for the military."

Source Verification Status

| Claim | Status | Source |
| --- | --- | --- |
| Anthropic $14B ARR | VERIFIED | Anthropic Series G announcement; Sacra; SaaStr |
| 70-80% revenue from B2B/API | VERIFIED | Sacra; Business of Apps; SaaStr |
| 40% enterprise LLM spending share | PLAUSIBLE | Deep Research Global analysis |
| Claude Code $2.5B ARR | VERIFIED | Anthropic Series G; SaaStr; SemiAnalysis |
| 4% GitHub commits | VERIFIED | SemiAnalysis (Feb 2026) |
| M365 Copilot default Jan 7 | VERIFIED | Microsoft Learn docs; UC Today; Windows Forum |
| ServiceNow default model | VERIFIED | ServiceNow press release; CIO; Axios |
| 100,000+ DIB companies | VERIFIED | CISA official sector page |
| ~70% DIB small business | VERIFIED | Congressional Research Service |
| 79% OpenAI customers also pay Anthropic | VERIFIED | Ramp data via SaaStr |
| OpenAI same two red lines | VERIFIED | OpenAI announcement; BBC; NPR; Fortune; CNN; Axios |
| First domestic supply chain risk designation | VERIFIED | Fortune; legal experts cited |

This document contains analysis and inference clearly labeled. It is not legal advice.
