OpenAI Said Yes to the Pentagon. Anthropic Said No. Here's What Happened to Both.
On February 27, 2026, the Pentagon gave the AI industry a simple ultimatum: let us use your technology for any lawful purpose, or lose your government contract.
Two companies gave opposite answers within hours of each other.
Anthropic said no. CEO Dario Amodei refused to remove safety restrictions on Claude, specifically around mass domestic surveillance and fully autonomous weapons. "We cannot in good conscience accede to their request," he wrote.
OpenAI said yes. Within hours of Anthropic's refusal, Sam Altman announced a deal giving the Department of War access to OpenAI's models on classified networks.
This article reconstructs what happened, what the contract language actually says, how the market responded, and what the legal fight ahead looks like.
What the Pentagon Asked For
This story starts before February 27. In January 2026, Secretary of War Pete Hegseth issued a strategy memo directing that all Department of War AI contracts include "any lawful use" language within 180 days. The memo specifically called for "models free from usage policy constraints that may limit lawful military applications."
The government wasn't asking to use AI for specific military tasks. It was demanding the removal of vendor-imposed safety restrictions — the guardrails that companies like Anthropic and OpenAI had built into their models.
The distinction matters for understanding what followed. The Pentagon didn't request access for a specific classified project. It sought a contractual framework where AI companies would defer to the government on acceptable use cases.
Anthropic maintained two specific red lines: no mass domestic surveillance of Americans, and no fully autonomous weapons without human oversight. The company argued these represented basic democratic guardrails that any AI system deployed in a military context should respect.
The Pentagon warned Anthropic that if it did not comply by Friday, February 27, it would lose its contract and face designation as a national security risk.
Anthropic didn't comply.
What OpenAI's Contract Says
OpenAI moved quickly. The deal was announced on February 27, giving the Department of War access to OpenAI's models through Amazon Web Services' classified cloud infrastructure, including environments rated for Secret and Top Secret workloads.
OpenAI framed the agreement around three stated "red lines": no domestic mass surveillance, no autonomous weapons, and no high-stakes automated decisions like social credit systems. Altman has argued these safeguards represent meaningful protections — that OpenAI secured restrictions the government might not have otherwise accepted, and that participating in the process allowed the company to shape the outcome from the inside rather than ceding the contract to companies with fewer guardrails.
Several legal analysts have since examined how these safeguards would operate in practice.
According to Lawfare's analysis of the disclosed contract excerpt, OpenAI's guardrails are defined "by reference to existing legal authorities and Defense Department policy, with interpretive discretion resting with the government." The operative clause states the AI system shall not be "intentionally used for domestic surveillance" consistent with "applicable laws, including the Fourth Amendment."
The "intentionally" qualifier. The word "intentionally" narrows the scope of the prohibition. Without a precise definition of "domestic surveillance," enforcement depends on how a use case is characterized — not on its operational effect. If large-scale data collection is framed as incidental to a foreign intelligence operation rather than intentional domestic surveillance, it could fall outside the restriction.
The Electronic Frontier Foundation published a detailed analysis identifying four phrases it considers problematic: "consistent with applicable laws," "intentionally," "deliberate tracking," and "unconstrained monitoring." The EFF's assessment: "The U.S. government doesn't believe 'consistent with applicable laws' means 'no domestic surveillance.'"
TechPolicy.Press identified a concrete precedent for how "incidental" collection operates in practice. In 2020, US Special Operations Command was reported to have purchased bulk location data harvested from a Muslim prayer app downloaded more than 98 million times. The purchase did not deliberately target Americans, but the nature of bulk data collection means a significant number of US citizens were likely included. The case illustrates how the "intentionally" qualifier functions in operational contexts.
The interpretive authority question. Unlike Anthropic's approach, in which the vendor retained enforcement authority over its own red lines, OpenAI's contract makes the Pentagon the primary interpreter of what constitutes a violation. OpenAI personnel, as Altman himself stated, "do not get to make operational decisions." The Pentagon "does not want the company to express opinions on whether certain military actions are good or bad ideas."
Former Army general counsel Brad Carson told The Intercept that the word "surveillance" as the Pentagon interprets it likely "doesn't even include the kind of activities that people are most concerned about" — such as building intelligence dossiers from commercially available personal data.
National security law professor Alan Rozenshtein described the situation as "not sustainable" — saying either OpenAI doesn't fully understand its own agreement, or it's providing "PR coverage" for red lines that don't exist in enforceable form.
OpenAI has not published the full contract text.
The Supply Chain Designation
Anthropic's refusal triggered an escalating government response. On February 27, President Trump directed all federal agencies to stop using Anthropic's technology, with a six-month phase-out period. On March 3, the Department of War formally designated Anthropic a "supply chain risk."
The supply chain risk designation is a legal mechanism created to protect the United States from foreign adversaries infiltrating defense technology. It has previously been applied to companies like Huawei and Kaspersky — Chinese and Russian entities suspected of enabling foreign espionage.
According to available public records, this is the first time the designation has been applied to a domestic American company.
The designation invokes two federal statutes — 10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act of 2018 — both designed for foreign adversary threats. Axios described its application to a domestic AI company as "extremely unusual."
The practical consequences are significant. Under the designation, all defense contractors must review their supply chains quarterly, report within three business days if they've used Anthropic products, and submit mitigation plans within ten days. Anthropic's CFO has stated the company faces potential losses of "hundreds of millions" in 2026 revenue.
Amodei called the government's dual actions "inherently contradictory": the supply chain designation brands Anthropic a security threat, while a simultaneous invocation of the Defense Production Act treats Claude as essential to national security.
On March 9, Anthropic filed lawsuits in two federal courts challenging the designation, arguing it violates the company's First Amendment rights and exceeds the government's statutory authority.
The Fallout at OpenAI
Inside OpenAI, the deal generated internal disagreement.
Caitlin Kalinowski, who led OpenAI's hardware and robotics team, resigned. "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got," she wrote. An open letter signed by nearly 900 employees at Google and OpenAI urged their companies to refuse government requests to deploy AI for domestic mass surveillance or autonomous lethal targeting.
The public response was immediate. ChatGPT app uninstalls surged 295% above the daily baseline on February 28. The QuitGPT movement documented over 2.5 million people pledging to cancel subscriptions. Forbes reported 1.5 million platform departures.
The Altman timeline. On February 27 — the same day the deal was announced — Altman told Axios that OpenAI "shares Anthropic's red lines." On March 3, after the boycott intensified, he released an internal memo acknowledging the rollout was "opportunistic and sloppy" and announced amended contract terms adding explicit language barring domestic surveillance of U.S. persons and excluding the NSA from the agreement. On March 5, he told CNBC that "government should be more powerful than companies" — a position that critics noted was difficult to reconcile with his earlier stated alignment with Anthropic's red lines.
The amended language retains the "intentionally" qualifier, and intelligence agencies other than the NSA remain unmentioned in the exclusions.
The Market Response
The commercial fallout took a shape that may have surprised both companies.
On the weekend following the Pentagon clash, Claude climbed to #1 on Apple's U.S. free apps chart, displacing ChatGPT. By March 2, Claude was pulling 149,000 daily downloads compared to ChatGPT's 124,000, and its free user base had grown by more than 60% since January.
The broader market share data tracks a longer trend. Between August 2025 and February 2026, ChatGPT's U.S. mobile market share fell from 57% to 42%. Claude's U.S. share nearly tripled, from 1.5% to 4%. Google's Gemini nearly doubled, from 13% to 25%. These shifts predate the Pentagon story but accelerated after it.
Enterprise adoption data tells a similar story. According to Ramp procurement data, OpenAI's share of enterprise AI spending fell from 50% to 27% over the same period. Anthropic's climbed to 40%. Organizational adoption of Anthropic nearly doubled, from 29% to 56%.
Brand sentiment data from Pulsar shows a widening perception gap. Before the Pentagon deal, Anthropic held a 61.2/100 positivity score to OpenAI's 55.5. After the deal, Anthropic rose to 63.9 while OpenAI dropped to 49.3 — a 14.6-point gap. Claude's user churn rate improved from 55% to 36%.
Market shifts this large have multiple causes. Claude's technical improvements, Anthropic's growing revenue base, and OpenAI's own product and pricing decisions all contributed to these trends before the Pentagon story broke. But the timing of the acceleration is difficult to separate from the controversy.
The Race to the Bottom
Anthropic wasn't the only company that received the Pentagon's request. The "any lawful purpose" requirement applied across the industry.
Elon Musk's xAI agreed immediately. Grok was deployed on classified networks without restriction. Google agreed to similar terms. Both companies removed some model-level safety restrictions to comply.
Altman himself identified the competitive dynamic, noting that competitors like xAI would gain advantage by declaring they'll "do whatever you want." He described OpenAI's own safety stack as something the government "tolerates" — language that suggests the guardrails exist at the Pentagon's discretion rather than as independently enforceable terms.
Legal scholars have described this as a race-to-the-bottom dynamic: AI safety becomes a competitive disadvantage in government contracting. The company willing to remove the most restrictions wins the contract.
The Pentagon's own planning documents outline the next phase: training AI models directly on classified data. Senator Warren has pressed the Pentagon specifically on the security implications of granting xAI — a company controlled by the world's richest man, who also runs a social media platform — access to classified military networks.
The Governance Question
The deeper structural issue underneath this story concerns what governs AI deployment in the military.
Currently, the answer is procurement contracts — bilateral agreements between individual companies and the Pentagon. Not legislation. Not public regulation. Not oversight bodies with democratic accountability. Contracts.
George Washington University law professor Jessica Tillipman identifies the fundamental limitation: bilateral agreements "lack the democratic accountability, public deliberation, and institutional durability that statutes provide." These contracts bind only the signing parties. They depend on technical controls that vendors maintain post-deployment. The enforcement mechanisms arrive "months or years after the contested use."
If these are Other Transaction agreements — which reporting suggests they are — they operate entirely outside the Federal Acquisition Regulation. No Contract Disputes Act protections. Dispute resolution exists only to the extent the parties specifically negotiated it.
Even termination poses problems as a remedy. By the time a vendor identifies a contract violation and terminates the agreement, the contested use has already occurred. The Anthropic situation illustrates a further complication: when the government designates your technology as essential through the Defense Production Act while simultaneously labeling you a supply chain risk, exercising termination rights becomes functionally difficult.
The administration is simultaneously pursuing the most significant reform to federal acquisition in 41 years, emphasizing "commercial-first mandates" and "faster and more flexible acquisition pathways" — systematically shifting governance responsibility onto individual deals.
Questions about domestic surveillance, lethal targeting, and intelligence oversight, Tillipman writes, "deserve answers from Congress and the courts, not from a procurement framework that was never built to carry them."
What Happens Next
The Anthropic lawsuit hearing is scheduled for March 24 — five days away.
The coalition supporting Anthropic continues to expand. Nearly 150 retired federal and state judges, appointed by both Republicans and Democrats, have filed an amicus brief. Microsoft has joined, as have employees from competing AI companies, including OpenAI itself; Jeff Dean and more than 30 Google and OpenAI employees filed their own brief. Catholic moral theologians submitted an amicus brief on ethical grounds, and twenty-two retired generals warned that abrupt tool changes could harm troops in the field.
The government's legal position is also developing. On March 17, the DOJ filed a brief arguing that an AI vendor retaining unilateral control over its model could "preemptively alter the behavior of its model during ongoing warfighting operations," citing the US-Israeli conflict with Iran as context. The filing calls the blacklist "lawful and reasonable." Legal observers have noted that it does not address why the Pentagon accepted similar restrictions from OpenAI.
On March 19, the Pentagon escalated further. A new filing contends that Anthropic employs foreign nationals from China, citing the PRC's National Intelligence Law; Undersecretary Emil Michael argues this "increases the degree of adversarial risk." The move shifts the government's argument from contract compliance to workforce composition, a characteristic shared broadly across the AI industry, which depends heavily on international talent.
The Council on Foreign Relations has warned the situation carries international implications. China launched five major AI models in the same period, and no Chinese AI firm has received a supply chain risk designation from the US government. The CFR argues that American defense contractors now face greater regulatory uncertainty using US-built AI tools than Chinese open-source alternatives — raising questions about the long-term effects on US technology credibility abroad.
The March 24 hearing will determine whether the supply chain designation stands. But the broader framework for how AI gets deployed in military and intelligence contexts remains what it was before February 27: procurement contracts negotiated behind closed doors, governed by enforcement mechanisms that legal scholars have questioned, with interpretive authority resting largely with one party to the agreement.