Pentagon vs. Anthropic: Autonomous Weapons AI Guardrails and the Governance Crisis for Enterprise AI Vendors

Authors: Cloud Security Alliance AI Safety Initiative
Published: 2026-03-09

Categories: AI Governance, AI Security, Supply Chain Risk, Enterprise AI


Key Takeaways

The breakdown of contract negotiations between Anthropic and the U.S. Department of Defense, culminating in an extraordinary government action in early March 2026, marks one of the most consequential tests to date of whether commercial AI vendors can maintain substantive safety guardrails when those guardrails conflict with state interests. The Pentagon’s decision to designate Anthropic a “Supply-Chain Risk to National Security” — the first such designation ever applied to an American company — was triggered not by any security failure, but by Anthropic’s refusal to accept a contract clause permitting “any lawful use” of its Claude models, including application to autonomous weapons and domestic surveillance [1][2]. The episode carries three interlocking implications for enterprise security practitioners: vendor guardrails are negotiable under government pressure, the legal frameworks enterprises rely on to govern AI use are not adequate to the moment, and the precedent set by the supply chain designation creates an entirely new category of operational risk for organizations deploying commercial AI.

The dispute also surfaced a critical enterprise AI risk that has received insufficient attention: the limits of AI acceptable use policies as a governance mechanism. If a major AI vendor with an explicitly safety-centered mission can be pressured to abandon its usage restrictions under threat of government retaliation, the viability of contractual guardrails as a primary control for governing high-stakes AI use must be reconsidered. Enterprises should not treat vendor usage policies as a substitute for independent governance frameworks.


Background

From Contract to Crisis: The July 2025 Agreement and Its Unraveling

Anthropic signed a two-year, $200 million contract with the Pentagon in July 2025, becoming the first AI laboratory to integrate its frontier models into mission workflows on classified networks [3]. The contract was framed as a partnership to “prototype frontier AI capabilities that advance U.S. national security,” and at the time represented an alignment between Anthropic’s safety-centric positioning and the DoD’s stated commitment to the responsible use of AI. That alignment proved short-lived.

Renegotiations broke down in February 2026 over a single clause. The Pentagon insisted on language authorizing Claude for “any lawful use” — an umbrella formulation that, in Anthropic’s reading, would permit deployment for domestic mass surveillance and for lethal targeting in fully autonomous weapons systems without meaningful human authorization [4]. Anthropic CEO Dario Amodei declined to accept these terms. He stated publicly that the company “cannot in good conscience accede to the Pentagon’s demands,” placing two specific prohibitions at the center of his position: Claude would not be used for surveillance of American citizens without judicial oversight, and it would not be used for autonomous lethal targeting without a human in the decision loop [5].

Defense Secretary Pete Hegseth issued Anthropic a deadline of 5:01 p.m. ET on February 28, 2026 to comply [6]. When no agreement was reached, President Trump posted on Truth Social ordering all federal agencies to “immediately cease” using Anthropic technology, with a six-month phase-out period granted only to agencies like DoD already operationally dependent on Claude [7]. On March 5, 2026, the Pentagon formally notified Anthropic of its designation as a Supply-Chain Risk to National Security — a legal designation normally reserved for foreign adversaries or compromised foreign vendors, and never previously applied to an American company [8]. Anthropic stated the designation was “not legally sound” and indicated it would challenge it in court [9].

As of March 9, 2026, Dario Amodei was reported to be back at the negotiating table with the DoD, with both sides seeking a resolution that would preserve some version of the relationship [10]. The legal challenge, the ban, and the counter-negotiations are proceeding simultaneously.

The Regulatory Framework at Issue

The normative backdrop for this dispute is DoD Directive 3000.09, “Autonomy in Weapon Systems,” last updated January 25, 2023. The 2023 revision was the first to specifically reference artificial intelligence as distinct from autonomy more broadly, incorporated the DoD AI Ethical Principles, established an Autonomous Weapons Systems Working Group, and required all autonomous weapon systems to allow “commanders and operators to exercise appropriate levels of human judgment over the use of force” [11]. This requirement for human judgment is precisely the control that the “any lawful use” clause would have effectively subordinated — or, at minimum, shifted from a vendor contractual restriction to the DoD’s own discretionary compliance.

Implementation of Directive 3000.09 has been notably opaque. Since the January 2023 update, the DoD has not publicly reported how many lethal autonomous weapons systems have been reviewed under the directive, nor confirmed whether the newly established Autonomous Weapons Systems Working Group has convened [12]. The FY2026 National Defense Authorization Act (NDAA) attempted to address this opacity legislatively: Section 1061 requires the Pentagon to notify congressional defense committees of any waivers of DODD 3000.09, including the affected systems, rationale, and duration; Section 1533 directs the Chief Digital and AI Officer to create a standardized department-wide framework for AI model governance, testing, and deployment [13][33]. These provisions reflect congressional recognition that self-regulatory enforcement of the directive had been insufficient, though the NDAA provisions themselves had not yet taken effect when the Anthropic dispute reached its peak.

The OpenAI Counteroffer and Its Contested Adequacy

Within hours of Trump’s order banning Anthropic technology, OpenAI reached an agreement to provide its AI models to the DoD on classified networks [14]. OpenAI CEO Sam Altman later acknowledged the negotiations were “definitely rushed” [15]. OpenAI publicly stated that its deal preserves two core prohibitions — against domestic mass surveillance and against lethal autonomy without human authorization — but framed these as commitments to comply with applicable law rather than as affirmative contractual prohibitions [16]. This is precisely the distinction MIT Technology Review identified as the reason OpenAI could reach agreement when Anthropic could not: OpenAI accepted the “lawful use” framing and committed to comply with law, while Anthropic sought to inscribe specific technical and ethical red lines as contractual terms that could not be waived by a lawful order [17].

Critics from the Electronic Frontier Foundation and The Intercept argued that OpenAI’s contract language contained insufficient specificity to prevent surveillance use or autonomous weapons deployment, relying instead on the same legal frameworks whose adequacy the dispute had called into question [18][19]. Under this pressure, OpenAI subsequently altered the terms of its Pentagon deal, though the final contractual language has not been publicly released [20]. Caitlin Kalinowski, who had been leading OpenAI’s robotics and hardware operations since November 2024, resigned on March 7, 2026, stating that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got” — and that the policy guardrails were “not sufficiently defined before OpenAI announced an agreement with the Pentagon” [21][22].


Security Analysis

The Structural Weakness of Contractual Guardrails

The Anthropic-Pentagon dispute makes visible a vulnerability that has been latent in AI governance since the emergence of frontier models: contractual acceptable use policies, while necessary, are insufficient as primary controls for preventing misuse at state scale. Anthropic’s refusal to accept unrestricted use was a corporate policy decision, enforced through negotiation rather than through a technical mechanism that would prevent prohibited uses regardless of contract terms. When a counterparty with enforcement authority — particularly a government with legal designation tools — applies sufficient pressure, the policy is the only thing standing between the AI capability and its prohibited application.

This is not a criticism of Anthropic’s position, which was principled and widely regarded by civil society and bipartisan congressional voices as correct [23][24][32]. It is an architectural observation. The model itself is separable from the policy. Once deployed on classified networks, as Claude had been since July 2025, technical enforcement of use restrictions becomes dependent on architecture choices — deployment configurations, API constraints, access controls — whose durability under government pressure is now an open question. Reports that the U.S. military continued using Claude for operational targeting in ongoing operations against Iran even after Trump’s ban was announced — a continuation permitted by the six-month phase-out provision — remain unconfirmed, but if accurate they would underscore that the separation between “model deployed” and “model governed” is not self-enforcing [25][26].
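
The architectural observation lends itself to a concrete sketch. Below is a minimal, illustrative enterprise-side policy enforcement point in Python: requests are checked against an internal deny-list of prohibited use categories before any vendor model is invoked. The category names, the metadata-based classifier, and the calling interface are hypothetical placeholders, not any vendor’s API; the design property they illustrate is that the deny decision lives inside the enterprise boundary and therefore survives changes to vendor contracts or usage terms.

```python
"""Minimal sketch of an enterprise-side policy enforcement point (PEP).

Illustrative only: the category names, the metadata-based classifier,
and the calling interface are hypothetical, not any vendor's API. The
deny decision is made inside the enterprise boundary, so it survives
changes to vendor contracts or usage terms.
"""
from dataclasses import dataclass

# Prohibited-use categories defined by enterprise policy, independent of
# any vendor's acceptable use policy.
PROHIBITED_CATEGORIES = {"mass_surveillance", "autonomous_targeting"}

@dataclass
class PolicyDecision:
    allowed: bool
    category: str
    reason: str

def classify_use_case(request_metadata: dict) -> str:
    # In practice this would come from a tagged workflow registry or a
    # trained classifier; here we read a declared category from metadata.
    return request_metadata.get("use_case_category", "unclassified")

def enforce(request_metadata: dict) -> PolicyDecision:
    category = classify_use_case(request_metadata)
    if category in PROHIBITED_CATEGORIES:
        return PolicyDecision(False, category, "blocked by enterprise AUP")
    if category == "unclassified":
        # Fail closed: workflows that have not passed governance review
        # are never forwarded to a model.
        return PolicyDecision(False, category, "use case not yet reviewed")
    return PolicyDecision(True, category, "permitted")

if __name__ == "__main__":
    print(enforce({"use_case_category": "autonomous_targeting"}))
    print(enforce({"use_case_category": "contract_summarization"}))
```

The fail-closed default for unreviewed workflows is the salient design choice: a use case that has not passed governance review is never forwarded, regardless of what the vendor’s terms would permit.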

The Supply Chain Risk Designation as a Governance Weapon

The Pentagon’s designation of Anthropic as a Supply-Chain Risk to National Security warrants careful analysis independent of the substantive dispute over autonomous weapons use. The supply chain risk designation is a legal instrument derived from authorities including the Defense Federal Acquisition Regulation Supplement (DFARS) and National Security memoranda, designed to protect the defense industrial base from compromised foreign vendors and components [27]. Its application against an American company in response to that company’s refusal to remove safety restrictions from its AI products is, as legal analysis from Mayer Brown noted, unprecedented — and raises the immediate question of whether other enterprises that procure AI under government contracts now face the same coercive mechanism [27].

For enterprise AI security teams, the designation creates two distinct risk categories. The first is direct: any enterprise with DoD contracts that had integrated Claude into workflows requiring contractor supply chain compliance faces abrupt operational disruption, because the designation bars military contractors and suppliers from doing business with the designated company [8]. The second is precedential: if a government can designate an AI vendor as a supply chain risk not because of a technical security failure but because of a policy disagreement over acceptable use, the concept of AI vendor supply chain risk has been expanded into a governance instrument that operates independently of any technical assessment of the vendor’s security posture.

Opacity in DODD 3000.09 Implementation and the Oversight Gap

Implementation of the directive establishing human judgment requirements for autonomous weapons has, by all available evidence, been poorly disclosed, and its enforcement has not been publicly documented. DefenseScoop reported as early as October 2023 that the DoD was “notably quiet” on how many autonomous weapons systems had been reviewed under the updated directive and on whether the Autonomous Weapons Systems Working Group had met [12]. The dispute over Anthropic’s contract makes clear that this opacity extends to the contractual implementation of 3000.09 requirements. The Pentagon simultaneously asserted (1) compliance with DODD 3000.09’s human judgment requirements and (2) the right to apply Claude to “any lawful use,” including autonomous weapons, without the guardrails Anthropic insisted on. These positions sit uneasily together: if the directive’s human judgment requirement were already a sufficient legal constraint, removing the vendor’s contractual guardrails would change nothing in practice; the Pentagon’s insistence on removing them therefore suggests the directive was not the operative constraint. The FY2026 NDAA provisions requiring congressional notification of 3000.09 waivers represent an attempt to close this gap, but they postdate the dispute and their effectiveness depends on the Pentagon’s good-faith compliance with reporting requirements for which it has provided little public transparency to date [12].

Bipartisan congressional leadership — Senate Armed Services Committee Chair Wicker and Ranking Member Reed, joined by Appropriations defense leaders McConnell and Coons — sent a joint letter to both Anthropic and the Pentagon stating that “the issue of ‘lawful use’ requires additional work by all stakeholders,” an acknowledgment that the existing legal framework did not adequately address the dispute [23]. A senior DoD official publicly countered that AI contract restrictions could “threaten military missions,” framing vendor guardrails as operational liabilities rather than governance assets [28]. These positions are irreconcilable, and as of this writing no legislative or regulatory resolution has been reached.

Enterprise Risk Surface: What This Means Beyond the Defense Sector

The implications of this dispute extend well beyond organizations with DoD contracts. Enterprise AI security practitioners should note several dynamics that the Anthropic-Pentagon episode has forced into the open.

First, AI vendor usage policies are subject to revision under government pressure in ways that standard software vendor SLAs are not. Anthropic did not capitulate, but the designation and ban demonstrate that a government counterparty can impose severe operational costs on a vendor that holds its position — costs that may not be sustainable for vendors with smaller financial reserves or greater dependence on government contracts. OpenAI’s willingness to reach agreement within hours, producing contract language that Kalinowski, one of its senior technical leaders, resigned over [21][22], illustrates how competitive pressure in the government AI market can erode the durability of vendor governance commitments.

Second, the question of what constitutes a “lawful use” in AI contracts requires attention from legal and compliance teams. The DoD’s position — that any use not affirmatively prohibited by existing law is a lawful use the vendor must permit — sets a floor that provides little substantive governance for novel AI applications. Enterprises procuring AI for regulated industries should examine whether their own contract terms rely on the same “lawful use” framing or whether they have negotiated specific technical and ethical restrictions with AI vendors.

Third, the deployment of Claude on classified networks since July 2025, and, if confirmed, its reported continued use for operational military targeting during the phase-out period [25][26], illustrates that the operational integration of AI into high-stakes workflows can outpace the governance frameworks intended to constrain it. The same dynamic — AI capability deployed faster than governing policy can follow — appears in enterprise contexts regularly and carries analogous governance risks even when the stakes are lower than autonomous weapons use.


Recommendations

Immediate Actions

Enterprise AI security teams should audit their current AI vendor agreements to determine whether those agreements contain “any lawful use” language or similarly broad permissive clauses rather than specific technical and ethical restrictions on prohibited use cases. Any agreements that lack affirmative restrictions on the highest-risk use categories relevant to the organization’s sector should be flagged for legal review. Organizations with DoD contracts should assess their exposure to the supply chain risk designation’s operational effects on Anthropic and any other AI vendor that may face similar government action in the near term.
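
A first-pass textual scan can jump-start such an audit by surfacing candidate clauses for legal review. The Python sketch below searches agreement text for broadly permissive phrasings; the pattern list is an assumption of this sketch rather than a legal taxonomy, and flagged language is an input to counsel’s review, not a conclusion.

```python
"""Illustrative first-pass scan of AI vendor agreements for broadly
permissive use language. The phrase patterns are an assumption of this
sketch; anything flagged still requires legal review."""
import re
from pathlib import Path

BROAD_PERMISSIVE_PATTERNS = [
    r"any\s+lawful\s+(use|purpose)",
    r"all\s+lawful\s+purposes",
    r"any\s+purpose\s+not\s+prohibited\s+by\s+(applicable\s+)?law",
]

def scan_agreement(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, matched_text) pairs for counsel's review."""
    findings = []
    for lineno, line in enumerate(
        path.read_text(errors="ignore").splitlines(), start=1
    ):
        for pattern in BROAD_PERMISSIVE_PATTERNS:
            match = re.search(pattern, line, re.IGNORECASE)
            if match:
                findings.append((lineno, match.group(0)))
    return findings

if __name__ == "__main__":
    # Hypothetical layout: plain-text exports of agreements in ./contracts
    for contract in Path("contracts").glob("*.txt"):
        for lineno, text in scan_agreement(contract):
            print(f"{contract.name}:{lineno}: flagged clause: {text!r}")
```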

Organizations that have integrated AI into workflows governed by sector-specific compliance frameworks — healthcare, financial services, critical infrastructure — should verify that those compliance frameworks have been explicitly mapped to their AI vendor contracts. The “lawful use” gap identified in the Anthropic dispute is not unique to DoD: any compliance framework that relies on AI vendors’ own usage policies rather than contractually specified technical restrictions faces the same structural vulnerability.
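
That mapping is easiest to verify when it is maintained as data rather than prose. The toy sketch below reports any in-scope compliance control with no contractual anchor; the control identifiers and clause references are invented for illustration.

```python
"""Toy coverage check: every in-scope compliance control should map to a
specific contract clause. Control IDs and clause references are invented
for illustration."""

# Controls in scope for the AI-integrated workflow (hypothetical IDs).
REQUIRED_CONTROLS = {"HIPAA-164.312", "PCI-DSS-12.8.2", "NERC-CIP-013-1"}

# Where each control is anchored in the vendor agreement (hypothetical).
CONTRACT_CLAUSE_MAP = {
    "HIPAA-164.312": "Exhibit C §4 (technical safeguards)",
    "PCI-DSS-12.8.2": "MSA §9.2 (service provider responsibilities)",
}

unmapped = sorted(REQUIRED_CONTROLS - CONTRACT_CLAUSE_MAP.keys())
if unmapped:
    print("Controls with no contractual anchor:", ", ".join(unmapped))
else:
    print("All in-scope controls are mapped to contract clauses.")
```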

Short-Term Mitigations

Enterprises should reduce single-vendor dependence for AI capabilities critical to their operations. The potential for abrupt operational disruption to DoD contractors that have integrated Claude into compliance-relevant workflows illustrates the concentration risk inherent in deep integration with a single AI vendor. Architectural patterns that allow substitution between models from different vendors — particularly for applications where continuity of service is critical — reduce exposure to the kind of government action or vendor dispute that can produce sudden access loss.
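
One such pattern is a thin abstraction layer that routes requests through a common interface and fails over between vendors when one becomes unavailable. The sketch below is a minimal illustration, with placeholder provider classes standing in for real vendor SDK wrappers.

```python
"""Minimal sketch of a vendor-abstraction layer with failover. The
provider classes are placeholders; in production each would wrap a real
vendor SDK behind this common interface."""
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class PrimaryProvider(ChatProvider):
    """Stands in for the preferred vendor's client."""
    def complete(self, prompt: str) -> str:
        # Simulate sudden access loss (contract dispute, designation, ban).
        raise ConnectionError("vendor access revoked")

class FallbackProvider(ChatProvider):
    """Stands in for an alternate vendor's client."""
    def complete(self, prompt: str) -> str:
        return f"[fallback] response to: {prompt}"

def complete_with_failover(providers: list[ChatProvider], prompt: str) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as err:  # real code would narrow and log this
            last_error = err
    raise RuntimeError("all providers failed") from last_error

if __name__ == "__main__":
    chain = [PrimaryProvider(), FallbackProvider()]
    print(complete_with_failover(chain, "summarize the incident report"))
```

In production the failover policy would also need to account for data residency, capability differences between models, and prompt portability, none of which this sketch addresses.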

Governance frameworks for AI use should be maintained at the enterprise level, independent of vendor usage policies. This means establishing internal acceptable use policies for AI that are more specific than vendor terms, with internal technical controls — access restrictions, output review mechanisms, deployment architecture constraints — that enforce those policies regardless of changes to vendor contracts or usage terms. The principle that enterprise AI governance cannot be fully delegated to vendor guardrails is directly affirmed by this dispute.
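
For output review specifically, one workable shape is a quarantine step: responses generated in designated high-risk workflows are held for human sign-off rather than released automatically. The sketch below is a simplified illustration; the workflow names and the in-memory queue are placeholders for a durable review pipeline.

```python
"""Sketch of an output-review control: responses in designated high-risk
workflows are quarantined for human sign-off before release. Workflow
names and the in-memory queue are simplified placeholders."""
import queue
from typing import Optional

REVIEW_REQUIRED_WORKFLOWS = {"targeting_support", "persons_data_analysis"}

review_queue: queue.Queue = queue.Queue()

def release_or_hold(workflow: str, response_text: str) -> Optional[str]:
    """Return the response if releasable; otherwise enqueue for review."""
    if workflow in REVIEW_REQUIRED_WORKFLOWS:
        review_queue.put({"workflow": workflow, "text": response_text})
        return None  # held pending human sign-off
    return response_text

if __name__ == "__main__":
    held = release_or_hold("targeting_support", "model output ...")
    assert held is None and review_queue.qsize() == 1
    released = release_or_hold("marketing_copy", "model output ...")
    assert released is not None
```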

Enterprises should also begin tracking the legislative and regulatory trajectory of AI governance more actively than current security program maturity models typically require. The FY2026 NDAA provisions on AI governance are likely early iterations of a body of regulation that will impose more specific requirements on AI procurement and deployment in regulated industries. The Anthropic dispute has demonstrated that congressional and bipartisan political appetite for AI governance legislation is substantial; enterprises that are already ahead of this curve will face lower compliance adjustment costs.

Strategic Considerations

The governance crisis exposed by the Pentagon-Anthropic dispute reflects a genuine structural gap: there is no legally binding international or domestic framework that affirmatively prohibits AI use for certain categories of autonomous weapons or surveillance, leaving these restrictions as matters for private contract negotiation [27][29]. The Center for American Progress, in analysis published in March 2026, explicitly called on Congress to pass legislation codifying prohibitions on AI use for domestic mass surveillance and fully autonomous weapons — arguing that these restrictions must be established in law, not left to negotiation [29]. Oxford and Chatham House experts characterized the dispute as exposing “governance failures” with consequences extending well beyond Washington [30][31]. Security leaders at enterprises dependent on frontier AI should engage in industry working groups, public comment processes, and standards development activities that will shape the emerging legislative and regulatory landscape.


CSA Resource Alignment

This incident maps directly to several foundational Cloud Security Alliance frameworks. The CSA AI Controls Matrix (AICM) v1.0 provides specific governance controls relevant to the risks exposed here. Under the AICM’s AI Governance domain, enterprise organizations are expected to establish policies governing acceptable AI use that are independent of vendor terms — the precise control gap the Anthropic dispute has highlighted. The AICM’s Supply Chain Security domain addresses vendor risk assessment, but the supply chain risk designation episode argues for extending that assessment to include the vendor’s political and legal exposure to government pressure, not only its technical security posture.

The CSA MAESTRO framework for agentic AI threat modeling addresses the risk that AI capabilities deployed in operational workflows may be used beyond their intended and governed scope. If confirmed, reports of continued Claude use for operational military targeting during a phase-out period following a ban [25] would represent exactly the class of agentic deployment risk MAESTRO is designed to surface: an AI system integrated deeply enough into operational workflows that the organizational cost of governance enforcement creates pressure to defer or ignore it. MAESTRO Layer 5 (deployment and integration) controls are directly applicable to the challenge of ensuring that AI use restrictions are enforced at the architectural level, not only through policy.

The CSA Cloud Controls Matrix (CCM) provides relevant guidance under its Threat and Vulnerability Management (TVM) and Supply Chain Management (STA) domains. Enterprises should verify that their CCM-aligned control libraries account for the risk of AI vendor policy change driven by external government pressure, and that supply chain assessments include evaluation of vendor governance commitments’ resilience to regulatory and legal coercion.

CSA’s AI Organizational Responsibilities guidance and the broader CSA STAR for AI program offer organizations structured mechanisms for assessing AI vendors’ governance frameworks against independent criteria rather than relying on vendor self-attestation. In a market environment where vendor usage policies can be abandoned under government pressure or rushed through in competitive haste — as the OpenAI deal illustrated — independent third-party assessment of vendor AI governance has significantly greater value than vendor-provided documentation alone.


References

[1] NBC News, “Anthropic Says Pentagon Declared It a National Security Risk,” March 2026. https://www.nbcnews.com/tech/tech-news/anthropic-says-pentagon-declared-national-security-risk-rcna262013

[2] CBS News, “Pentagon Designates Anthropic as Supply Chain Risk; Feud Over AI Guardrails,” March 2026. https://www.cbsnews.com/news/pentagon-anthropic-supply-chain-risk-feud-ai-guardrails/

[3] Fortune, “OpenAI Pentagon Deal; Anthropic Designated Supply Chain Risk — Unprecedented Action May Damage Its Growth,” February 28, 2026. https://fortune.com/2026/02/28/openai-pentagon-deal-anthropic-designated-supply-chain-risk-unprecedented-action-damage-its-growth/

[4] CNN Business, “Hegseth, Anthropic, AI Military Standoff,” February 24, 2026. https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei

[5] PBS NewsHour, “Anthropic ‘Cannot in Good Conscience’ Accede to Pentagon’s Demands,” February 2026. https://www.pbs.org/newshour/nation/anthropic-cannot-in-good-conscience-accede-to-pentagons-demands-ceo-says

[6] CNN Business, “Anthropic Pentagon Deadline,” February 27, 2026. https://www.cnn.com/2026/02/27/tech/anthropic-pentagon-deadline

[7] Bloomberg, “Trump Orders U.S. Government to Drop Anthropic After Pentagon Feud,” February 27, 2026. https://www.bloomberg.com/news/articles/2026-02-27/trump-orders-us-government-to-drop-anthropic-after-pentagon-feud

[8] Washington Post, “Trump Orders Agencies to Drop Anthropic,” February 27, 2026. https://www.washingtonpost.com/technology/2026/02/27/trump-anthropic-claude-drop/

[9] CNBC, “Anthropic Pentagon AI: Claude, Iran, Supply Chain Designation,” March 5, 2026. https://www.cnbc.com/2026/03/05/anthropic-pentagon-ai-claude-iran.html

[10] TechCrunch, “Anthropic CEO Dario Amodei Could Still Be Trying to Make a Deal with Pentagon,” March 5, 2026. https://techcrunch.com/2026/03/05/anthropic-ceo-dario-amodei-could-still-be-trying-to-make-a-deal-with-pentagon/

[11] U.S. Department of Defense, “DoD Announces Update to DoD Directive 3000.09, Autonomy in Weapon Systems,” January 2023. https://www.war.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/

[12] DefenseScoop, “After 3000.09 Update, DoD Stays Quiet on Lethal Autonomous Weapon Reviews,” October 27, 2023. https://defensescoop.com/2023/10/27/after-3000-09-update-dod-stays-quiet-on-lethal-autonomous-weapon-reviews/

[13] K&L Gates, “Artificial Intelligence Provisions in the Fiscal Year 2026 House and Senate National Defense Authorization Acts,” September 2025. https://www.klgates.com/Artificial-Intelligence-Provisions-in-the-Fiscal-Year-2026-House-and-Senate-National-Defense-Authorization-Acts-9-11-2025 [NOTE: Pre-passage analysis of House and Senate NDAA bills; section numbers cited in text reflect enacted law and should be verified against post-passage sources including [33].]

[14] CNBC, “OpenAI Strikes Deal with Pentagon Hours After Rival Anthropic Was Blacklisted by Trump,” February 27, 2026. https://www.cnbc.com/2026/02/27/openai-strikes-deal-with-pentagon-hours-after-rival-anthropic-was-blacklisted-by-trump.html

[15] CNBC, “OpenAI, Sam Altman, Pentagon Deal Amended, Surveillance Limits,” March 3, 2026. https://www.cnbc.com/2026/03/03/openai-sam-altman-pentagon-deal-amended-surveillance-limits.html

[16] OpenAI, “Our Agreement with the Department of Defense.” https://openai.com/index/our-agreement-with-the-department-of-war/

[17] MIT Technology Review, “OpenAI’s Compromise with the Pentagon Is What Anthropic Feared,” March 2, 2026. https://www.technologyreview.com/2026/03/02/1133850/openais-compromise-with-the-pentagon-is-what-anthropic-feared/

[18] Electronic Frontier Foundation, “Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI-Powered Surveillance,” March 2026. https://www.eff.org/deeplinks/2026/03/weasel-words-openais-pentagon-deal-wont-stop-ai-powered-surveillance

[19] The Intercept, “OpenAI on Surveillance and Autonomous Killings,” March 8, 2026. https://theintercept.com/2026/03/08/openai-anthropic-military-contract-ethics-surveillance/

[20] NBC News, “OpenAI Alters Deal with Pentagon After Critics Sound Alarm on Surveillance,” March 2026. https://www.nbcnews.com/tech/tech-news/openai-alters-deal-pentagon-critics-sound-alarm-surveillance-rcna261357

[21] Fortune, “OpenAI Robotics Leader Caitlin Kalinowski Resigns over Pentagon Surveillance, Autonomous Weapons,” March 7, 2026. https://fortune.com/2026/03/07/openai-robotics-leader-caitlin-kalinowski-resignation-pentagon-surveillance-autonomous-weapons-anthropic/

[22] NPR, “OpenAI Robotics Lead Resigns, Citing Pentagon AI Guardrails,” March 8, 2026. https://www.npr.org/2026/03/08/nx-s1-5741779/openai-resigns-ai-pentagon-guardrails-military

[23] Axios, “Senate Defense Leaders Intervene in Anthropic-Pentagon Standoff,” February 27, 2026. https://www.axios.com/2026/02/27/senate-defense-anthropic-pentagon-ai

[24] Washington Technology, “Private Sector and Former Military Leaders Urge Congress to Intervene in Pentagon-Anthropic Dispute,” March 2026. https://www.washingtontechnology.com/contracts/2026/03/private-sector-former-military-leaders-urge-congress-intervene-pentagon-anthropic-dispute/411949/

[25] CBS News, “Anthropic Claude AI Used in Iran War,” March 2026. https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/ [NOTE: DoD has not officially confirmed the specific operational role of Claude in these operations; reported by multiple outlets including Washington Post and CBS News but should be treated as reported-unconfirmed pending official disclosure.]

[26] Washington Post, “Anthropic AI in Iran Campaign,” March 4, 2026. https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/

[27] Mayer Brown, “Pentagon Designates Anthropic a Supply-Chain Risk: What Government Contractors Need to Know,” March 2026. https://www.mayerbrown.com/en/insights/publications/2026/03/pentagon-designates-anthropic-a-supply-chain-risk-what-government-contractors-need-to-know

[28] U.S. News & World Report, “AI Contract Restrictions Could Threaten Military Missions, U.S. Official Says,” March 3, 2026. https://www.usnews.com/news/top-news/articles/2026-03-03/ai-contract-restrictions-could-threaten-military-missions-us-official-says

[29] Center for American Progress, “The Department of Defense’s Conflict with Anthropic and Deal with OpenAI Are a Call for Congress to Act,” March 2026. https://www.americanprogress.org/article/the-department-of-defenses-conflict-with-anthropic-and-deal-with-openai-are-a-call-for-congress-to-act/

[30] Chatham House, “Anthropic’s Feud with the Pentagon Reveals the Limits of AI Governance,” March 2026. https://www.chathamhouse.org/2026/03/anthropics-feud-pentagon-reveals-limits-ai-governance

[31] University of Oxford, “Expert Comment: The Pentagon-Anthropic Dispute Reflects Governance Failures with Consequences That Extend Well Beyond Washington,” March 6, 2026. https://www.ox.ac.uk/news/2026-03-06-expert-comment-pentagon-anthropic-dispute-reflects-governance-failures-consequences

[32] Electronic Frontier Foundation, “Tech Companies Shouldn’t Be Bullied into Doing Surveillance,” February 24, 2026. https://www.eff.org/deeplinks/2026/02/tech-companies-shouldnt-be-bullied-doing-surveillance

[33] Brennan Center for Justice, “The Good, the Bad, and the Really Weird AI Provisions in the Annual Defense Policy Bill,” December 15, 2025. https://www.brennancenter.org/our-work/analysis-opinion/good-bad-and-really-weird-ai-provisions-annual-defense-policy-bill
