The Conscience Clause
Anthropic-Pentagon Standoff Reveals Who Governs AI
Originally published on Forbes, March 4, 2026.
The United States federal government designated Anthropic, one of its AI contractors, a national security risk, not because it had done anything illegal, but because it refused to remove limits on its own technology. The standoff between Anthropic and the Department of Defense compressed a decade of unsettled questions about AI governance into roughly 72 hours. It also answered one of the most consequential of those questions, at least for now: when a company draws an ethical line, and the government demands it be erased, who wins?
Anthropic, The Pentagon And The Red Lines
The sequence of events is worth recounting precisely. In November 2024, Anthropic and Palantir announced a partnership to bring Claude to U.S. government intelligence and defense operations. In July 2025, the Department of Defense awarded Anthropic a two-year prototype agreement with a $200 million ceiling. Everything appeared to be moving in a familiar direction: Silicon Valley idealism gradually accommodating the demands of government contracts. Then the Pentagon pushed further.
The Department’s request was specific: Anthropic should remove the restrictions that prevent Claude from being used for mass domestic surveillance or in fully autonomous weapons systems. Anthropic refused. In a public statement on February 26, CEO Dario Amodei wrote that “regardless, these threats do not change our position: we cannot in good conscience accede to their request.” The phrase “in good conscience” was the hinge of the entire dispute.
The administration’s response was swift and escalatory. Secretary of Defense Pete Hegseth, posting on X, characterized Anthropic as delivering “a master class in arrogance and betrayal” and directed the Department to designate the company a Supply Chain Risk, a designation that, if applied broadly, could effectively bar Anthropic from doing business with any U.S. military contractor, cloud provider, or enterprise customer with government exposure. President Trump, on Truth Social, ordered federal agencies to end all use of Anthropic’s technology, with a six-month phase-out period.
Risks for Anthropic: Moral, Commercial And Existential
Anthropic faces three distinct but deeply intertwined risks. The moral risk is compromising the company’s core purpose, which is built around the premise that AI should be safe and beneficial. The commercial risk is real but manageable; $200 million is not an existential number for a company at Anthropic’s scale. The existential risk is the Supply Chain Risk designation itself, with its potential to cascade through investors, cloud partnerships, enterprise contracts and IPO plans in ways that would be difficult to contain.
By the evening of February 27, the plot had grown stranger still. OpenAI’s Sam Altman announced via X that his company had reached a deal with the Department of War to deploy its models on the Department’s classified networks, including agreed prohibitions against domestic mass surveillance and autonomous weapons systems. In other words, OpenAI had, within hours of Anthropic’s standoff reaching its peak, signed a deal on essentially the terms Anthropic was holding out for. The Wall Street Journal simultaneously reported that government agencies had raised alarms about Grok, xAI’s chatbot, which had quietly been given access to classified networks under a low-drama, unrestricted agreement signed earlier in the year. The Pentagon had been shopping for compliance, and the market had obliged.
The backlash from the public and from workers inside the AI industry was not negligible. A boycott campaign against ChatGPT, organized under the banner that OpenAI had “taken Trump’s killer robot deal,” claimed over 2.5 million participants. An open letter signed by employees of Google and OpenAI, titled “We Will Not Be Divided,” urged leadership at both companies to hold the same lines Anthropic had drawn, specifically on domestic surveillance and autonomous killing without human oversight.
The substantive question underneath this theater is deceptively simple. Both parties in the Anthropic-Pentagon standoff had defensible positions. The Department’s argument, that no private company should be able to dictate terms on national security matters to the government of the United States, reflects a coherent principle about democratic accountability. Anthropic’s argument, that it cannot in good conscience strip safeguards designed to protect civilians, reflects an equally coherent principle about corporate responsibility. The problem is that these two positions are not reconcilable through negotiation alone. They require a legal framework that does not yet exist.
AI Governance Has No Law: Why Congress Must Act
The proposed resolution that emerged from the Altman-DoW agreement hinged on two words: “lawful usage.” But that formulation only defers the difficulty. There are currently no comprehensive federal privacy laws governing AI-enabled surveillance. No AI regulation defines what autonomous weapons systems may or may not do. The phrase “lawful usage” is a placeholder for a political process that has not happened. As I argued in Forbes this January, 2026 will be the year AI becomes a visible political issue, migrating from policy white papers into electoral campaigns. The Anthropic standoff has accelerated that timeline.
The answer to who should decide cannot be the companies themselves. Corporate terms of service are not a substitute for democratic governance, and the critics who accused Anthropic of attempting to assert veto power over military operations were not entirely wrong, even if the specific uses being demanded were genuinely alarming. The answer also cannot be an executive branch acting through financial coercion rather than law. The Hyperdimensional newsletter put it plainly in the days following the standoff: a government that will treat a company like a foreign adversary simply for expressing a position about its own products has assaulted something fundamental to a functioning republic.
What is needed, and what has been conspicuously absent, is Congress. The questions at stake here, about the conditions under which AI can be used for surveillance, about what level of human oversight is required before a weapons system can act, about what rights citizens retain against AI-enabled state power, are not technical questions. They are political and constitutional ones. They belong in legislation, not in corporate usage policies or presidential Truth Social posts.
The Anthropic case will likely be remembered as a clarifying moment, regardless of how the specific contract dispute ultimately resolves. It demonstrated that the absence of law does not mean the absence of conflict. It only means the conflict gets decided by whoever has the most leverage at a given moment. That is not governance. It is improvisation with very high stakes.

Re: What is needed, and what has been conspicuously absent, is Congress.
What we need is a 'functioning' Congress. I agree; it's been conspicuously absent for some time now.
"Both parties in the Anthropic-Pentagon standoff had defensible positions...The problem is that these two positions are not reconcilable through negotiation alone."
The problem is the impolite subtext: the DOD leaders claim they want to use AI for "all lawful purposes," yet the previous week they were still persecuting a senator and veteran for saying that soldiers shouldn't follow unlawful orders, and the next day they held press conferences mocking the very concept of rules of engagement.
The positions aren't currently reconcilable because, though Anthropic apparently cannot say it directly, the current DOD leadership can't be trusted to follow the law and fights any effort to constrain it to the law. What they are likely demanding from Anthropic is a special version of its AI that will follow *unlawful* orders.