Anthropic apologized after a leaked internal message criticized government decisions. The issue emerged after the Pentagon labeled the company a supply chain risk, raising questions about the use of its AI tools in military-related work. The situation gained more attention after another AI developer signed a Pentagon deal. Anthropic later said it still wants to cooperate with the US Department of Defense.
Anthropic Apologizes After Internal Memo Becomes Public
The company confirmed that an internal message, written during a tense moment, had been leaked to the press. Employees drafted the memo shortly after the US government announced plans to remove the company’s technology from federal systems, an announcement that triggered strong reactions within the organization.
Company leadership said the message reflected the emotions of a difficult day rather than a carefully considered statement. Anthropic apologized for the memo’s tone and clarified that it did not represent the company’s official views. The company also noted that the message had been written several days earlier and did not reflect the most recent discussions with government officials.
The company issued the apology to show that it still believes cooperation with government institutions is important, especially for national security and the responsible use of artificial intelligence.
Anthropic also emphasized that AI tools can support soldiers and national security professionals in their work. To demonstrate its willingness to cooperate, the company offered its AI models to the Department of Defense at a nominal cost and said its engineers could provide technical support for government operations.
Pentagon Labels Anthropic a Supply Chain Risk
The conflict escalated when the US Department of Defense formally classified Anthropic as a supply chain risk on March 4, 2026. Such a designation means the government believes a supplier could create security concerns if its products are used in sensitive defense systems.
Anthropic said the classification has limited real impact. The company explained that the restriction applies only when organizations use its AI system directly in contracts involving the Department of Defense.
This means organizations that work with the military can still use the company’s AI tools for other projects that are not directly tied to Pentagon contracts. Anthropic also pointed to a US law that requires authorities to use the least restrictive measures when addressing potential supply chain risks.
The company said it plans to challenge the designation in court because it believes the classification may not be legally justified. At the same time, Anthropic confirmed it has been holding discussions with Pentagon officials to explore possible cooperation or transition options under existing rules.
Controversy Grows After Pentagon AI Agreement
The debate around Anthropic intensified after another major artificial intelligence developer signed an agreement with the United States Department of Defense, finalizing the deal shortly after the Pentagon labeled Anthropic a supply chain risk. The agreement drew attention because it sets specific rules on how the military can use artificial intelligence technology.
Under the contract terms, the AI system cannot independently control autonomous weapons when laws or government policies require human oversight. The agreement also prevents the system from making certain high-risk decisions without human approval. These rules ensure that humans remain responsible for critical military actions and that artificial intelligence systems do not act without supervision.
The contract also sets limits on surveillance. It prevents the technology from carrying out unlimited monitoring of private information belonging to people in the United States. Policymakers introduced these safeguards to prevent misuse of artificial intelligence and maintain ethical boundaries when authorities use the technology for national security purposes.
Earlier reports suggested that Anthropic had declined to support certain uses of AI, including large-scale surveillance and fully autonomous weapons, and that this policy difference caused previous negotiations with the Pentagon to fail. The situation later sparked online debate over whether AI companies should cooperate with defense organizations.
