In a surprising development in the world of artificial intelligence (AI) and national defense, the AI company Anthropic submitted a proposal to compete in a major Pentagon technology challenge, even as it engaged in a public dispute with the U.S. Department of Defense over how the military could use its technology.
The Pentagon launched the high‑stakes competition, worth about $100 million, to develop software and systems for voice‑controlled autonomous drone swarms. These are groups of drones that can operate together, respond to commands, and move as a coordinated team.
What Anthropic Tried to Do — Even During a Dispute
Anthropic, known for building advanced AI tools like its Claude model, took part in the Pentagon’s call for ideas by submitting a proposal focused on using its AI technology to help coordinate drone swarms. The goal was to let the AI take a commander’s spoken intent — such as “move to grid point Alpha” — and translate that into instructions that could guide multiple drones.
Importantly, people familiar with the submission said that unlike some defense systems that make automatic targeting decisions, Anthropic's idea did not involve letting the AI make life-or-death choices on its own. The company said a human would always be able to oversee what the system was doing and stop it if needed.
This approach — combining advanced automation with human control — was intended to stay within Anthropic’s own ethical limits, sometimes called “red lines.” These are rules the company has set for itself about how and where its AI should be used.
What the Pentagon Wanted — And Why Talks Broke Down
At the same time Anthropic entered the drone swarm contest, the company held tense negotiations with the Pentagon over how the military could use its AI technology. The Defense Department sought assurances that it could use advanced AI broadly for “all lawful purposes,” a phrase covering many types of military operations.
Anthropic pushed back. The company refused to let its technology power mass domestic spying or fully autonomous weapons that could select and fire on targets independently. Anthropic’s leaders argued that AI is not yet reliable enough to make such decisions without human oversight and raised concerns about civil liberties.
Defense officials, on the other hand, said they had no intention of pursuing such uses, but they still wanted broad flexibility to apply the AI in many different combat and defense scenarios. The disagreement grew sharper over time, with both sides giving differing accounts of what was or was not acceptable.
A Major Rift: The Pentagon’s Reaction
Ultimately, the dispute did not end in a deal. The Defense Secretary took the rare step of labeling Anthropic a "supply chain risk," a designation usually reserved for companies with ties to foreign adversaries, not U.S. tech firms. As a result, Pentagon contractors and partners were ordered not to do commercial business with Anthropic.
The designation also followed orders from the U.S. government directing federal agencies to stop using Anthropic's AI technology under existing contracts. The move effectively shut Anthropic out of future Pentagon work unless the situation changes.
Anthropic has said it will challenge the designation in court, arguing that the label is unprecedented and legally unwarranted. The company maintains that it can continue working with other companies and customers outside of defense contracts.
Competition Continues Without Anthropic
With Anthropic’s proposal not among those selected for the drone swarm contest, other teams moved forward. Notably, the Pentagon chose SpaceX and xAI to compete in the technology challenge. In addition, officials selected several defense contractor teams that work with rival AI companies.
One of the selected proposals involved technology from a company partnered with OpenAI, focused on using AI to convert spoken military commands into digital instructions, a role similar to the one Anthropic had pitched. OpenAI has since announced its own agreement with the Defense Department to supply AI tools under terms that keep humans fully responsible for any use of force, including by autonomous systems.






