US military’s use of Anthropic’s Claude AI in Venezuela mission shocks tech and defence experts


A new report says U.S. military forces used a powerful artificial intelligence system called Claude during the operation that captured former Venezuelan President Nicolás Maduro earlier this year. The Wall Street Journal first reported the development, and several news agencies, including Reuters, later confirmed it, citing people familiar with the matter.

This marks a rare case of a commercial AI tool being used in a classified military mission. The report offers new details on how modern technology like AI is becoming part of top-level defence work. It shows that the military is using such tools even in dangerous operations carried out far from U.S. territory.

A Look at Claude and the Military Operation

Claude is an artificial intelligence system built by the U.S. tech company Anthropic. It is designed to answer questions, summarise information, analyse text, and help with complex tasks. Normally, Claude is used for business and research work.


According to the reports:

  • The Pentagon (the U.S. Department of Defense) used Claude in the U.S. raid that captured Nicolás Maduro, the former president of Venezuela.
  • Palantir Technologies provided Claude to the military through its partnership with Anthropic, using platforms already widely used in defence and law enforcement.
  • Officials have not fully explained Claude’s exact role, but they confirmed that the AI supported planning and operational functions during the mission.

The raid took place in early January. U.S. special forces entered Venezuelan territory, captured Nicolás Maduro, and transported him to the United States, where he is now facing major drug-trafficking charges in New York.

The mission received wide coverage around the world. Many described it as bold and unexpected. The new reporting stands out because it reveals how AI technology was used behind the scenes. The operation involved intelligence work, logistics, and real-time decisions made under pressure.

Rules, Partnerships, and Ethical Questions

One of the most striking parts of the report highlights a contradiction between how Anthropic says its AI should be used and how the mission used it.

Anthropic publicly states that Claude cannot support violence, help design weapons, or carry out surveillance. These rules form part of the company’s core usage policies and aim to keep its technology safe and ethical.


Yet, the Pentagon deployed Claude in a mission that included combat and violence:

  • Claude was used within a classified military network, meaning government users could run it on systems far more secure than those available to ordinary customers.
  • Leading AI companies such as Anthropic and OpenAI have been asked by the Pentagon to make their systems available for classified military tasks with fewer restrictions.
  • Anthropic is currently the only major AI developer whose technology is available on classified military networks through third-party partnerships.

Concerns over how Claude’s technology could be used have led to tensions between Anthropic and U.S. officials. Some officials within the Pentagon have pushed for looser limits on artificial intelligence use, especially for missions linked to national security.

For its part, Anthropic did not directly confirm the specific use of Claude in the Maduro mission and said it would not comment on the details of any classified operation.

Where This Stands Now

While Reuters and other outlets have reported this news, they also noted that the Pentagon, the White House, Anthropic, and Palantir did not provide immediate comment on whether Claude was used or exactly how it was used.


What we do know from the reports:

  • Claude was accessed through Palantir systems that the U.S. Department of Defense uses regularly.
  • The AI model is part of a growing Pentagon effort to use advanced technology, including AI tools, for defence work.
  • Anthropic’s usage policies technically forbid violent or weapons-related use, yet Claude was deployed through channels tied to a forceful military operation.

The Wall Street Journal report marks an important moment in how AI technologies are being used by governments, especially in sensitive military missions.

