The US technology firm Anthropic has found itself at the center of a growing international controversy following reports that its artificial intelligence model, Claude, was used by the Pentagon during a January 3 military operation in Venezuela that left 83 people dead and resulted in the extraction of Venezuela's then-president, Nicolás Maduro.
The operation, which involved coordinated bombing across the Venezuelan capital, Caracas, has already strained diplomatic relations across Latin America. Now, revelations that a leading commercial AI model may have been employed in the mission are intensifying debate about the militarization of artificial intelligence and the ethical boundaries of Silicon Valley’s partnerships with defense agencies.
While there is no official confirmation regarding the extent to which Claude influenced the planning, targeting, or execution of the mission, a February 15 report by The Wall Street Journal stated that Anthropic’s AI model was actively used during the operation. The report cited unnamed sources familiar with the matter.
Anthropic’s terms of service explicitly prohibit the use of its AI systems for violent purposes, weapons development, or surveillance operations that could result in harm. Yet the company declined to confirm or deny the report.
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation,” an Anthropic spokesperson said. “Any use of Claude – whether in the private sector or across government – is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”
The US Department of Defense also declined to comment on whether Claude was deployed during the Venezuela mission.
According to the Journal, Claude may have been utilized through Anthropic’s partnership with Palantir Technologies, a major contractor to the Pentagon and US federal law enforcement agencies. Palantir specializes in data integration, battlefield intelligence analysis, and operational planning software.
If confirmed, the arrangement would highlight a growing trend in which commercial AI tools are embedded within defense contractor platforms, blurring the line between civilian technology and military application.
The reported use of Claude comes amid tensions between Anthropic and US defense officials. The Trump administration, according to sources cited in the report, had considered canceling a defense contract with Anthropic worth up to $200 million over concerns that the company’s usage policies were too restrictive for wartime needs.
Defense Secretary Pete Hegseth signaled this tension in January, stating that the Pentagon would not “employ AI models that won’t allow you to fight wars.” His remarks underscored a widening rift between AI developers advocating guardrails and defense planners demanding operational flexibility.
Anthropic is believed to be the first major AI developer publicly linked to a classified US military operation. It remains possible that additional AI systems were employed for unclassified tasks such as logistics management, document summarization, intelligence analysis, or drone coordination.
Artificial intelligence today can perform a vast range of military-support tasks. From scanning satellite imagery and analyzing intercepted communications to optimizing supply chains and assisting in autonomous navigation, AI tools are already embedded across defense ecosystems.
The controversy surrounding the Venezuela operation comes at a time when the global debate over AI’s role in warfare is intensifying.
“This is the Oppenheimer moment of our generation,” warned Austrian Foreign Minister Alexander Schallenberg at a Vienna conference on autonomous weapons earlier this year. He cautioned that AI-driven warfare could spiral into an uncontrollable arms race, with autonomous drones and algorithm-driven targeting systems making mass killing “a mechanized, near-effortless process.”
The reference to the atomic age is telling. Just as nuclear weapons reshaped global geopolitics in the 20th century, many experts believe AI could redefine power balances in the 21st.
Despite the current controversy, artificial intelligence has long been intertwined with military research.
During World War II, British mathematician Alan Turing played a pivotal role in breaking Germany’s Enigma code—an achievement widely regarded as a precursor to modern computational intelligence. In 1950, Turing proposed what became known as the “Turing Test,” a benchmark for determining whether a machine could exhibit behavior indistinguishable from human intelligence.
In 1956, American computer scientist John McCarthy coined the term “artificial intelligence” at a landmark workshop at Dartmouth College.
Two years later, the US Department of Defense established the Defense Advanced Research Projects Agency (DARPA), originally known simply as the Advanced Research Projects Agency, to drive innovation in military technologies. DARPA would go on to fund early AI research, the internet’s precursor (ARPANET), and numerous autonomous systems programs.
By the 1960s, the Pentagon was investing heavily in teaching computers to simulate basic human reasoning. In 1979, researchers at Stanford’s AI laboratory developed one of the first autonomous vehicles. The 1980s saw rapid advances in machine learning, enabling computers to analyze large datasets and “learn” from patterns.
The first confirmed use of AI in combat operations occurred during the 1991 Gulf War. The Dynamic Analysis and Replanning Tool (DART) optimized logistics and transportation of troops and supplies, reportedly saving millions of dollars.
By 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov, symbolizing AI’s growing capabilities.
In the 2010s, as China and Russia accelerated their own military AI programs, Washington launched a series of initiatives to maintain technological superiority.
In 2014, the Pentagon unveiled the “Third Offset” strategy aimed at preserving US dominance through cutting-edge technologies including AI, robotics, and autonomous systems.
In 2017, the Defense Department launched Project Maven, formally known as the Algorithmic Warfare Cross-Functional Team. Project Maven focused on automating target identification from drone footage, dramatically accelerating battlefield intelligence analysis.
The project sparked internal protests at Google, which had initially partnered with the Pentagon. The backlash illustrated growing unease within the tech community about military AI applications.
Artificial intelligence is now deeply embedded in modern conflicts.
In the Russia-Ukraine war, AI-enhanced drones conduct target identification, autonomous navigation, and strike coordination. Military analysts estimate that drones account for as much as 70 to 80 percent of casualties in some contested areas.
In 2023, Israel reportedly deployed AI tools named Lavender and Gospel during the Gaza conflict. According to accounts cited by TIME magazine, these systems dramatically accelerated target generation.
“During the period in which I served in the target room, you needed a team of around 20 intelligence officers to work for around 250 days to gather between 200 to 250 targets,” Tal Mimran, a lecturer at Hebrew University and former legal adviser in the Israel Defense Forces, told TIME. “Today, the AI will do that in a week.”
Such efficiency gains, critics argue, risk normalizing high-speed lethal decision-making.
The reported involvement of Claude in Venezuela highlights a difficult balancing act for AI firms. On one hand, many companies publicly commit to ethical guidelines restricting violent applications. On the other, defense contracts provide substantial revenue and institutional validation.
Anthropic’s reported Pentagon contract, worth up to $200 million, illustrates the financial stakes.
The broader AI industry is grappling with similar dilemmas. Firms must reconcile investor expectations and government partnerships with internal ethical frameworks and public scrutiny.
If Claude was indeed used in the Venezuela operation, questions will arise about enforcement mechanisms for AI usage policies. Can a company meaningfully control how a client deploys a model once integrated into secure defense systems? And if not, where does accountability lie?
Experts warn that regulatory frameworks are lagging behind technological progress. International efforts to establish norms governing autonomous weapons have stalled amid geopolitical rivalries.
Meanwhile, AI’s capabilities continue to expand. Future battlefields could feature swarms of coordinated autonomous drones conducting intelligence, surveillance, and reconnaissance missions around the clock. Algorithm-driven systems may recommend or even select targets, compressing decision cycles from hours to seconds.
The Venezuela operation may prove to be more than a diplomatic flashpoint. It could mark a pivotal moment in the integration of commercial AI into high-intensity military missions.
As governments race to secure technological advantage, and AI firms navigate ethical and commercial pressures, the world may indeed be approaching a defining crossroads.
Whether this is the Oppenheimer moment of artificial intelligence—or simply another step in its long military evolution—remains to be seen. What is clear is that AI is no longer a distant prospect in warfare. It is already here, shaping decisions in real time, and raising profound questions about who—or what—ultimately decides matters of life and death.