The ongoing US–Israel war on Iran has increasingly been described by military analysts as “the first AI war,” a phrase that captures both the technological sophistication of the conflict and the profound unease surrounding it. Yet the reality is more complex. Artificial intelligence is not a sudden rupture in the history of warfare but the latest chapter in a decades-long effort to accelerate the military “kill chain” — the sequence of identifying, tracking and eliminating a target.
What has changed is the scale, speed and autonomy with which decisions are now made.
“Sixty seconds – that’s all it took,” a former Mossad agent claimed, referring to the operation that killed Iran’s supreme leader, Ali Khamenei, on February 28, 2026, the opening day of the war. Whether or not the claim is exaggerated, it reflects a deeper transformation: warfare is no longer measured in hours or days, but in seconds.
Modern warfare generates an overwhelming volume of data. Intelligence streams pour in from intercepted communications, mass internet surveillance, satellite imagery and continuous drone video feeds. For decades, the challenge was not gathering information but processing it.
As early as 2010, the US Air Force warned it was “swimming in sensors and drowning in data.” Thousands of analysts were required to review footage and signals manually, often leading to delays that rendered intelligence obsolete by the time decisions were made.
Artificial intelligence has changed that equation.
According to Brad Cooper, the head of US Central Command, AI systems now do much of that filtering. “These systems help us sift through vast amounts of data in seconds,” he said, “so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.”
This acceleration is not merely incremental — it is transformative. A 2024 investigation by Georgetown University revealed that the US Army’s 18th Airborne Corps used AI-assisted intelligence processing to reduce a team of 2,000 analysts to just 20. Tasks that once required hours or even days are now completed almost instantly.
The concept of “compressing the kill chain” has long been central to US military strategy. During the Gulf War, Iraqi forces used mobile Scud missile launchers that could fire and relocate before US forces could respond. This “shoot and scoot” tactic exposed the limits of an intelligence cycle too slow to keep pace.
The response was a sustained investment in technologies that could detect, track and strike targets more rapidly.
A major breakthrough came after the September 11 attacks with the deployment of armed Predator drones. These unmanned systems combined surveillance and strike capabilities in a single platform, allowing operators thousands of miles away to identify and eliminate targets in near real time.
In November 2002, the CIA used such a drone to kill Qaed Salim Sinan al-Harithi, an Al Qaeda leader in Yemen. This marked the beginning of a new era in which remote warfare became normalized across multiple theatres, including Afghanistan, Iraq and Somalia.
The logic was simple: the faster the kill chain, the greater the advantage.
If drones marked the first major leap in speed, AI represents the next. Where human analysts once interpreted data and recommended targets, algorithms now perform much of that work autonomously.
The phrase “bombing at the speed of thought,” coined by historian Nick Cullather in 2003, once seemed like hyperbole. Today, it is closer to reality than ever before.
The US Department of Defense has explicitly framed AI as a competitive necessity. Its strategy document states that “military AI is going to be a race for the foreseeable future, and therefore speed wins.” It adds that the risks of falling behind outweigh the dangers of deploying imperfect systems.
This mindset reflects a broader shift: hesitation is now seen as a liability, not a safeguard.
While the military advantages of AI are clear, its human consequences are far more troubling.
In Gaza, Israeli intelligence sources have described AI systems such as Lavender and Gospel, which assist in identifying targets. According to reports, these systems have been configured to tolerate high levels of civilian casualties — in some cases accepting up to 100 civilian deaths for a single suspected militant.
Since October 7, 2023, more than 75,000 people have been killed in Gaza, according to various estimates. While not all these deaths can be attributed to AI, the technology has undeniably played a role in accelerating the tempo of strikes.
The concern is not just the scale of destruction, but the erosion of human oversight.
In February 2024, a US airstrike killed a 20-year-old student, Abdul-Rahman al-Rawi. A senior US official acknowledged that AI-assisted targeting had been used in the campaign, yet the military later claimed it had “no way of knowing” whether AI was involved in any specific strike.
Such ambiguity underscores a growing accountability gap.
The current war with Iran has amplified these concerns. Since the conflict began, more than 1,200 civilians have been killed, according to the Iranian Health Ministry.
One of the most controversial incidents occurred on February 28, when US forces struck an elementary school in southern Iran, killing at least 175 people, most of them children. The speed of the strike — and the apparent lack of verification — has raised serious questions about the role of AI in the targeting process.
Critics argue that the prioritization of speed has come at the expense of accuracy and ethical judgment.
This concern is reinforced by statements from US Defense Secretary Pete Hegseth, who has emphasized that the military’s objective is “maximum lethality, not tepid legality.” Such rhetoric signals a shift away from the cautious approach that once characterized US military operations.
For years, the US military invested heavily in systems designed to minimize civilian casualties. Legal advisors played a central role in reviewing targets and ensuring compliance with international law.
However, under the current administration, many of these safeguards have been dismantled.
Military lawyers have reportedly been sidelined or dismissed, reducing the level of scrutiny applied to targeting decisions. At the same time, the integration of AI into the kill chain has made it easier to approve strikes quickly, often with minimal human intervention.
The result is a dangerous convergence: faster decisions, fewer checks and greater reliance on opaque algorithms.
Beyond the immediate human cost, AI raises broader strategic concerns. By reducing the time and effort required to conduct military operations, it may lower the threshold for initiating conflict.
When decisions can be made in seconds, the opportunity for reflection — or restraint — diminishes.
There is also the risk of automation bias, in which human operators defer to the recommendations of AI systems even when those recommendations are flawed. Over time, this could lead to a gradual erosion of human responsibility in warfare.
In effect, the role of humans may shift from decision-makers to supervisors, rubber-stamping actions initiated by machines.
The integration of AI into military operations is not limited to the United States and Israel. China, Russia and other powers are investing heavily in similar technologies, raising the prospect of an AI-driven arms race.
Unlike nuclear weapons, which are difficult to develop and deploy, AI systems can be scaled rapidly and integrated into existing platforms. This makes them more accessible — and potentially more destabilizing.
The emphasis on speed further exacerbates the risk. In a high-speed conflict, there may be little time to verify information or de-escalate tensions. Miscalculations could escalate rapidly, leading to unintended consequences.
One of the most pressing challenges posed by AI warfare is accountability.
When a strike results in civilian casualties, who is responsible? The programmer who designed the algorithm? The analyst who reviewed the data? The commander who approved the strike?
These questions remain largely unresolved. International law has yet to catch up with the realities of AI-driven warfare, leaving a legal and ethical vacuum.
At the same time, the secrecy surrounding military AI programs makes it difficult to assess their impact. Governments are reluctant to disclose details, citing national security concerns.
This lack of transparency only deepens public mistrust.
The war on Iran may not be the first AI war in a strict sense, but it is undoubtedly a turning point.
It demonstrates how far military technology has evolved — and how quickly it continues to advance. It also highlights the risks of prioritizing speed over deliberation, efficiency over accountability.
As AI becomes more deeply embedded in military operations, the challenge will be to balance its advantages with the need to protect human life.
Without robust safeguards, the consequences could be devastating.
The promise of AI is that it can make warfare more precise and efficient. The danger is that it may also make it more frequent, more opaque and more deadly.