
AI and the speed of war

The Iran war highlights how AI is transforming warfare not through autonomy, but by accelerating the processes that turn intelligence into action.

By The Beiruter | April 15, 2026
Reading time: 4 min

As a fragile ceasefire settles after the 2026 Iran war, the role of AI in accelerating the pace of targeting has emerged as a defining feature of the campaign. More than 1,000 targets were struck by U.S. forces in the opening 24 hours of the war, and U.S. and Israeli operations hit thousands more in the weeks that followed, as Iran responded with waves of missiles and drones targeting Israel, U.S. bases, and regional infrastructure.

Noah Sylvia, a researcher at the defence and security think tank Royal United Services Institute (RUSI), said the central shift in AI-enabled targeting is not machines replacing human decision-making, but the acceleration of the processes around it. Speaking with The Beiruter, he noted that the time between intelligence gathering and strike tasking has decreased by “orders of magnitude” compared to a decade ago. The implications extend beyond speed, altering how decisions are made and how risk is distributed across conflict zones.


Integration, not automation

Public debate has largely framed AI in warfare as a question of autonomy, namely whether machines are making lethal decisions. Sylvia’s account points in a different direction.

As he explains, AI in targeting is not a single decision-making system but a broader set of tools designed to process and organize vast amounts of battlefield data. One particularly visible example is the United States' use of Palantir's Maven Smart System, which integrates inputs from satellites, drones, radar, and human intelligence into a single interface.

Because military infrastructure remains fragmented and often outdated, with intelligence dispersed across siloed systems, platforms like Maven connect these streams and organize disparate data so that operators can identify and prioritize targets more quickly. The effect is not the automation of decision-making, Sylvia noted, but the compression of the steps that lead up to it.
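To make the idea of "connecting streams" concrete, the following is a minimal, hypothetical sketch in Python of what such data fusion can look like in principle: grouping reports from several source types by target, then ranking targets by corroboration, freshness, and confidence. The Report class, the fuse_and_rank function, and the scoring formula are illustrative assumptions made for this sketch, not a description of Maven or of any real targeting system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Report:
    """One intelligence report from a single source type.

    All fields are illustrative; real systems track far richer metadata.
    """
    source: str          # e.g. "satellite", "drone", "radar", "humint"
    target_id: str       # identifier the reports are being fused around
    confidence: float    # sensor- or analyst-assigned confidence, 0.0 to 1.0
    observed_at: datetime

def fuse_and_rank(reports: list[Report], now: datetime) -> list[tuple[str, float]]:
    """Group reports by target and return targets sorted by a fused score.

    Hypothetical scoring: more independent corroborating sources and
    fresher observations raise the score. Freshness decays to zero over
    six hours, an arbitrary window chosen for the example.
    """
    by_target: dict[str, list[Report]] = {}
    for r in reports:
        by_target.setdefault(r.target_id, []).append(r)

    scored = []
    for target_id, rs in by_target.items():
        corroboration = len({r.source for r in rs})  # count of distinct sources
        newest_age = min(now - r.observed_at for r in rs)
        freshness = max(0.0, 1.0 - newest_age / timedelta(hours=6))
        confidence = max(r.confidence for r in rs)
        scored.append((target_id, corroboration * freshness * confidence))

    return sorted(scored, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    now = datetime(2026, 3, 1, 12, 0)
    reports = [
        Report("satellite", "T-104", 0.8, now - timedelta(hours=1)),
        Report("drone", "T-104", 0.9, now - timedelta(minutes=30)),
        Report("humint", "T-207", 0.6, now - timedelta(hours=5)),
    ]
    for target, score in fuse_and_rank(reports, now):
        print(target, round(score, 2))
```

The point of the sketch is the bottleneck it removes: instead of reading four separate feeds, an operator sees one ranked list, which is the kind of compression of preparatory steps Sylvia describes.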

That emphasis on integration is reflected in recent research. A 2025 report by the Center for Security and Emerging Technology finds that AI systems are increasingly used to process large volumes of information during military operations, enabling faster decision-making under pressure. The RAND Corporation similarly concludes that AI’s primary contribution lies in improving the “finding” function of warfare by combining intelligence from multiple sources in real time.

For Sylvia, this is the core of the shift: the systems do not make decisions faster, they enable the humans using them to act faster.


Compression and the verification gap

That acceleration, however, produces significant consequences. As the targeting cycle compresses, the time available for verification narrows, placing higher pressure on human operators to validate information under tighter timelines.

Sylvia is cautious about outcomes, arguing that whether faster targeting increases or decreases errors and civilian harm remains an open question, even as the structure of the system introduces new risks.

A brief example from the war illustrates how that pressure can play out in practice. In Minab, Iran, he noted, a strike on February 28 hit a building that had previously been part of a military compound but had since been converted into a school. Because the targeting profile had not been updated, the failure lay not in identifying the target but in the underlying data.

Research suggests this is not incidental. A 2026 study in International Law Studies finds that AI-enabled decision-support systems can compress the time available for precautionary assessments required under the law of armed conflict, increasing the likelihood that operators fail to fully assess the underlying intelligence. The International Committee of the Red Cross has similarly warned that the use of AI in targeting introduces layered risks, including incomplete or outdated data and limited transparency into how outputs are generated, particularly under tight decision timelines.

The issue is therefore not simply whether systems are accurate but whether there is still sufficient time to question them.


Speed, strategy, and asymmetry

If AI creates an advantage, it is one of tempo. But, as Sylvia argues, that advantage is often misunderstood.

He points to the concept of “decision advantage,” widely used in North Atlantic militaries, which holds that acting faster than an opponent can determine the outcome of a conflict. Systems that compress the targeting cycle are designed to produce that edge.

But that logic, he cautions, has limits.

“Striking faster than the enemy only confers an advantage if the enemy is operating on the same terms,” he said. In practice, adversaries rarely do.

The Iran war reflects that tension. The United States and Israel operated with clear technological superiority, combining air dominance with highly integrated targeting systems. Yet, as Sylvia noted, speed did not translate cleanly into strategic outcomes. The assumption that faster targeting would produce decisive results ran up against a more familiar dynamic, as adversaries adapted.

Rather than match that speed, Iran and its network of proxies often avoided large, visible targets and relied on cheaper systems like drones and missiles to impose costs without engaging directly. Attacks on infrastructure and sustained low-cost strikes forced more advanced militaries to respond repeatedly, often at far higher cost.

For Sylvia, the implication is ultimately strategic. Technological superiority, he argues, has too often been treated as a substitute for strategy. Faster targeting and greater firepower can shape the battlefield, but they do not resolve the conflict itself, particularly when the opposing side is willing to absorb losses and adapt its methods. If anything, the war suggests that speed has become a defining condition of conflict, not a guarantee of its outcome.
