In an era where technology increasingly dominates all aspects of human activity, the role of artificial intelligence (AI) in warfare is growing exponentially. The ongoing war in Ukraine and Israel’s campaigns in Gaza and Lebanon have laid bare the profound and transformative potential AI holds for military operations. Autonomous weapons; intelligence, surveillance, and reconnaissance (ISR); and command and control systems, once the realm of science fiction, now shape the battlefield and future military strategy. The impact of AI on logistics, information management, and even training in militaries around the world is undeniable.
As AI reshapes modern warfare, it has also sparked fierce global competition, particularly among the world’s great powers. The race to develop cutting-edge military AI has become an essential element of national security strategies, with China and the United States emerging as the two primary rivals. China, in particular, has drawn significant attention, with many analysts now suggesting that it may be outpacing the U.S. in the race for AI dominance in military applications.
Global AI Race in Military Affairs: China Versus the United States
Until recently, the United States was widely regarded as the leader in AI innovation, benefiting from its elite research institutions, a strong technology sector, and a favorable regulatory environment. However, there is growing concern among American strategic thinkers that China has not only caught up but may have surpassed the U.S. in AI advancement. China’s rise has been attributed to several factors, including a robust research ecosystem, substantial government investment, and a deliberate focus on integrating AI into its military strategy.
Militarily, China has made significant strides in developing autonomous weapons systems, AI-powered ISR capabilities, and command and control systems. These advancements could shift the global balance of power, especially as AI technology rapidly evolves. A key worry among Western military analysts is that China could deploy AI-driven weapons systems prematurely, before they have been fully tested or proven safe under real combat conditions. Premature deployment would not only create operational risks but could also exacerbate global security concerns.
Perhaps more troubling for the United States and its allies is the possibility that China might export AI-driven military technology to U.S. adversaries, thereby shifting power dynamics in regions critical to American strategic interests. Given the complex nature of AI and its potential for misuse, this prospect presents a major challenge for maintaining global security and ensuring that AI complies with international law.
Andrew Hill and Stephen Gerras of the U.S. Army War College have posited that the adoption of military AI by authoritarian regimes such as China and Russia is driven by several critical factors: demographic challenges, the desire for tighter control over military operations, and the strategic fear of falling behind the U.S. technologically.
Firstly, these regimes are grappling with shrinking workforces and aging populations, trends expected to significantly reduce the pool of recruits eligible for their armed forces. AI provides a way to fill the gaps left by declining human resources: automated systems, including AI-powered drones, autonomous tanks, and robotic infantry, could perform many tasks currently carried out by human soldiers, reducing reliance on dwindling manpower.
Secondly, AI offers a tool for authoritarian regimes to maintain control over their militaries. Autonomous weapons and command systems reduce the need for human decision-makers, decreasing the risk of military dissent or coups. In regimes where leaders fear internal opposition as much as external threats, AI becomes a valuable asset for maintaining centralized control over military operations.
Lastly, from a geopolitical perspective, Russia and China are acutely aware of the strategic disadvantage they could face if the U.S. takes the lead in military AI development. As Hill and Gerras argue, these regimes are motivated to aggressively push the boundaries of AI capabilities to maintain a competitive edge, fearing that failure to do so could leave them vulnerable in future conflicts.
Balancing Military AI with Legal and Ethical Considerations
While the promise of AI in military applications is alluring, the technology also raises profound ethical and legal concerns. The idea of autonomous weapons making life-and-death decisions without human oversight is unsettling for many policymakers and analysts. Some argue that an AI-driven arms race could lead to unintended consequences, particularly if these technologies fall into the wrong hands or are misused in ways that violate international law.
A growing body of opinion favors the creation of rules and frameworks to ensure that AI systems comply with international humanitarian law. The United States has taken the lead in promoting responsible AI use in military applications, advocating for international norms and safeguards to prevent AI-enabled autonomous systems from being deployed recklessly.
In November 2023, the U.S. State Department released a revised Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, setting out ten measures for the responsible development and use of military AI. These measures emphasize transparency, accountability, and compliance with international law. They call for rigorous testing of AI systems, steps to minimize bias, and oversight mechanisms to ensure that human judgment remains a central component of military decision-making. Key recommendations include safeguards against unintended AI behavior, thorough legal reviews of AI capabilities, and efforts to prevent the proliferation of AI-driven weapons systems to non-state actors or rogue regimes.
Concerns over the ethical use of AI in military operations have led to several global initiatives aimed at establishing guidelines for responsible AI use. One such effort was the second “Responsible AI in the Military Domain” (REAIM) summit, held in Seoul, South Korea, in September 2024, following the inaugural summit in The Hague in February 2023. Hosted by South Korea and co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, the summit brought together 1,952 participants from 96 countries, including 38 ministers, to discuss the responsible use of AI in warfare.
The summit produced a 20-clause “Blueprint for Action” addressing the potential humanitarian, legal, and security challenges posed by military AI applications. The document emphasized the importance of ensuring that AI systems comply with international law, particularly international humanitarian law, and called for human control over critical decision-making processes, especially those involving nuclear weapons.
The blueprint also highlighted the risks AI poses to global security, particularly its potential role in exacerbating arms races and the proliferation of weapons of mass destruction (WMDs). The participants called for greater international cooperation to ensure that AI technologies do not contribute to the spread of WMDs by either state or non-state actors, including terrorist groups.
Although the blueprint won broad support, with 61 countries endorsing it, China declined to sign the agreement even though it sent a delegation to the summit. Beijing’s reluctance to endorse the document highlights the complex dynamics at play in the global AI race. China insists that any framework governing AI should respect the security concerns of all nations and avoid triggering an arms race. Chinese officials have stressed the need for “prudent” and “responsible” AI development, while also advocating for international dialogue to prevent miscalculations and conflicts.
AI’s Transformative Impact on Warfare
As the AI arms race intensifies, it is clear that technology will continue to shape the future of warfare in unprecedented ways. The benefits AI brings to the battlefield—improving efficiency, speed, and accuracy—are enormous, but so too are the potential risks. Autonomous systems, once developed and deployed, could make decisions that human operators struggle to predict or control, especially in complex, high-stakes scenarios such as nuclear conflict.
To manage these risks, the international community must continue to work toward frameworks that ensure the safe and responsible use of AI in military contexts. This involves a delicate balance between fostering innovation and protecting global security. The principles set out by the U.S. State Department and initiatives like the Seoul REAIM summit offer a starting point, but there is much more work to be done.
Countries like China and Russia, which remain wary of endorsing Western-led initiatives, will need to be brought into the conversation if any meaningful global consensus is to be achieved. Dialogue, transparency, and trust will be key to preventing an AI-driven arms race from spiraling out of control.
Ultimately, the future of warfare will be shaped not only by the technological advances of AI but also by the decisions policymakers make today. Will AI lead to a safer, more efficient military? Or will it open the door to a new era of conflict, where machines make decisions humans cannot control? The answer depends on how carefully nations navigate the ethical and strategic challenges that AI presents in the years to come.