A New Arms Race and Global Stability

November 9, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

Two capabilities excite military planners more than anything else: first, the ability to get inside the decision-making loop of an adversary, and stay a step ahead of their responses in a conflict; second, the ability to sense the battle space fully and see what is going on at any place at any time (Hitchens 2019). And nothing makes military planners more jittery than the prospect of body bags, hence the attraction of finding ways to wage “bloodless” wars or do violence at a distance.

Artificial intelligence (AI) systems promise them a threefold bounty. First, data from different domains can be fused and weighted in real time by weapons platforms and related decision-support systems. Force can thus be applied at a faster tempo. Second, human limitations in digesting sensor inputs, say, live video feeds from a variety of locations, can be overcome for enhanced situational awareness, which could enable tailored and timely application of force for maximum effect. Finally, and perhaps not today but soon, machines can step into front-line combat roles, which mitigates the political implications of human casualties.
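The essay does not specify how such multi-domain fusion would work in practice. Purely as an illustration of what “fusing and weighting” data from different sources can mean, the sketch below combines independent estimates of the same quantity by inverse-variance weighting, a textbook approach in which less noisy sources count for more. The function name, sensor types and noise figures are invented for the example.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighting: combine independent sensor estimates
    of the same quantity, trusting low-noise sources more heavily."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    # The fused estimate has lower variance than any single source.
    fused_variance = 1.0 / np.sum(weights)
    return fused, fused_variance

# Hypothetical radar, satellite and acoustic bearings to a target
# (degrees), with each sensor's error variance.
print(fuse_estimates([41.0, 43.5, 42.2], [4.0, 9.0, 1.0]))
```

Real systems would, of course, fuse heterogeneous inputs (tracks, imagery, signals) with far more elaborate filtering, but the weighting logic is the conceptual core of the faster tempo the paragraph describes.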

Of course, there are a number of important hurdles to be overcome first. The technology is still not mature. Very few AI models today can handle cases that fall outside the classes learned from their training data, nor can they keep learning incrementally in real time once trained. There are safety issues; for example, image recognition models can be easily spoofed by so-called “adversarial” attacks (Vincent 2017). Even if an algorithm performs well in simulations, trust in the human-AI interface is still to be established, including in domains such as autonomous cars, which have had years of testing and billions in investment. Commanders who have seen sailors or aviators struggle with simple digital dashboards would be loath to trust obscure and inscrutable algorithmic models. Not least, significant legal and ethical challenges stand in the way of ceding human control and judgment to black-box algorithms.
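Vincent (2017) describes researchers fooling an image classifier with a 3D-printed turtle. As a hedged sketch of the general technique, not the specific attack reported there, the snippet below implements the fast gradient sign method, one of the simplest adversarial perturbations; the function name, model, input tensor and epsilon budget are placeholders assumed for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Fast gradient sign method: shift every pixel slightly in the
    direction that most increases the classifier's loss, yielding an
    image that looks unchanged to humans but can flip the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in valid range

# Usage (placeholders): model is any trained classifier; image is a
# (1, 3, H, W) tensor scaled to [0, 1]; true_label is a (1,) tensor.
# adv = fgsm_perturb(model, image, true_label)
```

The paragraph's worry follows directly: a defence tuned to one such perturbation says little about robustness to the next.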

At this stage of development, a useful analogy is the financial sector, where the stakes (and potential rewards) are high, as is the interconnectedness of risk, often in ways that are not so explicit or clear. A report by the Basel, Switzerland-based Financial Stability Board (FSB) highlights how “third-party dependencies” could grow as AI is introduced in financial services and how “new and unexpected forms of interconnectedness” could arise, for instance, as previously unrelated data sources come together (FSB 2017, 1).

Traditionally, arms control experts have looked at the introduction of new technologies of warfare from the perspectives of both stability and compliance with existing legal norms. In the latter context, there is no specific injunction against AI systems in existing arms control treaties on conventional weapons or weapons of mass destruction (nuclear, biological and chemical). How AI systems could impact existing international humanitarian law (IHL) principles such as distinction, proportionality and military necessity has been the subject of discussions on “lethal autonomous weapons systems” held in Geneva, Switzerland, since 2014 under the 1980 Convention on Certain Conventional Weapons (CCW) (Gill, forthcoming 2020). A number of options to regulate such systems have been proposed to address concerns related to the undermining of IHL, in particular the blurring of human accountability under the laws of armed conflict. Moral and human rights concerns have also been cited in proposing a complete ban on systems that can take life-and-death decisions without human intervention (Guterres 2018).

The stability arguments highlight several risks: a lowered threshold for the use of lethal force; a new arms race (arms race stability); miscalculation and escalation due to the use of autonomous systems during tense face-offs (crisis stability); and an undermining of the fragile balance in strategic weapons (deterrence stability) due to an AI-driven breakthrough in strategic offence or defence. During the Cold War, the two superpowers prohibited the deployment of nationwide missile defence systems through a bilateral treaty, since such technologies could have created an illusion of invulnerability and tempted one side to launch pre-emptive nuclear strikes on the other (Korda and Kristensen 2019). They invested instead in invulnerable systems, such as submarines carrying strategic missiles, to shore up deterrence. Today, AI could make it easier to follow nuclear-armed submarines as they leave port for their deterrence patrol stations, thereby allowing an adversary to neutralize what has thus far been considered the invulnerable leg of the nuclear triad. Further, an outlier could introduce immature AI technologies into highly destructive conventional or nuclear arms as it seeks to restore deterrence with a more powerful adversary or nudge it back to the negotiating table with outlandish systems. (See, for instance, the discussion of Russia’s Poseidon underwater autonomous nuclear delivery system in Boulanin [2019].)

Despite the headlines and the catchy titles, the nature and the extent of the AI arms race are hard to discern at this stage. In many ways, the battlefield is still techno-commercial (Lee 2018). What is worrisome is that the AI technological rivalry among the major powers is coming at a time when mutual trust is low and traditional structures for dialogue on arms control are withering away. Equally, because there are no dramatic outward markers of progress, unlike the Cold War experience of nuclear tests and missile launches, and because AI algorithms and their training data sets are inherently opaque, each side is guessing at what the other is up to and probably attributing more AI military intent and capability than is warranted. The upheaval and economic losses created by the COVID-19 pandemic have added to the uncertainty and mutual suspicion. The psychological backdrop for an arms race dynamic is very much in place.

Where are we likely to see this arms race play out first, and does it have the potential to become a global arms race, as has been the case with nuclear arms?

AI systems for perimeter defence of naval task forces, anti-submarine warfare, mine detection and counter-mine measures, aerial defence against drones at sea, and seabed-deployed sensor networks, as well as submersibles for protecting communications cables, could see investments and eventual deployments by advanced navies. Investments in autonomous aerial combat vehicles, autonomous swarms, and target detection and acquisition systems, for force application from the air and for the navigation and control of supersonic and hypersonic combat systems, are likely to grow as well. On the ground, a range of logistics and support functions, as well as over-the-horizon reconnaissance and attack capabilities against high-value targets, are likely to see investments. AI use in dirty and dangerous jobs, such as counterterrorism or IED (improvised explosive device) clearing operations, would also grow. In all these areas, since the relative quality of AI systems will be harder to assess than that of physically embodied weaponry, contestants will be left less certain of each other’s capabilities, increasing the risk of miscalculation and of disproportionate responses in capability development and deployment.

A significant area of AI use is likely to be cyberwarfare capabilities. Today, cyberweapons do not mutate during use, but tomorrow they could do so autonomously in response to defensive measures. This could endow them with strategic effects.

Hopefully, strategic systems themselves will not see early and consequential integration of the AI technologies available today. This hope rests mainly on the culture of strategic communities, which prefer hard-wired systems with calculable certainties and failure rates. The risk of “entanglement” will remain, nonetheless, as new systems bring new types of data, actors and domains into the calculations of the strategic communities (for a pessimistic view, see Johnson [2020]). There will also be pressure on the offence-defence equation if there are AI breakthroughs in areas such as submarine detection and the communications with, or control of, hypersonics. Another concern involves perceptions of parity among the players; some nuclear-armed states may gain an early-mover advantage by using AI to better manage the conventional and sub-conventional rungs of the conflict escalation ladder, which might force others to threaten increased reliance on early nuclear weapons use or to adopt riskier deployment postures.

In terms of geographical theatres of contestation, AI systems are likely to be deployed earliest in the maritime domain, in areas such as the North Atlantic/Arctic, the Gulf and the South China Sea in the Indo-Pacific. This is because of the sensitivity attached to shifts in the balance of power in these areas and the operational focus of the major military powers.

Do AI weapons systems have the potential to impact the global balance of power? Possibly — but not so much as a stand-apart variable different from other trends driving shifts in power today. In that sense, relying on the historical experience with nuclear weapons can only take us so far with regard to AI systems. Proliferation could still turn the AI arms race among the major powers into a global phenomenon, and regional AI competition could throw up a few nasty deployment surprises. But the global balance of power will be shaped by many interdependent factors; digital technologies will be just one among many.

To conclude, what should be the key areas of immediate action for the international community to prevent the AI arms race from going global and to manage its international security consequences?

First, bring autonomous weapons systems into the agendas of current dialogues on disarmament, arms control and non-proliferation. Doing so would enhance transparency and encourage better understanding of intentions, capabilities and, eventually, deployment doctrines. It would also encourage sharing of best and “worst” practices, just as shared learning on safety and security of nuclear weapons was built up during the Cold War.

Second, discourage the commingling of strategic systems and AI-based decision-support systems. This work could take the form of political understandings among the nuclear-armed states. Additional understandings could be built around AI use that might impinge on the offence-defence equation.

Third, pursue the discussions that have been taking place in Geneva among the United Nations’ Group of Governmental Experts (2018) working in this area, to reach agreement on national mechanisms to review autonomous weapons systems against obligations under IHL and to exclude those systems that cannot comply with such obligations. Such an agreement could be accompanied by regular exchanges of experience on the quality of the human-machine interface. In this way, use scenarios in which the pace of battlefield action exceeds the capacity of human decision makers to exercise effective supervision, or to correctly interpret the information that AI systems are relaying to them, could be identified and avoided.

Decades ago, Jonathan Schell highlighted the danger that hair-trigger alert systems pose and argued powerfully for abolishing nuclear weapons (Schell 1998). Today, we need to advocate similarly for the “gift of time” in regard to autonomous weapons. After all, when we do something as quintessentially human as taking a deep breath, we allow wisdom to flow into the moment and enhance the quality of our actions.

Works Cited

Boulanin, Vincent, ed. 2019. The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk. Volume I: Euro-Atlantic Perspectives. Stockholm, Sweden: Stockholm International Peace Research Institute.

FSB. 2017. Artificial intelligence and machine learning in financial services: Market developments and financial stability implications. Basel, Switzerland: FSB. www.fsb.org/wp-content/uploads/P011117.pdf.

Gill, Amandeep S. Forthcoming 2020. “The changing role of multilateral forums in regulating conflict in the digital age.” International Review of the Red Cross.

Group of Governmental Experts. 2018. Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. CCW/GGE.1/2018/3. October 23.

Guterres, António. 2018. “United Nations Secretary-General — Remarks at ‘Web Summit.’” Speech, Lisbon, Portugal, November 5. www.un.org/sg/en/content/sg/speeches/2018-11-05/remarks-web-summit.

Hitchens, Theresa. 2019. “Navy, Air Force Chiefs Agree To Work On All Domain C2.” Breaking Defense, November 12. https://breakingdefense.com/2019/11/exclusive-navy-air-force-chiefs-agree-to-work-on-all-domain-c2/.

Johnson, James S. 2020. “Artificial Intelligence: A Threat to Strategic Stability.” Strategic Studies Quarterly 14 (1): 16–39. www.airuniversity.af.edu/Portals/10/SSQ/documents/Volume-14_Issue-1/Johnson.pdf.

Korda, Matt and Hans M. Kristensen. 2019. “US ballistic missile defenses, 2019.” Bulletin of the Atomic Scientists 75 (6): 295–306.

Lee, Kai-Fu. 2018. AI Superpowers: China, Silicon Valley, and the New World Order. Boston, MA: Houghton Mifflin Harcourt.

Schell, Jonathan. 1998. The Gift of Time: The Case for Abolishing Nuclear Weapons Now. New York, NY: Metropolitan Books.

Vincent, James. 2017. “Google’s AI thinks this turtle looks like a gun, which is a problem.” The Verge, November 2. www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Amandeep Singh Gill is the director of the International Digital Health & AI Research Collaborative project at the Graduate Institute of International and Development Studies in Geneva, Switzerland.
