Autonomy in Weapons Systems and the Struggle for Regulation

November 28, 2022

This essay is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from discussions at a webinar series.

War is changing. Decisions on the battlefield that used to only be made by human beings can now be delegated to machines. This raises fundamental ethical, legal and political questions regarding who or what — human or machine — is deciding what, when and where, in particular when the military use of force is concerned. The United Nations, more specifically the Convention on Certain Conventional Weapons (CCW) in Geneva, has been discussing weapon autonomy since 2014. This essay reconstructs that process along its three key struggles: the struggle for public and diplomatic awareness after scholarly communities had initially raised the issue; the struggle for conceptual clarity when the debate left expert circles and gained public prominence and diplomatic traction; and the long and ongoing struggle for international regulation — with changes potentially coming after 2022.

The Struggle for Awareness

In 2021, United Nations Security Council report S/2021/229 made headlines. “Lethal autonomous weapons systems,” it stated, had attacked fighters in a battle in Libya in 2020 (United Nations Security Council 2021). It remained unclear, however, whether the quadcopters described in the report were remotely piloted, that is, whether a human was involved in the decision to apply force, and whether humans were in fact targeted, harmed or killed in the incident. But the term lethal autonomous weapons systems (LAWS), idiosyncratic UN parlance for weapons capable of selecting and engaging targets without human intervention, had once again made a big splash on newspaper front pages and websites around the globe. The public realized that weapons technology is at a point where algorithmically hunting and killing people is no longer the stuff of dystopian science fiction.

Did anybody see this coming? The answer is yes. In fact, most subject matter experts were not particularly surprised — after all, loitering munitions not unlike those described in the UN report have had a presence on battlefields for almost 20 years. In other words, endowing weapons with the capability to select and engage targets without human intervention can be traced back decades, and so can the scholarly inquiry into the implications.

Scientific Concerns, Civil Society and the UN Arms Control and Disarmament Agenda

Awareness in expert circles started to condense into a persistent field of research in the 2000s. An important milestone in creating wider recognition was the formation of the International Committee for Robot Arms Control (ICRAC) in 2009 and 2010, a global network of scholars (of which this author has been a member since 2010) engaged in working on the topic from the vantage point of their various disciplines.1 In 2012, the US Department of Defense presented the first doctrine on autonomy in weapons systems, lending additional credibility to the issue — and drawing criticism.2

Prompted by ICRAC and the concerns voiced by the scientific community, the non-governmental organization (NGO) Human Rights Watch, a key player in past humanitarian disarmament processes, began forming a global civil society coalition of NGOs — the Campaign to Stop Killer Robots.3 Its first goal was to move the issue onto and upwards on the arms control and disarmament agenda of the United Nations in Geneva. This succeeded with extraordinary swiftness in 2014, and the CCW became the diplomatic and scholarly focal point of the global discussion surrounding autonomy in weapons systems.

The Struggle for Clarity

There are two main areas of confusion regarding autonomy in weapons systems, which, over time, have begun to be cleared up in both scholarly and diplomatic discourse. The first one is tied to the conceptualization of the subject matter. The second one is tied to the relevance of technology in relation to human agency.

A Platform-Agnostic, Functional Perspective

Definitional battles are no longer as fierce as they used to be at the CCW. However, even in 2022, some stakeholders still seek a “possible definition of LAWS,” the rationale being that arms control always requires a precise categorization of the object in question before any regulative action can be taken. In the case of weapon autonomy, however, defining a class of objects along specific technical characteristics is a non-starter. After all, almost any current and future weapons system can conceivably be endowed with autonomous functionality regarding target selection and engagement, and no outside inspection can tell how dependent any given system is on human input. In addition, autonomy will, in many cases, be distributed in a system-of-systems architecture, independent of one specific platform.

The conceptual challenge thus cannot be met by trying to define a weapon category — “LAWS,” separated from “non-LAWS” by a list of specific criteria — and then counting or capping its numbers or prohibiting it. Instead, it requires conceptualizing and regulating the interaction between humans and machines in twenty-first-century warfighting. A “functionalist” approach thus treats the issue as “autonomy in weapons systems” rather than “autonomous weapons,” that is, as a question of whether a machine rather than a human performs a certain function (or certain functions) during a system’s operation at certain points in time. Every military operation concluding with an attack on a target can be systematized along the steps of a targeting cycle, namely finding, fixing, tracking, selecting and engaging the target. An autonomous weapon completes the entire cycle — including the final stages of selecting and engaging the target — without human intervention. In the debate about weapon autonomy, the focus rests mainly on those last two “critical” functions because most of the strategic, legal and ethical implications of weapon autonomy derive from giving up human control over them.
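To make this functional framing concrete, the following minimal sketch (a toy illustration in Python, not drawn from any real weapons software; the names and the stages-as-code are this example's assumptions) models the targeting cycle as stages and checks whether a machine performs the two critical functions without human intervention:

```python
from enum import Enum

class Stage(Enum):
    FIND = "find"
    FIX = "fix"
    TRACK = "track"
    SELECT = "select"   # critical function
    ENGAGE = "engage"   # critical function

# The two "critical" functions at the end of the targeting cycle.
CRITICAL = {Stage.SELECT, Stage.ENGAGE}

def autonomy_in_critical_functions(machine_performed: set[Stage]) -> bool:
    """Return True if the machine performs target selection and engagement
    without human intervention. This is the property the functionalist
    framing asks about, regardless of platform type or technology."""
    return CRITICAL <= machine_performed

# A munition that finds, tracks, selects and engages entirely on its own:
print(autonomy_in_critical_functions(set(Stage)))                  # True
# A remotely piloted drone where a human selects and engages:
print(autonomy_in_critical_functions({Stage.FIND, Stage.TRACK}))   # False
```

The point of the functionalist view is precisely this: the check concerns who performs which function at which point in time, not what kind of platform carries it.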

This functional understanding has gained considerable traction at the UN level. It has been adopted by the United States in its doctrine,4 by the International Committee of the Red Cross (ICRC) in its position papers5 and by a majority of civil society organizations, scholars and diplomats.6

A Technology-Agnostic, Human-Centric Perspective

As mentioned above, autonomy in weapons systems is not as new as headlines tend to make it out to be. Terminal defence systems with autonomy in their critical functions have been in use for decades — take Patriot and Phalanx as just two examples.7 Machine learning (or whatever is currently in vogue in the wide field that is artificial intelligence [AI]) is not necessarily required for giving a weapons system autonomy in its critical functions.

But AI is a new and powerful enabler. Recent innovations allow weapon autonomy to be applied on a much larger scale. Collecting and fusing various sensor data is what enables machines to select and engage complex “target profiles.”

Nevertheless, it is possible to remain largely agnostic regarding the precise characteristics of the underlying technologies, especially in the UN debates. After all, the CCW is a framework convention focused on international humanitarian law (IHL). Since legal decisions under IHL explicitly call for human judgment, since only humans can be moral agents, and since humans are well-suited to act as circuit breakers in runaway automated processes, the human element might as well be made the focus of the debate regarding all legal as well as ethical and strategic concerns. Considerable headway has been made in the CCW in recent years in this regard. More and more stakeholders converge on the view that the issue is best characterized by asking under what circumstances autonomous (or automatic; the wording is not as crucial as some make it out to be) target selection and engagement takes place within an overall framework of human command and control.

The Struggle for Regulation

As already alluded to above, one of the strategically relevant implications is that it is impossible for humans to intervene as a circuit breaker if operations at machine speed go awry. Weapon autonomy runs the risk of unpredictable outcomes, with a real possibility of swift and unwanted escalations from crises to wars, or, within existing armed conflicts, to higher levels of violence. This risk of “flash wars” is a first major incentive for regulation.

From a legal perspective, secondly, one of the key implications of weapon autonomy is the open question as to whether machines can apply military force in compliance with IHL. The assumption that machines are — and will be for the foreseeable future — unable to, for instance, discriminate between combatants and civilians has been a major point of contention, linking the automation of warfare directly to concerns about an increase in civilian harm and worries about a lack of human accountability in the case of war crimes.

From an ethical point of view, lastly, IHL compliance is not even the most fundamental issue at stake. The narrow focus on discrimination implies that, as long as civilians remain unharmed, algorithmically attacking combatants could be acceptable. But combatants, too, are imbued with human dignity — and being killed by a mindless machine that is not a moral agent infringes on that dignity. Algorithmic targeting of humans reduces them to data points and strips them of their right to be recognized as fellow human beings at the time of their wounding or death. This matters especially from a wider societal point of view because modern warfare, in particular in democracies, is already decoupling societies from war in terms of political and financial costs. A society that additionally outsources moral costs, no longer concerning itself with the act of killing and burdening no individual combatant's psyche with the accompanying responsibility, risks losing touch not only with democratic norms but also with fundamental humanitarian norms.

Qualitative Regulation and the Concept of Meaningful Human Control

Since the quantitative arms control paradigm of the twentieth century, with its numerical limits and counting rules, cannot be the cornerstone of regulation regarding weapon autonomy, a more promising, qualitative answer lies in clearly stipulating and codifying the human role and the circumstances of autonomous target selection and engagement in a differentiated and context-dependent manner. What is widely accepted as well is that the human element in the human-machine interaction does not have to imply permanent remote control. Instead, it is recognized that weapons systems already do — and increasingly will continue to — perform functions without human intervention and supervision. “Meaningful human control” has thus become one of the most widely used terms to describe the role humans are to keep playing, especially regarding the critical functions of the targeting cycle. It stipulates that the human operator should be able to foresee a weapon’s effects on the battlefield, administer its operation in a manner that is compliant with IHL and trace its performance back to human agency.

But operational context is key. There is no “one-size-fits-all meaningful human control.” Operationalizing human control is a case-by-case process. Human control also has to be “baked into” the system at the design stage. Understanding the human element as human control both “by design” and “in use” hence creates a need to differentiate. On the one hand, defending against lifeless incoming munitions remains an application of autonomy that can rely on a weapon’s critical functions being performed without human intervention, provided the decision is delegated to a machine set up with spatial and temporal limits appropriate to the operational context. On the other hand, selecting and engaging targets in a cluttered environment, at points in time that are hard to ascertain in advance, requires much greater human judgment and agency. In other words, in this case, humans have to decide what, when and where to engage, in particular when an application of military force could endanger human life.
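To illustrate how "by design" limits and "in use" differentiation might interact, here is a purely hypothetical sketch; the parameter names, thresholds and target profiles are assumptions made for illustration, not anything proposed at the CCW or implemented in a real system:

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_profile: str            # e.g. "incoming_munition", "person"
    seconds_since_activation: float
    distance_from_asset_km: float

# Hypothetical "by design" limits, set before deployment.
MAX_WINDOW_S = 120.0    # temporal limit on autonomous operation
MAX_RADIUS_KM = 5.0     # spatial limit around the defended asset
NON_HUMAN_PROFILES = {"incoming_munition", "radar_emitter"}

def may_engage_autonomously(e: Engagement) -> bool:
    """A machine may complete select-and-engage on its own only within
    narrow spatial and temporal bounds and only against non-human target
    profiles; every other case falls back to a human decision."""
    return (
        e.target_profile in NON_HUMAN_PROFILES
        and e.seconds_since_activation <= MAX_WINDOW_S
        and e.distance_from_asset_km <= MAX_RADIUS_KM
    )

# Terminal defence against an incoming missile: delegation acceptable.
print(may_engage_autonomously(Engagement("incoming_munition", 30.0, 1.2)))  # True
# A person detected in a cluttered environment: human judgment required.
print(may_engage_autonomously(Engagement("person", 30.0, 1.2)))             # False
```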

At the CCW in Geneva, convergence in the diplomatic talks is slowly but surely taking place in these terms, resulting in much less “talking past each other” regarding the regulatory challenge and how to address it. Convergence is also observable regarding the contours of a potential regulation. A two-tiered approach combining prohibitions and regulations is being discussed by many as a promising structure.

First, specific applications of weapon autonomy are unacceptable to many members of the international community and would thus be prohibited. The ICRC and many states suggest that this pertains to all weapons with target profiles representing humans, as well as to uncontrollable systems with potentially unforeseeable or indiscriminate effects on the battlefield.

Second, autonomous application of force against target profiles other than those intended to represent humans, such as various military objects, is acceptable but requires certain limits and constraints, that is, positive obligations to curb ethical risks, safeguard IHL compliance and address security and safety concerns. Those limits and constraints can be temporal or spatial and can, generally speaking, be subsumed under the notion that meaningful human control must be preserved in the design of a weapons system and, in its use, through adequate tactics, techniques and procedures.
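Schematically, the two-tiered logic can be rendered as a toy decision rule; the two Boolean inputs below are crude simplifications of the criteria named above, chosen for illustration, and not a legal test:

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"   # first tier: banned outright
    REGULATED = "regulated"     # second tier: permitted under positive obligations

def classify(targets_humans: bool, controllable: bool) -> Tier:
    """Sort a proposed application of weapon autonomy into the two tiers
    discussed at the CCW: anti-personnel target profiles and systems whose
    effects cannot be controlled would be prohibited; everything else would
    be permitted subject to limits preserving meaningful human control."""
    if targets_humans or not controllable:
        return Tier.PROHIBITED
    return Tier.REGULATED

print(classify(targets_humans=True, controllable=True))    # Tier.PROHIBITED
print(classify(targets_humans=False, controllable=False))  # Tier.PROHIBITED
print(classify(targets_humans=False, controllable=True))   # Tier.REGULATED
```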

The Struggles Yet to Come

It is fair to say that a soft “proto norm” has already emerged and taken hold. After all, in 2022, virtually no one can contemplate and discuss autonomy in a weapon’s critical functions without being pointed to the serious concerns involved, the open letters published by the scientific community, the ongoing UN debates, emerging domestic legislation, and large bodies of scholarly work in moral philosophy, international law, political science and more.

That said, the regulatory structure sketched above is far from universally accepted, nor is — at the diplomatic level — the notion that the next step should be codifying it as a legally binding instrument. The last CCW Review Conference (RevCon) at the end of 2021 demonstrated that at least a handful of states parties are clearly intent on preventing the CCW — a consensus body — from making any headway toward regulation.8

But pressure on the United Nations keeps increasing. Eight years into the process, civil society as well as many CCW states parties are frustrated by the lack of tangible results and the poor outcome of the RevCon. If the CCW keeps offering nothing but non-committal deliberations after 2022, the calls for moving the process into another forum will most certainly increase in volume and number. The CCW would then — once again — have served only as an incubator for arms control processes that eventually come to completion in other fora.


  1. See www.icrac.net/.
  2. See Department of Defense (2012).
  3. The NGO coalition is now called the Stop Killer Robots campaign. See www.stopkillerrobots.org/.
  4. See Department of Defense (2012).
  5. See ICRC (2016).
  6. See Stop Killer Robots (n.d.) and Sauer (2020).
  7. Patriot is a land-based surface-to-air missile system. Phalanx is a close-in weapon system for naval ships using a Gatling gun. Both feature automatic firing modes.
  8. The logjam at the 2021 CCW RevCon ended up prompting an unusual joint declaration by Austria, Belgium, Brazil, Chile, Finland, Germany, Ireland, Italy, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, South Africa, Sweden and Switzerland in which the 16 countries state that “it is abundantly clear that the…results to date are not sufficient to address the urgency of this issue. At the present rate of progress, the pace of technological developments risks overtaking our deliberations.” See https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2021/RevCon/statements/17Dec_Switzerland-joint.pdf.

Works Cited

Department of Defense. 2012. “Department of Defense Directive.” No. 3000.09. November 21 (updated in May 2017). Washington, DC: Department of Defense. www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf.

ICRC. 2016. “Views of the International Committee of the Red Cross (ICRC) on autonomous weapon systems.” April 11. www.icrc.org/en/download/file/21606/ccw-autonomous-weapons-icrc-april-2016.pdf.

Sauer, Frank. 2020. “Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible.” International Review of the Red Cross 102 (913): 235–59. https://doi.org/10.1017/S1816383120000466.

Stop Killer Robots. n.d. “Emerging tech and AI.” www.stopkillerrobots.org/stop-killer-robots/emerging-tech-and-artificial-intelligence/.

United Nations Security Council. 2021. “Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council.” March 8. S/2021/229. https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Frank Sauer is the head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich.