Autonomous Weapons: The False Promise of Civilian Protection

November 28, 2022

This essay is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from discussions at a webinar series.

Emerging military technologies employing advancements in artificial intelligence (AI) and machine learning — fitted with improved sensor technology and robotics — are expected to transform warfare (Vergun 2020). Some of these technologies, in particular autonomous weapons systems (AWS), commonly known as killer robots, which will operate without significant human assessment or oversight, have been described as the “third revolution in warfare” (Lee 2021).

We have not yet seen systems in operation that can find, select and engage targets without humans in charge. Some analysts believe that such systems are still under development (Knight 2022; Woodson 2020). Others contend that they exist but have not been deployed on a scale that would allow any claims about them to be tested (Kallenborn 2022).

Advanced militaries have clearly indicated their interest in such systems. Reasons range from speed of response and enhanced situational awareness to the ability to overwhelm the defence systems of adversaries. However, several countries, including the United States, also assert that AWS can strengthen the implementation of international humanitarian law (IHL) and protect civilians by limiting collateral damage (Reaching Critical Will 2019). As Robert Work, former US deputy secretary of defense, has contended, “it is a moral imperative to at least pursue this hypothesis” (Reuters 2021).

But there are risks in developing and deploying autonomous weapons. The unreliability and fragility of the AI technologies central to these systems make the testing of such a hypothesis potentially harmful to civilians (Morgan et al. 2020). There are also operational concerns; for example, it is not clear how AWS will interact with crewed platforms in wider military operations. Nor is there any concrete information on how AWS respond in complex and rapidly changing environments such as battlefields.

Experts also do not agree on how to apply existing laws and norms to these new weapons (Winter 2022). With so much unknown and so little agreement, all claims about the direct and indirect impacts of AWS on civilians and civilian infrastructure must be carefully scrutinized.

Our Current Understanding of Autonomous Weapons

Have we seen active deployment of killer robots? The answer depends on how this technology is defined. Some countries and experts contend that fully autonomous weapons systems do not exist and would not be wanted by any country (Jeangène Vilmer 2021). In their view, any human involvement means that the systems are not autonomous. With this restrictive definition, the whole problem of autonomous weapons disappears.

But the autonomous capabilities of certain systems do seem to be increasing. Active use of the Turkish-made Kargu-2 loitering munition in the conflict in Libya appears to demonstrate significant autonomous capabilities, including the ability to engage targets independently (Kallenborn 2022). However, the maker of the Kargu-2 is coy about its AI capabilities, offering no specific information on whether it can function on its own or always functions under the control of human operators. Manufacturers also tend to hype the autonomous features, which might not include any capability to independently select and engage a target (Knight 2022). Certainly, the states that develop and use the new technologies have insisted that they are not autonomous. In informal discussions, Turkey, for example, has insisted that the Kargu-2 is not fully autonomous and that humans are in control (Marijan and Standfield 2021).

Last autumn, Frank Kendall, secretary of the US Air Force, assured an audience that humans were the ultimate decision makers after the Air Force used AI for the first time to help identify a target or targets in “a live operational kill chain” (Miller 2021). No verification was provided.

Seven years of discussions at the United Nations Convention on Certain Conventional Weapons (CCW), the key international forum at which autonomous weapons are examined, have led to a tentative acknowledgment that lethal weapons must remain under human control. However, there is no general agreement on the level of awareness and control that the human operator must maintain over a weapon system.

But a common understanding of this level of control, especially over key functions such as target selection and engagement, is important in establishing and ensuring human accountability. If a significant level of human control of operations cannot be demonstrated, who is to be held accountable for what the system does? Who is to be held accountable for civilians who are hurt or killed and civilian infrastructure that is damaged or destroyed?

If a weapon system that makes independent decisions fails to anticipate scenarios involving civilians, can the software developers be held to account (Sharkey 2012; Winter 2022)? Perhaps, although such an attempt seems likelier to lead to a diffusion of responsibility. A vast number of individuals typically contribute to the coding and training of these systems, generally unaware of one another's work or even of the overall end product. Moreover, as Missy L. Cummings (2019) notes, “currently in the United States, manufacturers of military weapons are indemnified against accidents on the battlefield.”

While developers and users of AWS continue to insist that human operators play a significant role, a number of questions about the nature of that role remain. Does the human operator simply approve decisions made by the system, possibly distanced by both time and space from the targeting event? Or does the system search for targets on the basis of pre-approved target profiles, using sensor inputs to, for example, recognize military-age males holding weapons? In other words, does the human operator have all the necessary information and the ability to make evidence-based decisions that might prevent unintended victims from being targeted? How good are the systems at distinguishing between combatants and non-combatants? Are they as good as humans?

Those who support the development of autonomous systems might say they are better than humans. For some, the humanness of soldiers and operators is the problem that the technology solves. William H. Boothby (2018) argues that “robotic technologies will not be distorted by fear, anger, vengeance, amnesia, tiredness or other peculiarly human fallibilities.”

But Elliot Winter (2022) argues that for such technology to work well, “machines would need to possess advanced skills in observation and recognition as well as sophisticated judgement-making ability.” In his view, these capabilities would be needed to ensure compliance with the IHL principle of distinction: the requirement to distinguish combatants from non-combatants.

However, if humans are affected by emotions and bias, so are the technologies coded by those humans. Researchers have demonstrated that AI technologies tend to disproportionately misrepresent disadvantaged communities (Gebru 2020). Image recognition systems, for example, often misidentify women and racialized minorities.

Another problem is technology that cannot adequately distinguish among the different types of actors in a conflict zone. Some experts fear that groups of individuals might be misidentified; disabled individuals holding assistive devices, for example, might be viewed as soldiers with guns. Will the judgment-making ability of the system be affected by factors such as gender or race (Cummings 2017; Hunt 2019; Ramsay-Jones 2020)? And there is a further complication. As Winter concedes, combatants in a conflict zone can become hors de combat, no longer able or willing to fight. This change might be difficult for machines to interpret.

Whether the technology is even capable of achieving human-like judgment is debatable. AI experts offer widely varying timelines for the achievement of “human-level” cognition; in one survey, respondents on average did not judge it highly likely until around 2075 (Müller and Bostrom 2016). Such a time frame raises the possibility that weapon systems will be brought into service before all the bugs are fixed. In that event, any claims to protect civilians and other non-combatants will be disingenuous at best.

Ensuring Civilian Protection

Strict regulations are needed to ensure that the systems that are developed in the next few decades truly protect civilians and other non-combatants (Boulanin, Bruun and Goussac 2021).

A sufficient degree of human control over weapon systems must be mandated. Any weapon system that depends on sensor input to make decisions about target selection and engagement should be deemed high risk and unacceptable for use. At present, there seems to be broad acceptance among states that autonomous functions should not be added to weapons that are already prohibited.

Weapon platforms that can most easily penetrate civilian areas are most in need of strict regulation. These include autonomous aerial vehicles (drones), loitering munitions and tanks. The greatest concern is not that such systems can autonomously survey an area but that they might be capable of selecting or engaging targets on their own.

Exports of these technologies and systems to countries where they could be used for purposes not intended by exporters must be restricted and, in some cases, monitored. Monitoring is particularly important because of the multi-use nature of much of the technology that would be incorporated into autonomous weapons systems.

What Must Happen

Proponents often portray autonomous weapons as if they exist in a vacuum, separate from the political considerations and military strategies that, in past and contemporary conflicts, have resulted in the targeting of civilians and civilian infrastructure. However, the relevant literature shows that states often strategically target civilians and civilian infrastructure (Downes 2008; Sowers and Weinthal 2021). Thus, AWS could be instructed to target civilians, or could themselves decide to target civilians, in order to achieve a given objective, such as securing a portion of a city.

State actors are still in charge of setting overall objectives, and ultimately they will decide how these new systems are used. Without specific constraints, regulations and practices to ensure civilian protections, it is not clear how autonomous systems add to the protection of civilians. Worse, errors and unanticipated actions by autonomous systems could escalate conflicts in ways not intended by military leadership.

The world needs specific regulations that invoke civilian protection measures whenever weapon systems using AI technologies are deployed. There must be minimum requirements for the degree of human control, as well as restrictions on the types of autonomous weapons systems that can be used and the situations in which they are used. As Paul Scharre (2018) rightly notes, if these systems follow existing laws but operate without an understanding of context, they could take actions that human soldiers and operators guided by moral and ethical codes would not take.

States ultimately must be the ones to agree upon these regulations and to ensure the protection of civilians. So far, agreement at the CCW remains elusive; the more powerful states are reluctant to engage in serious discussions, instead allowing countries such as Russia to act as spoilers and stall all movement toward legally binding instruments. However, one shared reason for putting regulations in place despite current geopolitical realities is the recognition that these weapons will proliferate and could be transferred to non-state groups and terrorist organizations. Preventing this destabilizing and dangerous proliferation is in the interest of all states and should make them more receptive to regulatory efforts.

Previous arms control and disarmament agreements offer important insights into addressing the seemingly unique challenges of regulating AI technologies. For example, the Biological Weapons Convention and the Chemical Weapons Convention provide relevant frameworks for addressing the dual-use nature of AI technology, that is, its civilian and military applications. A focus on behaviours and operational uses of weapons, such as limiting or prohibiting systems that target personnel, could also be pursued.

Without such rules and restrictions, the dangers that unchecked autonomous weapon systems pose to civilians in armed conflict, through their unpredictability and potential for error, will only continue to grow.


Works Cited

Boothby, William H. 2018. “Highly Automated and Autonomous Technologies.” In New Technologies and the Law in War and Peace, edited by William H. Boothby, 137–81. Cambridge, UK: Cambridge University Press.

Boulanin, Vincent, Laura Bruun and Netta Goussac. 2021. Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human-Machine Interaction. Stockholm, Sweden: Stockholm International Peace Research Institute. www.sipri.org/sites/default/files/2021-06/2106_aws_and_ihl_0.pdf.

Cummings, Missy L. 2017. “Artificial Intelligence and the Future of Warfare.” Chatham House, January. www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings.pdf.

———. 2019. “Lethal Autonomous Weapons: Meaningful Human Control or Meaningful Human Certification?” IEEE Technology and Society Magazine 38 (4): 20–26.

Downes, Alexander B. 2008. Targeting Civilians in War. Ithaca, NY: Cornell University Press.

Gebru, Timnit. 2020. “Race and Gender.” In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, 252–69. New York, NY: Oxford University Press.

Hunt, Erin. 2019. “Why ‘killer robots’ are neither feminist nor ethical.” Open Canada, January 22. https://opencanada.org/why-killer-robots-are-neither-feminist-nor-ethical/.

Jeangène Vilmer, Jean-Baptiste. 2021. “A French Opinion on the Ethics of Autonomous Weapons.” War on the Rocks, June 2. https://warontherocks.com/2021/06/the-french-defense-ethics-committees-opinion-on-autonomous-weapons.

Kallenborn, Zachary. 2022. “Russia may have used a killer robot in Ukraine. Now what?” Bulletin of the Atomic Scientists, March 15. https://thebulletin.org/2022/03/russia-may-have-used-a-killer-robot-in-ukraine-now-what/.

Knight, Will. 2022. “Russia's Killer Drone in Ukraine Raises Fears About AI in Warfare.” Wired, March 17. www.wired.com/story/ai-drones-russia-ukraine/.

Lee, Kai-Fu. 2021. “The Third Revolution in Warfare.” The Atlantic, September 11. www.theatlantic.com/technology/archive/2021/09/i-weapons-are-third-revolution-warfare/620013/.

Marijan, Branka and Emily Standfield. 2021. “Kargu-2 debate raises awareness of autonomous weapons.” Project Ploughshares, July 15. https://ploughshares.ca/2021/07/kargu-2-debate-raises-awareness-of-autonomous-weapons/.

Miller, Amanda. 2021. “AI Algorithms Deployed in Kill Chain Target Recognition.” Air Force Magazine, September 21. www.airforcemag.com/ai-algorithms-deployed-in-kill-chain-target-recognition.

Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima and Derek Grossman. 2020. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. Santa Monica, CA: RAND Corporation. www.rand.org/pubs/research_reports/RR3139-1.html.

Müller, Vincent C. and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 555–72. Synthese Library, vol. 376. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-26485-1_33.

Ramsay-Jones, Hayley. 2020. “Racism and Fully Autonomous Weapons.” Submission to the UN Special Rapporteur regarding the thematic report on new information technologies, January 29. www.ohchr.org/sites/default/files/Documents/Issues/Racism/SR/Call/campaigntostopkillerrobots.pdf.

Reaching Critical Will. 2019. “Implementing International Humanitarian Law in the Use of Autonomy in Weapon Systems.” Reaching Critical Will. https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2019/gge/Documents/2019GGE.2-WP5.pdf.

Reuters. 2021. “US has ‘moral imperative’ to develop AI weapons, says panel.” The Guardian, January 26. www.theguardian.com/science/2021/jan/26/us-has-moral-imperative-to-develop-ai-weapons-says-panel.

Scharre, Paul. 2018. Army of None. New York, NY: W. W. Norton.

Sharkey, Noel. 2012. “Killing Made Easy: From Joysticks to Politics.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney and George A. Bekey, 111–28. Cambridge, MA: MIT Press. www.dhi.ac.uk/san/waysofbeing/data/governance-crone-sharkey-2012b.pdf.

Sowers, Jeannie and Erika Weinthal. 2021. “Humanitarian challenges and the targeting of civilian infrastructure in the Yemen war.” International Affairs 97 (1): 157–77. https://doi.org/10.1093/ia/iiaa166.

Vergun, David. 2020. “Experts Predict Artificial Intelligence Will Transform Warfare.” DOD News, June 5. www.defense.gov/News/News-Stories/Article/Article/2209480/experts-predict-artificial-intelligence-will-transform-warfare/.

Winter, Elliot. 2022. “The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law.” Journal of Conflict and Security Law 27 (1): 1–20.

Woodson, Alex. 2020. “Killer Robots, Ethics, & Governance, with Peter Asaro.” Artificial Intelligence & Equality Podcast, February 11. www.carnegiecouncil.org/media/series/global-ethics-review/20200211-killer-robots-ethics-governance-peter-asaro.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Branka Marijan is a CIGI senior fellow and a senior researcher at Project Ploughshares.