AI and the Actual IHL Accountability Gap

November 28, 2022

This essay is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from discussions at a webinar series.


Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.

But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.

Expanding Incidental Civilian Harm

In its attempt to reconcile minimizing needless civilian harm and enabling necessary state action, IHL explicitly permits many acts that cause incidental harm. For example, while intentionally targeting civilians is forbidden,1 an attack that incidentally results in civilian harm — even foreseeable and extensive civilian harm — may be lawful. If the commander authorizing an attack reasonably determined that the “collateral damage” (the expected civilian deaths, civilian injury, destruction of civilian objects and the associated reverberating effects) would not be excessive compared with the anticipated military advantage, the attack satisfies the proportionality requirement. The operation remains “proportional” for legal purposes regardless of how many or how badly civilians are actually injured when it is carried out.

New weapons technologies exacerbate this tension. First, new technologies have narrowed the list of protected civilian objects. Not only has technology facilitated the shift to urban battlespaces (Schmitt 2006), but the incentives to network and link military systems have also resulted in civilian objects such as electrical grids, telecommunications systems and internet services increasingly becoming dual-use and thus possibly targetable infrastructure (Shue and Wippman 2002).

Second, this legal structure incentivizes the development and use of weapons that exploit the grey zones of permitted incidental harm. For example, one of the most effective — insofar as there are no known violations — international weapons regulations of all time is the prohibition on lasers designed to permanently blind (Crootof 2015a).2 The apparent success of this treaty has led some to argue that it is a useful precedent for regulating other technologies, such as autonomous weapon systems (Human Rights Watch and the International Human Rights Clinic 2015). This treaty has been “successful,” however, because it defines the regulated technology so narrowly (Crootof 2015b). In prohibiting the use of “laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision,”3 the treaty does not govern the use of lasers that temporarily blind (such as the dazzlers US forces used in Iraq, which allegedly blinded US service members) (Hambling 2009) or those that have an unintended side effect of causing permanent blindness (such as laser weapons used for other purposes) (Hecht 2002).

Third, while increasingly precise weapons are celebrated for simultaneously minimizing risks to troops and civilians, they also create a precision paradox, given that “the ability to engage in more precise strikes may mean that a military can lawfully engage in more strikes overall, as an operation that might have once risked too much civilian harm and thus failed the proportionality requirement may be rendered newly lawful” (Crootof 2022). The more attacks there are, the more civilian harm is permitted — either as anticipated collateral damage or unforeseen accidental harm.

Expanding Accidental Civilian Harm

There have always been accidents in the fog of war, but new military technologies both introduce new manifestations of familiar sources of harmful error and magnify their damage potential.

Take weapons error. Just as every physical system is subject to design and manufacturing defects, every code-based system is subject to bugs — programming errors that cause unanticipated results. In armed conflict, these errors can be deadly. During a 2007 South African military exercise, for example, a software glitch allegedly resulted in an anti-aircraft cannon malfunction that killed nine soldiers and wounded 14 others (Shachtman 2007). More recently, “a technical malfunction led to the accidental firing of a missile” by India into Pakistan (BBC News 2022). AI-based systems are complex, and, per normal accident theory, the more complex the system, the greater the likelihood of an accident (Perrow 1984). As militaries incorporate AI and other algorithms into their systems, they are introducing other errors unique to this technology (Scharre and Horowitz 2018) as well as new vulnerabilities that can be hacked, gamed, or otherwise exploited by adversaries (Brundage et al. 2018).

Or consider how new military technologies exacerbate certain types of user error. While some technologies may extend a user’s abilities — say, by enabling more considered or informed decision making — many also risk making users’ jobs more difficult. One of the “ironies of automation” is that the more automated a hybrid system is, the more important and difficult the human’s role in the system becomes (Bainbridge 1983; Jones 2015). As the “easier” tasks are delegated to machine intelligence, not only is the human expected to handle more strenuous or difficult tasks more often — without the breaks provided by working on easier ones — but they must also do so while simultaneously overseeing a system that introduces automation errors and new vulnerabilities. Rather than relieving burdens on the human in the loop, some of the most highly automated systems require the most attentive and diversely skilled human operators.


Further, even the best-trained and most competent humans in hybrid systems are subject to errors unique to that role, including automation bias (where humans defer overmuch to machine assessments), deteriorated situational awareness (a diminished understanding of the operational environment) and skill fade (the loss of skills due to a lack of practice). There are plenty of examples of overtrust in algorithmic decision makers fostering accidents. People have followed their GPS navigation directions so faithfully that they drove into ponds, lakes or bays (Kircher 2018) and even into the ocean (Fujita 2012); tragically, at least one incident was fatal (Conklin 2022). In armed conflict, overtrust can have lethal consequences. In 2003, on three separate occasions, the US Patriot missile defense system misclassified friendly planes as ballistic missiles. In each case, the operator ordered the system to engage the perceived threat, despite having information available that contradicted the system’s assessment, resulting in painfully avoidable fratricides (Lewis 2018). As a final report on Patriot system performance concluded, the system was “a poor match to the conditions” in part because “the protocol was largely automatic, and the operators were trained to trust the system’s software” (Defense Science Board 2005, 2).

Nor is the proposal to train users to undertrust systems a silver bullet (Hao 2021). If users undertrust systems too much, we risk overinvestment in useless or even harmful infrastructure. The captain of the USS John S. McCain quickly learned not to trust a glitchy new navigation system, and so he often used it in backup manual mode. Unfortunately, this undertrust created new risks, as the manual mode disabled built-in safeguards, thereby contributing to a 2017 collision that killed 10 sailors and injured 48 others — the US Navy’s worst accident at sea in the past 40 years (Miller et al. 2019).

Not only do new military technologies introduce new sources of errors, they may also magnify the harmful impact of accidents, insofar as they facilitate actions at superhuman speed and unprecedented scale. For example, the possibility that integrating algorithmic decision-assistants will speed up the OODA (observe, orient, decide, act) loop is often touted as a benefit. Certainly, in defensive manoeuvres, anti-ballistics and dogfights, faster decisions offer a strategic advantage. However, in many other situations, such as urban conflict, slower but better decisions are preferable (Koch and Schoonhoven 2022).

Meanwhile, nuclear weapons, chemical weapons, bioweapons and other weapons of mass destruction raise the possibility of massively destructive accidents — and the numerous disasters and close calls that have already occurred are haunting reminders of how probable that possibility is (United States Nuclear Regulatory Commission 2018; Schaper 2020; Bennetts 2017). In addition to increasing the risk of mass destruction, new technologies can scale the harms of errors: networked systems may allow once-isolated incidents to propagate throughout a system and simultaneously minimize opportunities for oversight.

An Inherent Accountability Gap

Despite this parade of horribles, this is not a situation where new technology is creating a new problem. In expanding unintended civilian harms — both incidental and accidental — new technology has made an older problem more salient. Namely, these examples all highlight the fact that there is no international accountability mechanism for most unintended civilian harms in armed conflict.

There are some legal accountability mechanisms, but they are insufficient for addressing unintended and thus lawful civilian harms. Individuals who intentionally target civilians or otherwise commit serious violations of IHL can be held criminally liable for war crimes — but, as highlighted by autonomous weapon systems and AI-enabled systems, it is possible for a system to take harmful action without anyone acting with the requisite mens rea for criminal liability. States that engage in internationally wrongful acts can be held responsible under the law of state responsibility — but if an act is lawful (and collateral damage and accidental harms are often lawful), it is not internationally wrongful, and the law of state responsibility is not implicated.

The law of armed conflict is designed to minimize, rather than prevent, the likelihood of causing needless civilian harm. It still permits a lot of incidental civilian harm. Under international law, no entity is legally accountable for the harmful consequences of lawful acts in armed conflict.

That should be said again.

Under international law, no entity is legally accountable for the harmful consequences of lawful acts in armed conflict.

Instead, unintended civilian harms lie where they fall — on innocent civilians. And, in failing to establish a deterrent, IHL arguably facilitates unintended civilian harm (Lieblich 2019).

New Accountability Mechanisms

By throwing this accountability gap into sharper relief, tech-enabled conduct highlights the need for new accountability mechanisms.

In response to concerns about AI’s potential for accidental harm and the IHL accountability gap, some have proposed relatively tech-specific regulations and policy guidance. For example, the US Department of Defense (DoD) recently adopted a set of ethical principles for the use of AI, which emphasize that DoD personnel must remain “responsible for the development, deployment, and use of AI capabilities” (DoD 2020a).

Others focus instead on revising the legal regimes themselves. Militaries, military advisers and civilian advocates have made moral and strategic arguments for voluntarily providing amends to harmed civilians (DoD 2020b; Center for Civilians in Conflict and Columbia Law School Human Rights Institute 2020; Kolenda et al. 2016, 32–33; Lewis 2013). Some legal scholars discuss how to improve the amends process (Wexler and Robbennolt 2017); others propose reinterpretations or revisions of existing rules that would expand state liability (Ronen 2009), individual criminal liability (Bo 2021), command responsibility (Ohlin 2016; Krupiy 2018) and domestic civil causes of action (Abraham 2019) for civilian harms in armed conflict.

This author has argued for the creation of a new “war torts” accountability regime, which would require states to pay compensation for both lawful and unlawful acts in armed conflict that cause civilian harm, either when they employ autonomous weapon systems (Crootof 2016) or more generally (Crootof 2022). Such a regime might be structured as an adversarial tribunal, where states are held strictly liable for the harmful consequences of their acts in armed conflict; as a no-fault system, where states would pay into a victims’ fund that is then distributed to claimants; or as some hybrid of the two, which would attempt to marry the best aspects of both (Crootof, forthcoming 2023).

Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.

  1. Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977 (entered into force 7 December 1978).
  2. Additional Protocol to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (Protocol IV, entitled Protocol on Blinding Laser Weapons), 13 October 1995, No 22495 (entered into force 30 July 1998).
  3. Ibid.

Works Cited

Abraham, Haim. 2019. “Tort Liability for Belligerent Wrongs.” Oxford Journal of Legal Studies 39 (4): 808–33.

Bainbridge, Lisanne. 1983. “Ironies of Automation.” Automatica 19 (6): 775–79.

BBC News. 2022. “India accidentally fires missile into Pakistan.” BBC News, March 11.

Bennetts, Marc. 2017. “Soviet officer who averted cold war nuclear disaster dies aged 77.” The Guardian, September 18.

Bo, Marta. 2021. “Autonomous Weapons and the Responsibility Gap in light of the Mens Rea of the War Crime of Attacking Civilians in the ICC Statute.” Journal of International Criminal Justice 19 (2): 275–99.

Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy and Dario Amodei. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. February.

Center for Civilians in Conflict and Columbia Law School Human Rights Institute. 2020. In Search of Answers: U.S. Military Investigations and Civilian Harm.

Conklin, Audrey. 2022. “North Carolina man dead after following GPS to destroyed bridge that dropped into water.” Fox News, October 8.

Crootof, Rebecca. 2015a. “The Killer Robots Are Here: Legal and Policy Implications.” Cardozo Law Review 36: 1837–1915.

———. 2015b. “Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems.” Lawfare (blog), November 24.

———. 2016. “War Torts: Accountability for Autonomous Weapons.” University of Pennsylvania Law Review 164 (6): 1347–1402.

———. 2022. “War Torts.” New York University Law Review 97 (4): 101–73.

———. Forthcoming 2023. “Implementing War Torts.” Virginia Journal of International Law 63.

Defense Science Board. 2005. “Report of the Defense Science Board Task Force on Patriot System Performance: Report Summary.” January. Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics.

DoD. 2020a. “DOD Adopts Ethical Principles for Artificial Intelligence.” News release, February 24.

———. 2020b. “Development of a DoD Instruction on Minimizing and Responding to Civilian Harm in Military Operations.” Memorandum, January 31.

Fujita, Akiko. 2012. “GPS Tracking Disaster: Japanese Tourists Drive Straight into the Pacific.” ABC News, March 16.

Hambling, David. 2009. “Soldiers Blinded, Hospitalized by Laser ‘Friendly Fire.’” Wired, March 30.

Hecht, Jeff. 2002. “Fighter plane’s laser may blind civilians.” New Scientist, July 24.

Human Rights Watch and the International Human Rights Clinic. 2015. “Precedent for Preemption: The Ban on Blinding Lasers as a Model for Killer Robots Prohibition.” Memorandum to Convention on Conventional Weapons Delegates. November.

Jones, Meg L. 2015. “The Ironies of Automation Law: Tying Policy Knots with Fair Automation Practices Principles.” Vanderbilt Journal of Entertainment & Technology Law 18 (1): 77–134.

Kircher, Madison Malone. 2018. “Yet Another Person Listens to GPS and Drives Car into Lake.” Intelligencer, January 24.

Koch, Bernhard and Richard Schoonhoven, eds. 2022. Emerging Military Technologies: Ethical and Legal Perspectives. International Studies on Military Ethics, vol. 8. Leiden, The Netherlands: Brill Nijhoff.

Kolenda, Christopher D., Rachel Reid, Chris Rogers and Marte Retzius. 2016. The Strategic Costs of Civilian Harm: Applying Lessons from Afghanistan to Current and Future Conflicts. Open Society Foundations. June.

Krupiy, Tetyana (Tanya). 2018. “Regulating a Game Changer: Using a Distributed Approach to Develop an Accountability Framework for Lethal Autonomous Weapon Systems.” Georgetown Journal of International Law 50 (1): 45–112.

Lewis, Larry. 2013. “Reducing and Mitigating Civilian Casualties: Enduring Lessons.” Joint and Coalition Operational Analysis. April 12.

———. 2018. Redefining Human Control: Lessons from the Battlefield for Autonomous Weapons. March. Center for Autonomy and AI.

Lieblich, Eliav. 2019. “The Facilitative Function of Jus in Bello.” The European Journal of International Law 30 (1): 321–40.

Miller, Christian T., Megan Rose, Robert Faturechi and Agnes Chang. 2019. “Collision Course.” ProPublica, December 20.

Ohlin, Jens David. 2016. “The Combatant’s Stance: Autonomous Weapons on the Battlefield.” International Law Studies 92 (1): 1–30.

Perrow, Charles. 1984. Normal Accidents: Living with High-Risk Technologies. New York, NY: Basic Books.

Ronen, Yael. 2009. “Avoid or Compensate? Liability for Incidental Injury to Civilians Inflicted During Armed Conflict.” Vanderbilt Journal of Transnational Law 42 (1): 181–225.

Schaper, David. 2020. “Congressional Inquiry Faults Boeing And FAA Failures For Deadly 737 Max Plane Crashes.” NPR, September 16.

Scharre, Paul and Michael Horowitz. 2018. “Artificial Intelligence: What Every Policymaker Needs to Know.” Center for a New American Security. June.

Schmitt, Michael N. 2006. “War, Technology, and the Law of Armed Conflict.” International Law Studies 82: 137–82.

Shachtman, Noah. 2007. “Robot Cannon Kills 9, Wounds 14.” Wired, October 18.

Shue, Henry and David Wippman. 2002. “Limiting Attacks on Dual-Use Facilities Performing Indispensable Civilian Functions.” Cornell International Law Journal 35 (3): 559–79.

United States Nuclear Regulatory Commission. 2018. “Backgrounder on the Three Mile Island Accident.”

Wexler, Lesley and Jennifer K. Robbennolt. 2017. “Designing Amends for Lawful Civilian Casualties.” The Yale Journal of International Law 42 (1): 121–85.

The opinions expressed in this article are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Rebecca Crootof is an associate professor of law at the University of Richmond School of Law. Her primary areas of research include technology law, international law and torts.