Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.
But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.
Expanding Incidental Civilian Harm
In its attempt to reconcile minimizing needless civilian harm and enabling necessary state action, IHL explicitly permits many acts that foster incidental harm. For example, while intentionally targeting civilians is forbidden,1 an attack that incidentally results in civilian harm — even foreseeable and extensive civilian harm — may be lawful. If the commander authorizing an attack reasonably determined that the “collateral damage” (the expected civilian deaths, civilian injury, destruction of civilian objects and the associated reverberating effects) would not be excessive compared with the anticipated military benefit, the attack satisfies the proportionality requirement. The operation remains “proportional” for legal purposes regardless of how many or how badly civilians are actually injured when it is carried out.
New weapons technologies exacerbate this tension. First, new technologies have narrowed the list of protected civilian objects. Not only has technology facilitated the shift to urban battlespaces (Schmitt 2006), the incentives to network and link military systems have resulted in civilian objects such as electrical grids, telecommunications systems and internet services increasingly becoming dual-use and thus possibly targetable infrastructure (Shue and Wippman 2002).
Second, this legal structure incentivizes the development and use of weapons that exploit the grey zones of permitted incidental harm. For example, one of the most effective — insofar as there are no known violations — international weapons regulations of all time is the prohibition on lasers designed to permanently blind (Crootof 2015a).2 The apparent success of this treaty has led some to argue that it is a useful precedent for regulating other technologies, such as autonomous weapon systems (Human Rights Watch and the International Human Rights Clinic 2015). This treaty has been “successful,” however, because it defines the regulated technology so narrowly (Crootof 2015b). In prohibiting the use of “laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision,”3 the treaty does not govern the use of lasers that temporarily blind (such as the dazzlers US forces used in Iraq, which allegedly blinded US service members) (Hambling 2009) or those that have an unintended side effect of causing permanent blindness (such as laser weapons used for other purposes) (Hecht 2002).
Third, while increasingly precise weapons are celebrated for simultaneously minimizing risks to troops and civilians, they also create a precision paradox, given that “the ability to engage in more precise strikes may mean that a military can lawfully engage in more strikes overall, as an operation that might have once risked too much civilian harm and thus failed the proportionality requirement may be rendered newly lawful” (Crootof 2022). The more attacks there are, the more civilian harm is permitted — either as anticipated collateral damage or unforeseen accidental harm.
Expanding Accidental Civilian Harm
There have always been accidents in the fog of war, but new military technologies both introduce new manifestations of familiar sources of harmful error and magnify their damage potential.
Take weapons error. Just as every physical system is subject to design and manufacturing defects, every code-based system is subject to bugs — programming errors that cause unanticipated results. In armed conflict, these errors can be deadly. During a 2007 South African military exercise, for example, a software glitch allegedly resulted in an anti-aircraft cannon malfunction that killed nine soldiers and wounded 14 others (Shachtman 2007). More recently, “a technical malfunction led to the accidental firing of a missile” by India into Pakistan (BBC News 2022). AI-based systems are complicated, and, per normal accident theory, the more complicated the system, the greater the likelihood of an accident (Perrow 1984). As militaries incorporate AI and other algorithms into their systems, they are introducing errors unique to these technologies (Scharre and Horowitz 2018) as well as new vulnerabilities that can be hacked, gamed or otherwise exploited by adversaries (Brundage et al. 2018).
Or consider how new military technologies exacerbate certain types of user error. While some technologies may extend a user’s abilities — say, by enabling more considered or informed decision making — many also risk making users’ jobs more difficult. One of the “ironies of automation” is that the more automated a hybrid system is, the more important and difficult the human’s role in the system becomes (Bainbridge 1983; Jones 2015). As the “easier” tasks are delegated to machine intelligence, not only is the human expected to handle more strenuous or difficult tasks more often — without the breaks provided by working on easier ones — but they must do so while simultaneously overseeing a system that introduces automation errors and new vulnerabilities. Rather than relieving burdens on the human in the loop, some of the most highly automated systems require the most attentive and diversely skilled human operators.
Further, even the best-trained and most competent humans in hybrid systems are subject to errors unique to that role, including automation bias (where humans defer overmuch to machine assessments), deteriorated situational awareness (a lessened ability to draw on a particular piece of knowledge in a particular situation) and skill fade (the loss of skills due to a lack of practice). There are plenty of examples of overtrust in algorithmic decision makers fostering accidents. People have followed their GPS navigation directions so faithfully that they drove into ponds, lakes or bays (Kircher 2018) and even into the ocean (Fujita 2012); tragically, at least one incident was fatal (Conklin 2022). In armed conflict, overtrust can have lethal consequences. In 2003, on three separate occasions, the US defensive Patriot system misclassified friendly planes as ballistic missiles. In each case, the operator ordered the system to engage the perceived threat, despite having information available that contradicted the system’s assessment, resulting in painfully avoidable fratricides (Lewis 2018). As a final report on Patriot system performance concluded, the system was “a poor match to the conditions” in part because “the protocol was largely automatic, and the operators were trained to trust the system’s software” (Defense Science Board 2005, 2).
Nor is the proposal to train users to undertrust systems a silver bullet (Hao 2021). If users undertrust systems too much, we risk overinvestment in useless or even harmful infrastructure. The captain of the USS John S. McCain quickly learned not to trust a glitchy new navigation system, and so he often used it in backup manual mode. Unfortunately, this undertrust created new risks, as the manual mode disabled built-in safeguards, thereby contributing to a 2017 collision that killed 10 sailors and injured 48 others — the US Navy’s worst accident at sea in the past 40 years (Miller et al. 2019).
Not only do new military technologies introduce new sources of error, they may also magnify the harmful impact of accidents, insofar as they facilitate actions at superhuman speed and unprecedented scale. For example, the possibility that integrating algorithmic decision-assistants will speed up the OODA (observe, orient, decide, act) loop is often touted as a benefit. Certainly, in defensive maneuvers, anti-ballistics and dogfights, faster decisions offer a strategic advantage. However, in many other situations, such as urban conflict, slower but better decisions are preferable (Koch and Schoonhoven 2022).
Meanwhile, nuclear weapons, chemical weapons, bioweapons and other weapons of mass destruction raise the possibility of massively destructive accidents — and the numerous disasters and close calls that have already occurred are haunting reminders of how probable that possibility is (United States Nuclear Regulatory Commission 2018; Schaper 2020; Bennetts 2017). In addition to increasing the risk of mass destruction, new technologies can scale the harms of errors: networked systems may allow once-isolated incidents to propagate throughout a system and simultaneously minimize opportunities for oversight.
An Inherent Accountability Gap
Despite this parade of horribles, this is not a situation where new technology is creating a new problem. In expanding unintended civilian harms — both incidental and accidental — new technology has made an older problem more salient. Namely, these examples all highlight the fact that there is no international accountability mechanism for most unintended civilian harms in armed conflict.
There are some legal accountability mechanisms, but they are insufficient for addressing unintended and thus lawful civilian harms. Individuals who intentionally target civilians or otherwise commit serious violations of IHL can be held criminally liable for war crimes — but, as highlighted by autonomous weapon systems and AI-enabled systems, it is possible for a system to take harmful action without anyone acting with the requisite mens rea for criminal liability. States that engage in internationally wrongful acts can be held responsible under the law of state responsibility — but if an act is lawful (and collateral damage and accidental harms are often lawful), it is not internationally wrongful, and the law of state responsibility is not implicated.
The law of armed conflict is designed to minimize, rather than prevent, the likelihood of causing needless civilian harm. It still permits a lot of incidental civilian harm. Under international law, no entity is legally accountable for the harmful consequences of lawful acts in armed conflict.
That should be said again.
Under international law, no entity is legally accountable for the harmful consequences of lawful acts in armed conflict.
Instead, unintended civilian harms lie where they fall — on innocent civilians. And, in failing to establish a deterrent, IHL arguably facilitates unintended civilian harm (Lieblich 2019).
New Accountability Mechanisms
By throwing this accountability gap into sharper relief, tech-enabled conduct highlights the need for new accountability mechanisms.
In response to concerns about AI’s potential for accidental harm and the IHL accountability gap, some have proposed relatively tech-specific regulations and policy guidance. For example, the US Department of Defense (DoD) recently adopted a set of ethical principles for the use of AI, which emphasize that DoD personnel must remain “responsible for the development, deployment, and use of AI capabilities” (DoD 2020a).
Others focus instead on revising the legal regimes themselves. Militaries, military advisers and civilian advocates have made moral and strategic arguments for voluntarily providing amends to harmed civilians (DoD 2020b; Center for Civilians in Conflict and Columbia Law School Human Rights Institute 2020; Kolenda et al. 2016, 32–33; Lewis 2013). Some legal scholars discuss how to improve the amends process (Wexler and Robbennolt 2017); others propose reinterpretations or revisions of existing rules that would expand state liability (Ronen 2009), individual criminal liability (Bo 2021), command responsibility (Ohlin 2016; Krupiy 2018) and domestic civil causes of action (Abraham 2019) for civilian harms in armed conflict.
This author has argued for the creation of a new “war torts” accountability regime, which would require states to pay compensation for both lawful and unlawful acts in armed conflict that cause civilian harm, either when they employ autonomous weapon systems (Crootof 2016) or more generally (Crootof 2022). Such a regime might be structured as an adversarial tribunal, where states are held strictly liable for the harmful consequences of their acts in armed conflict; as a no-fault system, where states would pay into a victims’ fund that is then distributed to claimants; or as some hybrid of the two, which would attempt to marry the best aspects of both (Crootof, forthcoming 2023).
Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.