International Legal Regulation of Autonomous Technologies

November 16, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

The advent of new technologies always prompts questions concerning their legality, and this is certainly true with respect to autonomous technologies, including those using varying degrees of artificial intelligence (AI). As autonomous solutions are developed and employed, countries need to ensure that their use aligns with established moral and ethical principles, which are often enshrined in both domestic and international legislation. The basic legal dilemma concerning any new technology is ascertaining whether existing law is capable of regulating it in conformity with those principles and, if not, what new legal instruments are necessary to meet that objective. This essay explores that question in the context of autonomous technologies.

Innovation in autonomy is being driven simultaneously by civilian and national security (including military) demands. Commercial autonomous technology for civilian application is primarily subject to domestic legal regulation, although international law can, and is likely to, play some role in its governance. Autonomous military technologies are predominantly developed for employment in an international environment during armed conflict; in that setting, international law is prominent. The essay begins with a discussion of the prospect for international legal regulation of autonomous civilian technologies. It then turns to the challenges that employing autonomy in national security and defence contexts, including on the battlefield, presents for international law.

Regulation for Civilian Purposes

Legislatures across the globe should be preparing to amend their laws, and possibly adopt new ones, governing autonomous technologies. Some applications, such as aircraft autopilot systems and industrial robots, have been employed for decades, albeit in strictly controlled environments with robust safeguards in place. In the future, technologies with varying degrees of autonomy will become pervasive in many societies. Driverless public transit systems, self-driving cars and AI algorithms in medical diagnosis are leading this innovation, with countless other use cases bound to follow. Inevitably, domestic laws will require some degree of revision to ensure adequate regulation of such systems.

For the present, these new technologies are primarily subject to industry self-regulation, with several large companies having adopted internal policies relating to the use of automation in their products and services (for examples, see International Committee of the Red Cross 2019, 25–26). The experience states have had with current digital technologies offers valuable lessons in this regard; when the private sector is left to self-regulate, friction between companies and governments is likely to arise. Criticism by states directed at Twitter and Facebook about their handling of online content, such as fake news and live streaming of violent incidents, or the susceptibility of their algorithms to manipulation and biases, is illustrative. That these companies have called on governments to specify through regulation the kinds of action expected of them is therefore unsurprising (Press Association 2019; Rudgard and Cook 2019).

The extent to which industry self-regulation can govern more advanced autonomous technologies to the satisfaction of governments, civil society and the public generally is limited. Google itself has acknowledged that “self- and co-regulatory approaches will remain the most effective practical way to address and prevent AI related problems in the vast majority of instances, within the boundaries already set by sector-specific regulation,” but that “there are some instances where additional rules would be of benefit” and “relying on companies alone to set standards is inappropriate” (Google, n.d., 29; Evans 2020). Accordingly, it is sensible for governments to engage with the private sector and collaboratively work toward optimal governance regimes, as opposed to intervening only when unwanted consequences of this new technology have begun to manifest.

Regulatory rules, rather than legislative solutions, are likely to emerge first, as has been the case with other novel technologies. In the field of nanotechnology, for example, several European countries have adopted regulations that impose reporting requirements on companies that manufacture, import or distribute nanomaterials.1 In the field of autonomy, we can likewise expect regulations tackling discrete issues, which at some point will be followed by legislative action, whether through amendments to existing laws or the adoption of new ones (this is without prejudice to the adoption of so-called enabling legislation, that is, legislation that grants the power to adopt regulations to a certain person or entity, such as a government minister).

Public international law, by contrast, will largely play a bystander’s role insofar as commercial autonomous solutions meant for civilian use are concerned. However, the international community may at some point feel the need to harmonize countries’ domestic laws to ensure that the internal legal regulation of these commercial technologies is consistent across borders. The legal mechanism for harmonization would be the adoption of a so-called uniform law treaty that obligates states that are parties to the instrument to legislate domestically with respect to their criminal, civil or administrative laws. For example, such a treaty could prescribe uniform safety standards, liability rules, certification schemes, data management processes, human supervision requirements over the use of the technology, fail-safe mechanisms to be put in place, operational constraints, rules regarding bias, and criminal offences involving autonomous technologies.

The most likely starting point for international legal regulation along these lines would be the European Union, for it is the only international organization with the institutional capacity, pre-existing mandate and political appetite to adopt such far-reaching binding rules (in fact, it has already taken preliminary steps toward intra-community regulation of AI; see European Commission 2020). Although formally the European Union only has the authority to legislate vis-à-vis its member states, the effect of any resulting regulation would extend beyond the organization’s borders. The situation might be analogous to the European Union’s General Data Protection Regulation — to the extent foreign companies offering products and services in the field of automation want to operate in the EU market, they would be obliged to follow applicable EU rules. This presents a strategic opportunity for the European Union, for it is uniquely well-positioned to serve as a pioneer in this area, thereby shaping the conversation as to the appropriate legal and regulatory regime for autonomous technologies.

Regulation of National Security and Defence-related Autonomous Technologies

It is widely accepted that autonomy, in particular AI, will revolutionize warfare. Examples of contexts in which autonomy is and will be employed include information processing, notably intelligence analysis; unmanned weapon systems; realistic military training; psychological warfare; and military command and control. It is therefore unsurprising that great-power competition for supremacy in military autonomous technologies and AI is under way.

Because warfare is governed by a dense international legal framework, many rules already exist that regulate the use of autonomous technologies in war. These rules form a regime of international law known as international humanitarian law (IHL), also known as the law of armed conflict.

Scholarship on the interplay between these new technologies and IHL has primarily focused on the use of lethal autonomous weapons (Schmitt and Thurnher 2013; O’Connell 2014; Sassòli 2014; Geiss 2015). At the state level, a group of governmental experts convened under a UN umbrella has confirmed that “international humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems” (Group of Governmental Experts 2017, para. 16(b)), which logically leads to the conclusion that other military uses of autonomous technologies are likewise governed by this subfield of international law.

IHL, in particular its rules governing the conduct of hostilities (that is, the way in which a war is waged), is relevant insofar as the international community has not prohibited particular means or methods of warfare. Presently, no automated or autonomous technologies have been banned, although states have been under political, scholarly and civil society pressure to prohibit fully autonomous lethal weapons since the launch of the “Ban Killer Robots”2 movement. For instance, the European Parliament in 2018 adopted a resolution in which it urged the European Commission, individual member states and the European Council to “work towards the start of international negotiations on a legally binding instrument prohibiting lethal autonomous weapon systems” (European Parliament 2018, para. 3). In the absence of such a treaty, existing IHL rules govern their use.

The issue of lethal autonomous weapon systems aside, it is clear that autonomous technologies will increasingly find military usage. It is equally clear that applying the pre-automation, pre-autonomy rules of IHL to those technologies is not without challenges. Many existing debates over how IHL rules apply extend equally to autonomous systems, as in the case of questions concerning the permissibility of directing non-destructive military operations against civilian objects3 or the geographical boundaries of the applicability of humanitarian law.4

Yet, issues unique to autonomy are bound to arise as well. For example, a cross-cutting issue in IHL, as well as in related fields of international law such as international criminal law, concerns accountability. If, for instance, autonomous cyber capabilities unexpectedly cause harm to civilians or damage to civilian objects, questions of responsibility attach. Under IHL, states are responsible for ensuring their weapon systems are used in a manner consistent with the conduct of hostilities rules. This obligation raises difficult questions for weapon systems that operate autonomously, perhaps even using AI to select targets. If the armed forces using a system cannot assess the harm likely to be caused to the civilian population or civilian objects by an autonomous system with the requisite degree of reliability, whatever the correct standard of likelihood is, those armed forces are using the weapon indiscriminately in the battlespace. This would constitute a breach of IHL by the state employing the autonomous weapon system.5
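To make this reasoning concrete, the sketch below encodes the logic as a pre-engagement “gate” that refuses to act when the system cannot reliably assess likely harm. It is purely illustrative: the field names, the single numeric reliability score and the thresholds are assumptions introduced here for exposition, not features of any actual weapon system or settled legal standards.

```python
# Purely illustrative sketch of the legal logic above; all fields, names
# and thresholds are expository assumptions, not real system parameters.
from dataclasses import dataclass

@dataclass
class Assessment:
    is_military_objective: bool    # distinction: is the target lawful at all?
    expected_civilian_harm: float  # anticipated incidental harm (arbitrary units)
    military_advantage: float      # anticipated concrete military advantage
    reliability: float             # system's self-assessed reliability, 0 to 1

def may_engage(a: Assessment, min_reliability: float = 0.99) -> bool:
    """Refuse engagement unless the assessment itself is reliable enough;
    acting on an unreliable assessment would be indiscriminate use."""
    if a.reliability < min_reliability:
        return False  # cannot reliably assess likely harm -> hold fire
    if not a.is_military_objective:
        return False  # distinction: civilians and civilian objects are off-limits
    # Crude numeric stand-in for proportionality: incidental harm must not be
    # excessive relative to the anticipated military advantage.
    return a.expected_civilian_harm <= a.military_advantage

# A confident assessment of a lawful target passes; an unreliable one does not.
print(may_engage(Assessment(True, 0.0, 1.0, 0.995)))  # True
print(may_engage(Assessment(True, 0.0, 1.0, 0.5)))    # False
```

The point is not that legality reduces to a threshold check, but that an autonomous system must be able to surface assessments of this kind for legal evaluation in the first place; otherwise its use risks being indiscriminate by default.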

Furthermore, international criminal law imposes individual criminal responsibility for war crimes, which include directing attacks against civilian objects with “intent” and “knowledge.”6 Questions about how criminal tribunals would apply these notions to encompass civilian damage caused by autonomous systems in circumstances such as those mentioned above would loom large in any criminal prosecution.

Autonomy is also being used for national security purposes, both benign and malicious, beyond the battlefield. As malicious uses are exposed, they often raise legal and ethical alarm bells. The highest-profile case of a government resorting to these technologies to surveil and identify individuals is the Chinese government’s continuous monitoring of the Uighur Muslim minority (Taddonio 2019), a case that set a precedent for other authoritarian governments to employ advanced technologies for illicit purposes. Adding to the complexity of the situation is commercial opportunism. The case of Clearview AI — a facial recognition software company that automatically scrapes images from the internet to form a database of several billion files, thereby enabling facial recognition (Hill 2020) — is a telling example of how the private sector, if left to self-regulate, risks causing societal harm that is not necessarily outweighed by the legitimate use of its services for national security and public order purposes. These and other cases demonstrate the potential negative effects of autonomy and automation, including the erosion of human rights (such as the right to privacy, freedom of the press and freedom of assembly). They also highlight the need to pay even greater attention to preserving and safeguarding the rule of law and basic moral and ethical values in the face of technological developments.
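Part of what makes the Clearview AI example telling is how low the technical barrier has become. The following minimal sketch, assuming the widely used open-source face_recognition Python library and placeholder file paths, matches a probe photo against a small collection of previously gathered images. It is not Clearview AI’s system; it simply shows that the underlying capability is commodity technology, which is precisely why its governance cannot be left to goodwill alone.

```python
# A minimal sketch of commodity face matching, assuming the open-source
# face_recognition library (dlib-based). File paths are placeholders.
import face_recognition

# Build a tiny "database" of face encodings from previously collected images.
known_paths = ["collected_001.jpg", "collected_002.jpg"]  # placeholder paths
known_encodings = []
for path in known_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first detected face, if any
        known_encodings.append(encodings[0])

# Compare a new photo against the collection.
probe = face_recognition.load_image_file("probe.jpg")  # placeholder path
probe_encodings = face_recognition.face_encodings(probe)
if probe_encodings:
    matches = face_recognition.compare_faces(known_encodings, probe_encodings[0])
    print("Match found:", any(matches))
```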

Conclusion

New technologies present normative challenges to both domestic and international law, in particular with regard to the suitability of pre-existing rules. Certain technology-specific issues are bound to arise that will require regulatory and legislative action. The resulting normative evolution will first occur in the domestic setting, for international law-making is a relatively slow process, especially in fields with a national security nexus.

In this process, states will face many challenges. A fundamental difficulty stems from the dual-use nature of autonomous solutions. Accordingly, domestic regulators and legislatures, as well as states engaged in the interpretation and adoption of international law, will need to tread carefully, ensuring, on the one hand, that the rules and interpretive positions they adopt do not stifle innovation while guaranteeing, on the other, that they effectively prevent malicious uses of the technology. Sensible normative frameworks must be collaborative; governments should therefore work with industry and civil society in adopting fit-for-purpose governance regimes, while states should work together to fashion rules that advance shared values.

A more practical challenge is that it is difficult to regulate something that one does not fully understand. Autonomous technologies are in their infancy, and predicting scientific developments in this field — even in the near term — is difficult, if not impossible. Any new laws and regulations will need to be sufficiently general so as not to become outdated quickly, but also not so vague that they provide no meaningful guidance. The difficulty of this undertaking inevitably means that no overarching area-specific rule set will be adopted in the near future — neither domestic legal acts governing autonomous technologies writ large, nor an international treaty on autonomy as such. Instead, we may expect discrete rules governing relatively specific aspects of autonomous technologies.

  1. See, for example, European Union Observatory for Nanomaterials, National Reporting Schemes, https://euon.echa.europa.eu/national-reporting-schemes.
  2. For more information, see www.stopkillerrobots.org/.
  3. The principle of distinction is set forth in, inter alia, Article 48 of Additional Protocol I to the Geneva Conventions (Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, 8 June 1977, 1125 UNTS 3): “In order to ensure respect for and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.” The provision is generally understood as prohibiting the parties’ “attacks,” in the sense of Article 49(1) of Additional Protocol I, against the civilian population, individual civilians and civilian objects. A degree of uncertainty exists as to whether military operations that do not amount to “attacks” can also be considered in violation of the principle of distinction.
  4. For instance, whether IHL’s applicability would extend to the territories of other (non-adjacent) states in an armed conflict between government armed forces and an organized armed group is a matter of controversy. If, for example, a member of the organized armed group travelled to an overseas country and launched a destructive autonomous cyberspace operation against the state that they are fighting, the issue arises as to whether that person and the information technology (IT) equipment that the person is using are subject to IHL. Some experts are of the view that in such a circumstance IHL continues to apply vis-à-vis that person and the equipment, in which case killing that person, and damaging or destroying the equipment (for example, by way of remote cyber operations), would not constitute a breach of IHL. This is because members of organized armed groups, as well as any objects qualifying as military objectives (in this case, the IT equipment), are targetable during an armed conflict. Others posit that IHL does not follow a person and objects in this manner and that the situation would instead be governed by international human rights law. It should also be noted that a scenario of this type involves other complex legal issues, for instance, the legal basis for the state engaging in lethal or destructive operations in another state’s territory.
  5. Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, 8 June 1977, 1125 UNTS 3, Art. 51(4)(a).
  6. Rome Statute of the International Criminal Court, 17 July 1998, 2187 UNTS 90, Arts. 8(2)(b)(ii) and 30.

Works Cited

European Commission. 2020. “On Artificial Intelligence — A European approach to excellence and trust.” COM(2020) 65 final, February 19. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

European Parliament. 2018. Resolution on autonomous weapon systems. 2018/2752(RSP), September 12.  

Evans, Zachary. 2020. “Google CEO Calls for Government Regulation of Artificial Intelligence.” National Review, January 20. www.nationalreview.com/news/google-ceo-calls-for-government-regulation-of-artificial-intelligence/.

Geiss, Robin. 2015. “The International-Law Dimension of Autonomous Weapons Systems.” International Policy Analysis. Berlin, Germany: Friedrich-Ebert-Stiftung.

Google. n.d. “Perspectives on Issues in AI Governance.” White paper. https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf.

Group of Governmental Experts. 2017. Report of the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems. UN Doc CCW/GGE.1/2017/3. December 22. https://undocs.org/CCW/GGE.1/2017/3.

Hill, Kashmir. 2020. “The Secretive Company That Might End Privacy as We Know It.” The New York Times, January 18. www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

International Committee of the Red Cross. 2019. “Autonomy, artificial intelligence and robotics: Technical aspects of human control.” August. Geneva, Switzerland: International Committee of the Red Cross. www.icrc.org/en/document/autonomy-artificial-intelligence-and-robotics-technical-aspects-human-control.

O’Connell, Mary Ellen. 2014. “Banning Autonomous Killing: The Legal and Ethical Requirement That Humans Make Near-Time Lethal Decisions.” In The American Way of Bombing: Changing Ethical and Legal Norms, from Flying Fortresses to Drones, edited by Matthew Evangelista and Henry Shue, 224–36. Ithaca, NY: Cornell University Press.

Press Association. 2019. “Mark Zuckerberg calls for stronger regulation of internet.” The Guardian, March 30. www.theguardian.com/technology/2019/mar/30/mark-zuckerberg-calls-for-stronger-regulation-of-internet.

Rudgard, Olivia and James Cook. 2019. “Twitter boss calls for social media regulation.” The Telegraph, April 3. www.telegraph.co.uk/technology/2019/04/03/twitter-boss-calls-social-media-regulation/.

Sassòli, Marco. 2014. “Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified.” International Law Studies 90: 308–40.

Schmitt, Michael N. and Jeffrey S. Thurnher. 2013. “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict.” Harvard National Security Journal 4: 231–81.

Taddonio, Patrice. 2019. “How China’s Government Is Using AI on Its Uighur Muslim Population.” PBS.org, November 21. www.pbs.org/wgbh/frontline/article/how-chinas-government-is-using-ai-on-its-uighur-muslim-population/.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Liis Vihul is the founder and chief executive officer of Cyber Law International, a firm that provides international cyber law training and consulting services for governments and international organizations worldwide.