Artificial Intelligence and Keeping Humans “in the Loop”

November 23, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

Artificial intelligence (AI) technology has evolved through a number of developmental phases, from its beginnings in the 1950s to modern machine learning, expert systems and “neural networks” that mimic the structure of biological brains. AI now exceeds human performance in many activities once held to be too complex for any machine to master, such as the board game Go and televised quiz shows. Nonetheless, human intellect still outperforms AI on many simple tasks, given AI’s present inability to recognize more than schematic patterns in images and data. As AI evolves, the pivotal question will be to what degree AI systems should be granted autonomy, to take advantage of this power and precision, or remain subordinate to human scrutiny and supervision, to guard against unexpected failure. That is to say, as we anticipate technological advances in AI, to what degree must humans remain “in the loop”?

Computing is arriving at a critical juncture in its development. The traditional approaches relying on CMOS (complementary metal oxide semiconductor) technology, used in the manufacture of most of today’s computer chips, and the pioneering architecture of John von Neumann are nearing their fundamental limits, and the speed of progress in computing power now seems to be falling short of the exponential improvement Moore’s law would predict (Waldrop 2016). Further developments in the field of neuromorphic computing, in which semiconductors can imitate the structures of biological neurons and synapses, along with the advent of quantum computing, present a vision of human-level machine cognition serving as an intellectual partner to help solve some of the most significant technical, medical and scientific challenges confronting humankind.

Although AI researchers have had a checkered record in predicting the pace of technological progress, extrapolations of current trends suggest that AI with human-level cognition (artificial general intelligence) or above (artificial superintelligence) could be a relatively near-term prospect. Some experts predict an explosion in AI capabilities by 2045 (Baum, Goertzel and Goertzel 2011; Sandberg and Bostrom 2011), providing a massive supplement to the human brain, thereby dramatically increasing the general efficiency of human society. Such technology could grant a decisive strategic advantage in political, economic and military domains, and thus warrants the focused efforts of the world’s leading nations.

As AI is now being deployed in comprehensive and world-changing ways, a major challenge will be to make the processes and outputs of complex AI systems comprehensible to humans. This entails transparency about input data, algorithms and results, conveyed clearly and in forms that are easy to interpret. Such transparency is a precondition for the acceptance of AI systems, particularly in mission-critical applications where lives are at stake. A lack of user trust in AI decisions, or of understanding of how the technology functions, will raise a host of legal, ethical and economic questions. The consequences of delegating human decisions to AI systems vary widely: translation errors produced by automated systems such as Google Translate will likely have no serious impact on human life and survival, whereas AI used in autonomous vehicles or weapon systems must make life-and-death decisions in real time. While it may be inconsequential to allow AI performing mundane tasks to run without a human’s finger hovering over the Off button, the use of AI technology to assist human cognition in more impactful decision making will likely require robust policies for retaining effective human control.

Notwithstanding the current developmental challenges, there are, in principle, few limits to the possible applications of AI, and this breadth raises ethical considerations. Emerging efforts focus on the development of AI technologies that can perceive, learn, plan, decide and act in real time under uncertainty. Some scholars predict an “intelligence explosion” beginning at the point when AI becomes more competent than humans at the very act of designing AI systems, setting AI development on an exponentially accelerating trajectory. This may lead to a “superintelligence” that transcends the bounds of human thought, feeling and action. Such a superintelligence could emancipate itself from human intelligence altogether and arrive at solutions different from those humans would reach, drawing on greater data, faster processing and, theoretically, more objective evaluation. The relative merit of such solutions can only be judged on the basis of values, raising the question of what canonical basis defines what is “right,” who decides it, and whether that decision rests with a machine. These questions are particularly pressing in real-time conflict situations, where human and machine values may be incongruent and where the competition for advantage in speed and accuracy could mean that humans are no longer in charge, with those who refuse to delegate ultimate authority being outcompeted by those who do.

Merits of Humans “in the Loop” and “out of the Loop”

Given the prediction that future AI technology will be able to match or exceed human cognition across a wide range of tasks, the crucial question concerns the degree of autonomy that is most desirable. While AI often has the edge on humans in speed, efficiency and accuracy, its inability to think contextually and its tendency to fail catastrophically when presented with novel situations make many reluctant to allow the technology to operate free of human oversight. As such, a common refrain is that a human must be kept in the loop to supervise AI in important roles: either the AI must obtain a human supervisor’s approval for its chosen course of action, or a human must monitor the AI’s actions with the power to intervene should something go wrong.
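
The practical difference between these two modes of supervision, prior approval versus monitoring with a veto, can be made concrete with a small sketch. The Python fragment below is purely illustrative: the oversight modes, the execute_action function and the risk scores are hypothetical placeholders and do not describe any fielded system.

```python
from enum import Enum, auto


class Oversight(Enum):
    IN_THE_LOOP = auto()   # human must approve before the system acts
    ON_THE_LOOP = auto()   # system acts; human monitors and may veto
    OUT_OF_LOOP = auto()   # fully autonomous, no human checkpoint


def execute_action(action, mode, human_approves, human_vetoes):
    """Run a proposed action under a given oversight mode.

    `human_approves` and `human_vetoes` stand in for a real operator
    interface; here they are simple callables returning True or False.
    """
    if mode is Oversight.IN_THE_LOOP:
        # Nothing happens until the operator explicitly approves.
        if not human_approves(action):
            return f"withheld: {action['name']}"
        return f"executed after approval: {action['name']}"

    if mode is Oversight.ON_THE_LOOP:
        # The system proceeds on its own, but the operator can abort.
        if human_vetoes(action):
            return f"aborted by operator: {action['name']}"
        return f"executed under monitoring: {action['name']}"

    # OUT_OF_LOOP: no human checkpoint at all.
    return f"executed autonomously: {action['name']}"


# Example: an operator who approves only low-risk actions.
approve = lambda a: a["risk"] < 0.5
veto = lambda a: a["risk"] > 0.9

print(execute_action({"name": "reroute", "risk": 0.2},
                     Oversight.IN_THE_LOOP, approve, veto))
```

The pre-approval mode buys control at the cost of latency, since nothing happens until the operator responds, a trade-off that becomes central in the strategic competition discussed below.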

It could be argued that removing human control would allow AI-enabled weaponry to temper the most distasteful elements of warfare and enable conflict to be conducted in an ethically superior manner. The intense stress, fatigue and emotional impulses endured by humans engaged in combat lead to suboptimal decision making, frequently producing unnecessary collateral damage or the unintended initiation of hostilities. The emotional and psychological causes behind the accidental loss of life during conflict cast doubt on the prospect of reforming human behaviour, but give reason for optimism that AI-enabled weapons could exceed human moral performance in similar circumstances (Arkin 2009). Consequently, one of the more attractive prospects of AI-enabled autonomous weapons is their imperviousness to such deficiencies, enabling them to make more effective strategic decisions amid the “fog of war,” or to kill in a more humane way (Lin, Bekey and Abney 2008). This argument provides a potentially strong case for the development and use of artificial combatants that cannot be emotionally compromised.

Conversely, the delegation of strategic decisions to AI could lower the threshold for the onset of war, as machines are not subject to the human mind’s natural risk aversion. It could also cause armed conflicts to be prolonged indefinitely, as machines do not tire or experience duress during extended periods of chaos and strife. Taking humans out of the loop and allowing autonomous weapon systems to operate fully independently also complicates ethical and legal questions of liability and moral responsibility, such as the prosecution of war crimes. Further, it is plausible that terrorist organizations could adopt these technologies, possibly necessitating that lethal AI systems be deployed for peacetime policing activities as well.

Finally, whether humans should be kept in the loop will depend on how adept AI becomes at the crucial tasks of discriminating between different data sets in order to “self-learn” properly, and of detecting attempts at manipulation. At present, “data poisoning” and adversarial examples represent ways for malicious actors to exploit AI’s inability to think contextually (Goodfellow et al. 2017). So long as this proves difficult for AI to overcome on its own, keeping a human overseer in the loop may be a necessary safeguard against such hostile actions.
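
To make the notion of an adversarial example concrete, the sketch below applies the fast gradient sign method, one of the attack techniques discussed in the Goodfellow et al. (2017) post, to a toy, untrained PyTorch classifier. The model, input, label and the deliberately exaggerated epsilon value are placeholders chosen for brevity; against a real trained image classifier, a perturbation far too small for a human to notice can be enough to change the predicted label.

```python
import torch
import torch.nn as nn

# A toy, untrained classifier standing in for a real trained model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # a "clean" input
y = torch.tensor([0])                        # its supposed true label

# Fast gradient sign method: compute the loss gradient with respect to
# the input, then nudge the input in the direction that increases the
# loss the most, by a small step epsilon.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.25  # exaggerated here; real attacks use imperceptible steps
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```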

The Strategic Advantage of AI and Its Implications for Humans in the Loop

While different nations have a common interest in striking the right balance between autonomy and human supervision of AI, the re-emergence of great power competition in an already strained multilateral environment will likely hinder international cooperation on this front.

The international race to develop advanced AI capabilities demonstrates the recognition by world leaders of the transformative potential of AI as a critical component of national security. The technology has the potential to change the international balance of power and to shape the course of unfolding geopolitical competition between the United States and China (and, to a lesser extent, Russia). To that end, each of these countries has implemented national initiatives that recognize the transformative effect that AI technology will have upon its security and strategic calculus. These states will be focused on maintaining information superiority, acquiring vast volumes of data to feed machine-learning algorithms. China’s centralized planning, socialist market economy and the vast reservoir of data produced by its large population could give the country an advantage over competitors. Chinese policy has recently pushed for greater “civil-military fusion,” seeking ways of adapting commercially developed technologies to the military sphere. President Xi Jinping has stated that AI, big data, cloud storage, cyberspace and quantum communications were among the “liveliest and most promising areas for civil-military fusion” (Chin 2018). The United States released its National Artificial Intelligence Research and Development Strategic Plan in 2016, and Russia reportedly harbours ambitions to make 30 percent of its force structure robotic by 2025 (National Science and Technology Council 2016; Eshel 2015).

The competing pursuit of AI technology by great and rising powers, as well as by non-state entities, fuels strategic competition, distrust and global instability. As societal dependence on the Internet of Things deepens, the threats posed by AI-enabled cyberattacks will grow commensurately in both the digital and physical domains, expanding the scope and scale of future attacks. The many unexplainable elements of AI systems will compound these risks, further complicating security considerations in an uncertain and complex strategic landscape.

The growing intensity of this strategic competition may incentivize incautious policies toward human control of AI systems in military contexts. Speed is a crucial element of military effectiveness, and the ability of one actor to gather information, decide upon a course of action and execute its plans faster than an adversary has often proven key to victory. One of the most powerful advantages of AI systems is their ability to perform a given task much faster than a human, but this advantage may be undermined by efforts to keep humans in the loop: an autonomous weapon system that must prompt a human supervisor for approval before opening fire will be at a disadvantage against one that operates fully autonomously. At the pace at which AI systems are able to operate, the time lost on human decision making may prove the difference between victory and defeat. The aggressive Chinese and Russian pursuit of military-use AI, together with their comparatively low moral, legal and ethical thresholds for the use of lethal autonomous weapons, may prompt the United States to shift from its current pledge to keep humans in the loop, which would intensify the emerging arms race in AI and adversely affect international security.

Ethical Issues and Meaningful Human Control

As AI systems are charged with making decisions with life-and-death consequences, be it in combat settings, medical facilities or simply on public roads, we are faced with the unpalatable prospect of dividing human lives into more and less valuable groups. Predictably, many are unsettled by the thought of a so-called “death algorithm,” which takes this final decision independently. On what basis should an AI system determine which patient receives care when resources are stretched thin? What level of confidence must an AI weapon system have that a target is a combatant rather than a civilian before engaging?

Several questions arise, including the following:

  • How much autonomy do societal consumers and decision makers wish to grant to AI technologies?
  • What goals and purposes will guide the establishment of ethical limits on AI’s ability to make decisions that may impinge upon a target’s fundamental rights and, ultimately, end that target’s life?
  • More fundamentally, what moral framework does the decision maker utilize to decide?

Currently, there is no universally agreed-upon moral framework; divine command theory, utilitarianism and deontology represent competing approaches. These judgments carry an element of subjectivity that is difficult, if not impossible, for current AI systems to accommodate. International governance bodies should therefore take this issue seriously in the course of developing regulatory frameworks.

Governance Policy Development

Further scholarly work should be devoted to the unique challenges and risks posed by the need to exercise effective human oversight of increasingly complex AI systems. Analysis of the societal impact of AI technology should include such topics as algorithmic transparency and the effects of AI on democracy. The prioritization of, and trade-offs among, resource demands, accuracy, robustness and defence against attacks are other important considerations. Further, researchers need to consider potential mitigating measures, such as patching AI systems to address software deficiencies as they arise in actual operation, and the application of “exit ramps” and “firebreaks”: programmatic decision points in the development of such systems at which activity can be stopped, or its direction, scope and scale amended, to align with socially accepted standards.
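
Read programmatically, an “exit ramp” or “firebreak” amounts to a checkpoint in a development or deployment pipeline at which rollout halts and decision authority reverts to humans. The sketch below is one possible interpretation rather than an established implementation; the metric names and thresholds are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Firebreak:
    """A checkpoint that halts or narrows deployment if a metric
    drifts outside its socially or legally accepted bounds."""
    name: str
    metric: str
    limit: float


# Hypothetical firebreaks for a system under staged rollout.
FIREBREAKS = [
    Firebreak("misidentification", "false_positive_rate", 0.01),
    Firebreak("operator override", "override_rate", 0.05),
]


def evaluate_stage(metrics: dict) -> str:
    """Return 'proceed' only if every firebreak holds; otherwise
    take the exit ramp and hand the decision back to humans."""
    for fb in FIREBREAKS:
        if metrics.get(fb.metric, float("inf")) > fb.limit:
            return f"exit ramp: halt rollout and review '{fb.name}'"
    return "proceed to next stage"


print(evaluate_stage({"false_positive_rate": 0.02, "override_rate": 0.01}))
```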

China’s astoundingly rapid progress in applying AI to an array of military uses demands close attention and scrutiny. As part of this critical examination, future research on AI-enabled weapon systems must account for the implicit values that are always embedded in the design of technologies and seek a governance framework that is both precautionary and anticipatory. International governance bodies must understand the current limits of the technology and become cognizant of how AI-enabled weapon systems are being developed regardless of societal concerns about the nature and degree of human control. National governments, for their part, need to exercise caution when drafting laws to govern the development and use of AI technologies, given the uncertainties about how such laws will affect society. Moreover, governments must acknowledge the competitive pressure to remove human oversight of AI in military and security settings. It would be advisable to consider how the prevailing knowledge surrounding effective arms-control agreements can be adapted to suit the particular features of AI technology.

The concept of meaningful human control provides a helpful framework for discussing the employment, and ultimate weaponization, of increasingly autonomous AI technologies. It shifts the focus from speculation about technological development and future capabilities toward the development and use of emerging technologies in ways that conform to established societal norms of responsibility, accountability, legality and humanitarian principles.

Finally, the AI science and engineering communities, represented through their professional societies, need to be engaged by governments and must articulate a position in the same manner as scientists did on nuclear weapons, chemical agents and the use of disease agents in warfare. Active debate and position papers should be solicited as part of scientific societies’ conferences and proceedings.

Works Cited

Arkin, Ronald C. 2009. “Ethical Robots in Warfare.” IEEE Technology and Society Magazine 28 (1): 30–33. https://ieeexplore.ieee.org/document/4799405.

Baum, Seth D., Ben Goertzel and Ted G. Goertzel. 2011. “How Long Until Human-Level AI? Results from an Expert Assessment.” Technological Forecasting & Social Change 78 (1): 185–95.

Chin, Josh. 2018. “China Looks to Close Technology Gap With U.S.” The Wall Street Journal, April 21. www.wsj.com/articles/china-looks-to-close-technology-gap-with-u-s-1524316953.

Eshel, Tamir. 2015. “Russian Military to Test Combat Robots in 2016.” Defense Update, December 31. http://defense-update.com/20151231_russian-combat-robots.html.

Goodfellow, Ian, Nicolas Papernot, Sandy Huang, Rocky Duan, Pieter Abbeel and Jack Clark. 2017. “Attacking Machine Learning with Adversarial Examples.” OpenAI (blog), February 24. https://openai.com/blog/adversarial-example-research/.

Lin, Patrick, George A. Bekey and Keith Abney. 2008. Autonomous Military Robotics: Risks, Ethics, and Design. Investigative report, version 1.0.9, December 20. Washington, DC: US Department of Navy, Office of Naval Research. https://apps.dtic.mil/dtic/tr/fulltext/u2/a534697.pdf.

National Science and Technology Council. 2016. The National Artificial Intelligence Research and Development Strategic Plan. Washington, DC: Office of Science and Technology Policy, US Government. www.nitrd.gov/pubs/national_ai_rd_strategic_plan.pdf.

Sandberg, Anders and Nick Bostrom. 2011. “Machine Intelligence Survey.” Technical Report #2011-1. Oxford, UK: Future of Humanity Institute, Oxford University. www.fhi.ox.ac.uk/reports/2011-1.pdf.

Waldrop, M. Mitchell. 2016. “The chips are down for Moore’s law.” Nature, February 9. www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

BGen (Retired) Robert Mazzolin is a CIGI senior fellow and serves as the chief technology strategist at RHEA Group.
