In an effort to replicate the human brain, artificial intelligence (AI) has undergone a number of developmental phases — including machine learning — since the 1950s. The question this raises is: to what degree should AI be granted autonomy in order to take advantage of its power and precision, and to what degree should it remain subordinate to human supervision?

It could be argued that removing human control could temper the most distasteful elements of warfare and enable conflict to be conducted in an ethically superior manner. Conversely, the dehumanization of conflict could lower the threshold for war, causing armed conflicts to drag on endlessly.

Because of the uncertainty around the impacts of AI, Robert Mazzolin argues that there is an urgent need to clarify the ethical issues involved before technological developments outpace society. International governance bodies are best suited to develop regulatory frameworks, and they must seek a governance strategy that is both precautionary and anticipatory.

Transcript

AI has undergone a number of developmental phases since the 1950s — via expert systems and machine learning — in an effort to replicate the human brain. Neuromorphic computing, whereby semiconductors operate in modes simulating neurons and synapses, combined with developments in quantum computing, presents the potential for human-level cognition, or superintelligence. The question this raises is: to what degree should AI be granted autonomy in order to take advantage of its power and precision, and to what degree should it remain subordinate to human supervision, with humans remaining in the loop?

It could be argued that removing human control could temper the most distasteful elements of warfare and enable conflict to be conducted in an ethically superior manner. Autonomous systems are impervious to fatigue and to emotional and physical stress, which could prevent suboptimal human performance and potentially barbaric behaviour, enable more effective decisions at the strategic level amid the fog of war, and perhaps allow killing to be carried out in a more humane way.

Conversely, the dehumanization of conflict could lower the threshold for war, causing armed conflicts to drag on endlessly as systems operate independently with unforeseen consequences.

Fundamentally, whether machines may be permitted to make decisions impacting life and death is an ethical and moral question. Absent a universally accepted moral framework, there is an urgent need to clarify the ethical issues involved before technological developments outpace society.

International governance bodies developing regulatory frameworks should consider topics such as the unique challenges and risks posed by AI as a dual-use technology; the societal impacts of AI technology; algorithmic transparency; and the effects of AI on democracy. Future research on AI-enabled weapon systems must seek a governance framework that is both precautionary and anticipatory.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.