The Ethics of Automated Weapons

Speakers: Frank Sauer, Branka Marijan, James Rogers, Bessma Momani, Aaron Shull

November 28, 2022


This video is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from discussions at a webinar series.

Advanced military systems and hardware, such as surveillance tools, analysis platforms and drones, increasingly integrate artificial intelligence (AI) technology to speed up decision-making and response times. Automated systems are also presented as more precise weapons that limit harm to civilians. Yet while humans may be slower than computers at identifying targets and deciding which actions to take, humans remain far better equipped than computers to make life-and-death decisions.

Current applications of automated systems across many aspects of war and conflict have opened a Pandora’s box. Systems operating autonomously with little human intervention raise ethical and legal concerns. Ethicists, international legal experts and international affairs specialists have been sounding the alarm about the potential misuse of this technology and the lack of any regulations governing its use.

The Ethics of Automated Warfare and Artificial Intelligence essay series seeks to understand how the weaponization of AI is operating within the contemporary global governance architecture. The series contributors consider how this technology will continue to advance within the defence and security realm, examine its influence on the current geopolitical landscape and ask what moral and ethical considerations accompany the deployment of autonomous weapons.

FRANK SAUER: Humans make mistakes, but we’re kind of slow. Machines also make mistakes, but when they do, they all make the same mistake, at the same time, at lightning speed.

BRANKA MARIJAN: When we say that, you know, soldiers make mistakes and these systems don’t — I don’t think that’s quite accurate. AI fails differently than humans.

JAMES ROGERS: It’s very hard to define just what is AI, what is automation, and then what is true AI and future autonomous systems.

FRANK SAUER: The thing that weighs heavily, I think, on many people’s minds in capitals around the world, is not necessarily the ethical implications. It really is the acceleration of processes. And we’re in a real bind there because from a military perspective, accelerating the completion of the targeting cycle is actually the key advantage to be gained from autonomous weapons systems.

JAMES ROGERS: They want to be able to act faster than the enemy. They want to be able to take these drones out of the sky or to make better targeting decisions quicker.

BRANKA MARIJAN: We already have a lot of precise weaponry that’s used very imprecisely. So, the introduction of autonomous systems won’t necessarily change the behaviour of different militaries.

JAMES ROGERS: So, we need to be really careful about how much we rely on computer systems when we’re making these life-and-death decisions.

BESSMA MOMANI: Sadly, in the case of artificial intelligence and all things emerging technologies in the application of war, no one really has a sense of the rules. I think what this series demonstrates is that it can go very, very wrong if not regulated.

JAMES ROGERS: And it’s here that we’re hoping to introduce some sort of measures around meaningful human control or what others are calling appropriate human control. And the simple line there is that a human should always make the decision about whether or not another human dies.

BRANKA MARIJAN: At the moment, no one can be held accountable for actions that are carried out by an autonomous system. Our laws of war simply were made for humans. They were not made for machines.

FRANK SAUER: We’re clear on what it is that we’re talking about. We’re clear on what regulation, at least in an abstract sense, would look like. The key question to me now is whether we will find the political will to do something about it, either at the international level or, if that’s not possible, then domestically.

AARON SHULL: This affects individuals everywhere. And as a consequence of that, it’s our duty, I think, in a think tank, to be able to tell the story of these technologies, the problems around their governance and to advance potential solutions.

FRANK SAUER: Actually, it’s about us. It’s about us humans, and it’s much less about the machines and what they do or might be able to do in the future. The sooner people understand this, the sooner we can get to smart solutions for how to deal with it.


The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.