Regulating Autonomy in Weapons Systems

Speaker: Frank Sauer

November 28, 2022

This video is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from a series of webinar discussions.

Computer systems are playing an ever-increasing role in surveying, identifying and categorizing potential targets on the battlefield. Modern militaries are deploying autonomous weapons systems to improve the speed with which they respond to threats. A human operator will set the mission parameters, but then an autonomous system can take over and execute the mission.

Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich, looks at the legal and ethical concerns around the use of autonomous weaponry.

Sauer argues that while artificial intelligence (AI) systems are very fast at categorizing objects, they lack the nuance required to make life-and-death decisions: “Machines don’t understand anything, they’re just good at matching patterns.” When deciding how much authority an autonomous system is given, governments need to consider the requirements of international humanitarian law and ethics, because allowing AI complete, unregulated control could be a runaway nightmare.

The relationship between humans and machines on the battlefield is shifting. Consider, for instance, a drone looking for targets on the battlefield using specific target profiles. Let’s say it’s looking for the silhouette of a tank, and also for the heat signature of that tank. By matching all kinds of incoming sensor data against those profiles in an internal feedback loop, the weapon system itself decides “this is a valid target for me to attack” and then selects and engages the target. That is a loitering munition.
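
To make that target-selection loop concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the SensorReading and TargetProfile classes, their fields, the 0.8 threshold) is invented for illustration and describes no real system; it only shows how “matching sensor data against a target profile” can reduce to a similarity score crossing a threshold.

```python
# Hypothetical illustration only: all names, fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SensorReading:
    silhouette_match: float  # 0..1, similarity to a stored tank silhouette
    heat_signature: float    # 0..1, similarity to an expected thermal profile

@dataclass
class TargetProfile:
    name: str
    threshold: float         # combined score above which the system "decides"

def combined_score(reading: SensorReading) -> float:
    # Naive fusion of two pattern matchers: the machine is not reasoning
    # about context, it is averaging similarity scores.
    return 0.5 * reading.silhouette_match + 0.5 * reading.heat_signature

def select_and_engage(readings, profile: TargetProfile):
    # Autonomous "select and engage": no human checkpoint in this loop.
    for reading in readings:
        if combined_score(reading) >= profile.threshold:
            return reading   # the system commits to this target on its own
    return None

if __name__ == "__main__":
    profile = TargetProfile(name="tank", threshold=0.8)
    readings = [
        SensorReading(silhouette_match=0.90, heat_signature=0.85),
        SensorReading(silhouette_match=0.88, heat_signature=0.80),
    ]
    print(select_and_engage(readings, profile))
```

Nothing in this loop distinguishes a tank from anything else that happens to score above the threshold; the “decision” is a pattern match, which is precisely why the legal and ethical questions below arise.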

If you have a loitering munition that looks for targets and then selects and engages them without human intervention, you can get into all kinds of hot water from a legal perspective, and also from an ethical one: are humans still making the judgments that international law actually requires of them with regard to which targets get attacked, and with which weapons? These are really the questions we’re talking about when we talk about autonomy in weapon systems and the benefits and risks that go along with it.

We should be asking: What is it that we want humans to still be doing on the battlefield in the twenty-first century?

Why do we think humans still have a role in this?

What are the legal frameworks that require us to keep human judgment in the process of conducting war?

Machines don’t understand anything; they’re just good at matching patterns. So maybe we should leave all these very tricky things to humans, because we’re actually very good at them: figuring out what a situation means, adapting to it and working properly with what we find there.

This is what humans, arguably, should still be doing and, hopefully, will be doing for a long while, because machines, in a way, aren’t there yet.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.