The Legal Void in Which AI Weapons Operate

Speaker: Branka Marijan

November 28, 2022

This video is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from discussions held during a webinar series.

Proponents of autonomous weapons highlight these machines' precision in striking their targets, arguing that this avoids unintended casualties and limits impacts on civilians. But precise tools wielded in imprecise ways can be very harmful. When states consider deploying modern autonomous systems powered by artificial intelligence (AI), they must weigh the legal and ethical concerns alongside the technical specifications of the tool.

In this video, Branka Marijan, senior researcher at Project Ploughshares, discusses the legal concerns raised by automated AI weapons and the role that human judgment plays in determining whether a target is legitimate and whether an attack falls within international rules of engagement.

“At the moment, no one can be held accountable for actions carried out by an autonomous system,” explains Marijan. The international community must set out clear rules of use before these autonomous systems become so ubiquitous that, in the absence of guardrails, their unchecked use causes unnecessary harm and civilian casualties.

One of the challenges is that we often talk about these systems as if they would exist in a vacuum.

We have to recognize that, at the end of the day, it is states that will be deploying these systems for specific military and strategic purposes. And we have to consider that, in the absence of an international agreement, there is very little guidance from existing regulations and norms to ensure that there is human oversight over these systems and that, ultimately, someone can be held accountable. At the moment, no one can be held accountable for actions that are carried out by an autonomous system.

What we see currently is not that we lack the technology to protect civilians. In fact, what we see is a lack of care in the use of different systems. And so, when we talk about the advantages of autonomous systems, there is often a discussion of more precise weaponry. But we have a lot of precise weaponry now that is used imprecisely, and it also doesn't resolve whether an attack was legal in the first place. So, even if you have better technology, there are considerations, such as whether this is a legitimate target, that need to be, I think, really thought through by a human analyst.

We need to be quite cognizant of what the technology can do and what it can't. And where the value of human judgment lies is, I think, a really important consideration, from both a strategic perspective and a humanitarian perspective.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.