Proponents of autonomous weapons highlight the precision with which these machines can strike their targets, avoiding unintended casualties and limiting harm to civilians. But precise tools wielded in imprecise ways can still cause great harm. When states consider deploying modern autonomous systems powered by artificial intelligence (AI), they must weigh the legal and ethical concerns alongside the technical specifications of the tool.
In this video, Branka Marijan, senior researcher at Project Ploughshares, discusses the legal concerns raised by AI-powered autonomous weapons and the role that human judgment plays in determining whether a target is legitimate and whether an attack complies with international rules of engagement.
“At the moment, no one can be held accountable for actions carried out by an autonomous system,” explains Marijan. The international community must set out clear rules of use before these systems become so ubiquitous that, without guardrails in place, their unchecked use causes unnecessary harm and civilian casualties.