Drones and Humans in the Loop of Control

Speaker: James Rogers

November 28, 2022

This video is part of The Ethics of Automated Warfare and Artificial Intelligence, an essay series that emerged from a series of webinar discussions.

Modern drones have been around for a few decades. They provide strategic surveillance and air-strike capabilities while keeping human pilots out of harm's way. Early drones were essentially remote-controlled aircraft, flown by pilots at a safe distance from the battlefield, but modern military drones are increasingly operated autonomously by artificial intelligence (AI) systems. Where does the human fit into the decision-making process?

In this video, James Rogers, DIAS Associate Professor in War Studies at the University of Southern Denmark, non-resident senior fellow at Cornell University and associate fellow at the London School of Economics, explores the three types of human involvement: in the loop, on the loop and outside the loop. Right now, humans are in the loop, remotely piloting the aircraft and operating their weapons. On the loop means that the human isn't in direct control at all times but can intervene in and override any decision the machine makes. Lastly, and of most concern, is when the human is outside the loop of control. In the near future, AI-powered drones could execute entire missions without human intervention, only reporting back the result after the attack is complete.

Rogers argues that a human should always be in the loop of control when the fate of another human being is at risk — these decisions should never be left in the hands of a computer.

If you want to understand the history and the future of AI, then you need to look at the history and the future of drone technologies.

Right now, we have humans that are in the loop of control. So, you might have five to seven humans in a team — a pilot, a sensor operator — that will control a drone that carries out lethal strikes somewhere around the world. That drone will gather data, the pilot will be monitoring, and the pilot will make that decision about who lives and who dies. The human is in the loop.

The next step is where you have a human on the loop of control. So, you might have one human that's in charge of seven drones that are flying semi-autonomously or autonomously all around the world. The only time that they come to the pilot's attention is when they pick up a target. That target information will be relayed back to the pilot. The pilot will see it and decide whether or not that person lives or dies. So, the human is on the loop of control there, but most certainly still making that kill decision.

The next step — and I would say this is the near future, though some argue that we're already here — is when you have the human outside the loop of control. So, you might have 100 drones deployed all around the world, hovering above the skies of different countries and different continents. You'll have one pilot, who is monitoring the feeds of all of these drones. And when one of those drones picks up a target, based upon its AI-infused algorithms and its computational capability, it will send that information back to the pilot. But if the pilot is busy, say, in a time of supreme emergency, the drone can make the decision whether or not a human lives or dies. And the first the pilot will know of it is when they get a receipt back from the drone saying that this action was taken. It's only then that the pilot can review whether the person that was killed was the right person or the wrong person.

It’s here, then, that you start to see how AI and robotics can start to take the decision about whether or not humans live or die.

For media inquiries, usage rights or other questions, please contact CIGI.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.