In an effort to replicate the human brain, artificial intelligence (AI) has passed through a number of developmental phases, including machine learning, since the 1950s. This raises the question: to what degree should AI be granted autonomy in order to take advantage of its power and precision, and to what degree should it remain subordinate to human supervision?
It could be argued that removing human control would temper the most distasteful elements of warfare and enable conflict to be conducted in an ethically superior manner. Conversely, the dehumanization of conflict can lower the threshold for war, allowing armed conflicts to drag on without end.
Given the uncertainty around AI's impacts, Robert Mazzolin argues that there is an urgent need to clarify the ethical issues involved before technological development outpaces society. International governance bodies are best placed to develop regulatory frameworks, and they must pursue a governance strategy that is both precautionary and anticipatory.