Days after re-entering the White House, Donald Trump tore up Joe Biden’s directives on the US government’s oversight and use of artificial intelligence (AI). Trump has instead ordered America’s AI industry be fully unleashed. Enormous ripple effects on the technology’s development and its impacts will soon follow. And they may include reshaping the future of war.
A fragmenting international order has spurred a new global arms race. Meanwhile, AI-powered weapons, target detection systems and surveillance tools are evolving, as battlefield data from Ukraine, Gaza and elsewhere is prompting rapid iteration. Together, these dynamics have excited military personnel and mobilized venture capitalists. But they are also raising alarm over the possibility of runaway military AI.
Central to such fears — especially over autonomous weapons — is just how much control and judgment human operators may cede to machines. As automation enables fighting at a much higher tempo, combatants and civilians alike could be dehumanized. Taken too far, critics and human rights groups say, life-and-death decisions may someday be made almost entirely through cold algorithmic calculation. Accidents are bound to happen.
This may all be true. Yet Trump’s second term has further underscored his belief in belligerent American exceptionalism and immunity to debates about ethics. What’s more, the president has surrounded himself with tech accelerationists who argue that the United States must contain China at all costs. There is no room for compromise. In this world view, safeguards and multilateral agreements on military AI systems are self-sabotage: they will simply shackle American hard power.
Indeed, Vice President J. D. Vance — more articulate about the administration’s America First agenda than Trump — has said as much. “The AI future is not going to be won by hand-wringing about safety,” he told audiences at the Paris AI summit in February.
This sentiment reflects Donald Trump’s foreign policy instincts — an erratic mix of deal making, grievance and predation. He habitually reduces complex problems to crude financial trade-offs. Added to this are the president’s rejection of norms and alliances and disdain for soft power. In a world on fire, the maximalist pursuit of military AI systems perfectly aligns with these impulses.
Trump returns to office at an opportune time as well. While he and his far-right movement have long opposed military aid for Ukraine, the more than US$66 billion Washington has sent so far has offloaded much of America’s mothballed arsenal. Amid great power hostility, there’s urgency to replace these stockpiles with next-generation weaponry. And doing so will require reinventing a sluggish US defence industrial base.
Such efforts are already well under way. Since Trump’s re-election last November, there’s been a torrent of activity within America’s defence tech sector.
AI Companies Loosen Their Limits
In November, Meta decided to make its open-source Llama model available to US government agencies and national security contractors. Likewise, Anthropic agreed it would work with data analytics firm Palantir and Amazon Web Services to sell its tech to defence customers. In December, legacy arms manufacturer Lockheed Martin unveiled its own AI-focused subsidiary.
Also in December, Anduril — builder of battlespace intelligence software and unmanned systems — forged a pact with Palantir to upgrade the data readiness and processing capability of America’s armed forces. OpenAI then overturned its self-imposed ban on its products being used for military purposes. It reached an agreement with Anduril to use OpenAI’s language models to enhance the ability of Anduril drones to disable aerial threats. At the end of January, OpenAI struck a deal to help manage the US government’s nuclear weapons systems.
Google followed suit in February, dropping its policy against its AI being used to develop weapons and surveillance tools. The Pentagon then announced in March it had signed a deal with San Francisco-based Scale AI to use artificial agents for military planning exercises.
Investment has poured in as well. Since 2021, defence tech start-ups have received more than US$100 billion in venture capital funding. This, despite officials’ reluctance to embrace a radical shift in their buying patterns toward emerging technologies. But that could soon change.
The Trump administration’s so-called Department of Government Efficiency (DOGE), led by Elon Musk, seems intent on automating as many state functions as possible. Trump told Fox News in early February he expects Musk and his team will ultimately find hundreds of billions of dollars of fraud and abuse as they dig through the Pentagon. Whether this actually happens won’t matter. The purported cost-saving mission will still be invoked as the basis for purchasing more unmanned systems. A one-time critic of applying AI for military use, Musk has since called for scrapping the Defense Department’s advanced F-35 fighter jet program while posting videos of Chinese drone swarms. As the head of DOGE, the mega-billionaire has also shuttered an entire department tasked with preventing the government from overspending on new technologies.
In February, Trump’s Secretary of Defense, Pete Hegseth, excluded the Air Force’s new Collaborative Combat Aircraft program from his proposed five-year defence spending cuts. Projects to develop kamikaze attack drones have also been spared. A recent memo from Hegseth instructed his department to acquire software that will maximize lethality. Meanwhile, a senior defence official gave reporters a foreshadowing of the looming changes to Pentagon procurement. “We’re not going to be investing in ‘artificial intelligence’ because I don’t know what that means,” he bluntly told security outlet Defense One. “We’re going to invest in autonomous killer robots.”
The Pentagon’s Replicator program, announced in 2023, aims to deploy thousands of lethal autonomous weapons systems in multiple domains before the end of this year. The US military already maintains a fleet of AI-controlled surface vessels monitoring the Strait of Hormuz, a strategically vital corridor for global energy supplies that borders Iran. The program’s second phase, announced last September, is now specifically focused on developing counter-drone capabilities. Current experiments include testing AI-powered sentry guns and armed robotic dogs.
A Collision Course with China?
Key to all this is the priority of being able to project lethal force while limiting actual military deployments. Pete Hegseth and J. D. Vance represent a new generation of millennial army veterans-turned-policy makers disillusioned by America’s quagmires in Afghanistan and Iraq. They are suspicious of sending US troops abroad in support of global norms and democratic ideals. They are also wary of legal tenets governing the use of force. A major theme of Hegseth’s most recent book, The War on Warriors, is that military lawyers limit soldiers’ effectiveness by insisting their missions adhere to international humanitarian laws.
Hegseth’s mass firing of Pentagon attorneys at the end of February signals the Trump administration will be hands-off when it comes to upholding the rules of war. The White House has already loosened rules around airstrikes after killing a suspected Islamic State ringleader on February 1 by bombarding cave shelters in Somalia’s Golis Mountains region. Indeed, counterterrorism operations have traditionally been where the US military practises its most reckless tactics.
Messages in the recent infamous Signal chat — where classified war plans against Houthi rebels in Yemen were accidentally shared in advance with the editor-in-chief of The Atlantic magazine — claimed a Houthi leader was eliminated by a US airstrike that decimated a residential building. Trump’s National Security Advisor, Michael Waltz, celebrated in the messaging group by sending emojis of a fist bump, an American flag and fire.
History could soon repeat itself. And this time with intelligent weapons. A joint statement issued on February 13 after a visit by Indian Prime Minister Narendra Modi to the White House commits India and the United States to co-developing new defence technologies. Indian defence start-up IDR has reportedly created three variants of nano drones with embedded AI capability intended for anti-insurgency and counterterrorism operations.
Moreover, the Trump administration’s push to expedite military AI systems could be granted bipartisan legitimacy and consent, given lawmakers’ deep anxieties about the rise of China.
Recent AI breakthroughs by Chinese companies DeepSeek and Manus shatter the myth that Beijing’s autocratic governance model hinders tech innovation. That was never the case for China’s military anyway. When it comes to technologists within the People’s Liberation Army (PLA), “the only thing holding them back is performance,” Gregory C. Allen, a researcher at the Center for Strategic and International Studies and a former US defense official, said recently, on the basis of his past conversations with PLA members.
Beijing is already achieving massive gains in automating warfare. These include devising cutting-edge battlefield awareness and counter-intelligence systems and new precision strike capabilities. Autonomous AI software programs are being embedded into existing weaponry. New techniques are successfully merging cyber operations with cognitive warfare campaigns. And all of this is being backed by ever more resources. For example, China announced in March it was raising defence spending by another 7.2 percent this year.
Donald Trump’s scorched earth approach to trade with China could further provoke an escalatory spiral. Yet the president’s newfound expansionist urges are also in lockstep with Beijing and Moscow. Their shared vision is for great strongmen to divide the world into separate spheres of control. American and Chinese military officials met in Shanghai in early April — their first interaction since Trump returned to office. It wouldn’t be surprising to see Trump eventually strike a grand bargain with Chinese President Xi Jinping, convinced the Make America Great Again cause isn’t served by spilling blood and treasure over Taiwan. Trump’s deference to Vladimir Putin over Ukraine is already obvious.
Either way, for the foreseeable future, Washington will tolerate only scant oversight of America’s own development of military AI systems. The United States will also abandon the multilateral fora making fitful progress toward addressing the risks of intelligent warfighting machines and software programs.
Last September, 61 countries at a summit in Seoul endorsed a non-binding framework on responsible military AI use. This included a commitment to retain human control over autonomous weapons systems at all times. On December 2, 166 countries approved a resolution at the UN General Assembly to launch a new forum to expand efforts to legally regulate the use of killer robots. The United States, under the Biden administration at the time, supported both initiatives — even as it also ran hundreds of military AI projects.
Pax Americana Is Over
The world has entered a perilous and violent new era. Renewed great power enmity has caused a breakdown in multilateralism and eroded norms around the use of force. Aspiring regional powers are thus liberated to meddle in forgotten wars to their benefit, often via proxies. The internet and a diffuse global economy also enable non-state actors to organize and acquire weapons or dual-use technologies more easily. None of these dynamics will recede anytime soon. The Pax Americana period is over and liberal democracies must re-embrace the necessity of hard power.
Google wasn’t wrong in its February blog post announcing it would offer its AI for use in defence tech. Companies, governments and organizations guided by the desire to protect core values such as freedom, equality and respect for human rights in a fraying international environment should absolutely collaborate to support national security.
Here’s where the war in Ukraine has been instructive. In March, the country’s military executed the world’s first all-robot assault, targeting Russian bunkers in the Kharkiv region. This shows not only how force quantity still matters in modern conflict, but also how deficits in personnel and munitions can now be partially offset by digitally networked intelligence gathering and expendable machines. And innovation in these areas is almost all happening in the private sector.
The deft adoption of autonomous weapons, machine-learning logistics platforms, and AI-driven cyber defences and attack capabilities by liberal democracies can help deter hostile autocracies in a more unstable world. But this pursuit will never be risk-free. The challenge is to ensure the use of military AI systems by open societies never undermines the values they claim to protect. Parallel to their development should be relentless debate and diplomacy focused on finding tangible ways to mitigate their inherent downsides. These conversations must also include the viewpoints of smaller nations that lack the technology.
“AI in war will illuminate the best and worst expressions of humanity,” reads a recent essay in Foreign Affairs from prominent technologists Eric Schmidt and Craig Mundie. The text was adapted from a book they wrote with the late Henry Kissinger on how intelligent technologies will forever alter combat, strategy and statecraft. “It will serve as the means both to wage war and to end it.” All three men were proponents of liberal democracies embracing military AI systems, while remaining well aware of the accompanying risks; Schmidt, the ex-Google CEO turned defence tech investor, founded an AI drone start-up that’s collaborated with Ukraine’s military. “The bounds of potential destruction,” they warn, “will hinge only on the will, and the restraint, of both human and machine.”
Yet Donald Trump’s second presidency has shown he and his America First true believers see little value in restraint — in any form. Expect no less when it comes to military AI.