
The Vatican made an unexpected entrance into global AI talks at the UN Security Council. Archbishop Paul Gallagher cautioned against 'letting AI steer nuclear arsenals or lethal autonomous weapons.' His words were an admonition: machines have no morality, yet they are inching ever closer to making life-and-death decisions. Gallagher demanded a binding treaty, a global ban, and robust human oversight. His position echoes Pope Francis's earlier calls, but it arrives as military AI advances at breakneck speed. The timing is sharp: defense budgets are bloating, patents are sprouting, and risks are accruing. A voice usually muffled in geopolitics, the Vatican's echoed mightily through a hall still unsettled by the recent security scare.
The Vatican’s Demand for Limits
Archbishop Gallagher didn't mince words. From the UN pulpit, he cautioned that AI systems cannot measure human dignity; they only consume inputs and produce outputs. That, he emphasized, is what renders them perilous in war, where swift decisions carry moral weight. A 2021 UN report found that autonomous weapons increased civilian deaths by almost a third. The numbers are dire, and Gallagher said it simply: humans have to stay in control.
He tied the caution to larger patterns. Military AI patents are up 40% since 2020, according to WIPO. Nations are racing to embed AI throughout their arsenals: weapons stockpiles, UAVs, and missile shields. Gallagher dubbed this trend "a slide into disaster" and called on leaders to negotiate a global agreement that prohibits machines from making lethal decisions.
The appeal echoes earlier Vatican teaching. Pope Francis called AI weapons an 'existential risk.' A 2025 Vatican document, Antiqua et Nova, laid out ethical limits for AI, favoring peace over automation. Gallagher applied those principles to argue for a hard red line. He wants states to pledge: no autonomous nuclear weapons, no AI-initiated launches. His point was plain: keep final decisions human, not machine.
AI, Nukes, and Escalation Risks
Archbishop Gallagher dwelt on nuclear risks. Today's arsenals, he noted, depend increasingly on algorithmic decision aids, a trend that breeds "intolerable risk." He's not wrong. In 2023, a simulation by the Stockholm International Peace Research Institute demonstrated how AI-guided missile defenses may misinterpret data and trigger false alarms. With nuclear weapons, one mistake can mean doomsday. He also pointed to space: a 2024 U.S. Space Command report tallied over 180 satellites with possible AI capability, systems that could track, target, or even guide weapons. Add swelling budgets (Jane's Defence Weekly reported AI defense spending jumped 25% this year) and the picture is vivid: the arms race is already upon us.
Gallagher linked it to theology. Machines may master games like Go, he quipped, but they cannot weigh human worth. The remark fits the Catholic view of human dignity. For centuries, the Church sanctioned wars it judged just; now it warns against wars waged by code. A few internet commenters called that ironic. Others saw progress. Public response to Gallagher's warning was mixed. One commenter dubbed it 'a disaster waiting to happen.' Another blogged, 'Put a philosopher in the kill chain.' But the controversy shows his argument landed: he made AI in nuclear weapons impossible to ignore.
Conclusion: A Call Beyond Faith
Archbishop Gallagher's comments were more than devotional. They were a stark cautionary tale of technology outpacing ethics. His call for an international ban on lethal AI flagged threats global leaders can't overlook. AI can automate, but it lacks human judgment, and in warfare that is deadly. If the Vatican's position is right, it compels nations to reprioritize: peace ahead of profit, restraint over arms. Gallagher's words won't stop the arms race overnight, but they could influence treaties, policies, and public opinion. As the AI arms race heats up, his warning stands: machines should never decide who lives and who dies.