
AI poses a significant threat to international security, and prominent investors, among them Paul Tudor Jones, are warning of potentially devastating consequences. At a 2025 tech conference, attendees estimated a 10 percent probability that AI could cause the death of half of humanity within the next 20 years. These risks stem from the rapid pace of AI development, the absence of regulation, and potential abuses such as biohacking. While few dispute that AI can be beneficial in healthcare and education, the rate of advancement has exceeded policymakers’ expectations. Regulation remains fragmented, and there is no international regulatory framework.
Exponential Growth, Existential Threats
Paul Tudor Jones described today’s AI landscape as one of accelerating capability and deepening risk. He noted that AI models are improving by 25 to 500 percent every few quarters, a pace he termed a “vertical lift.” This acceleration is driven by fierce competition among firms and by geopolitical tensions that make it difficult to unite around a shared safety architecture. As each nation races to dominate AI, few are willing to pause development in favor of risk mitigation.
At the heart of Jones’s concern is the accessibility AI creates, not just for innovation but for potential destruction. AI can lower the barriers to sophisticated capabilities, such as engineering biological agents or building high-end cyber tools, putting them within reach of far more actors. The more powerful these systems become, the greater the risk that bad actors, whether states or non-state groups, will exploit them.
Perhaps the most chilling moment of the tech conference Jones attended was the shared belief that only some form of mass-casualty event would force the world to take the threat seriously and organize a response. Without proactive governance, experts fear we may remain one failure away from irreversible consequences. The path forward, Jones warned, is narrowing fast and demands immediate, coordinated action.
The Gap Between Risk and Regulation
Despite these warnings, there is no global agreement or enforcement mechanism governing the development of high-risk AI applications. The EU has adopted the AI Act, a regulatory framework built on risk-based compliance tiers, but its effect beyond Europe is limited. Even in the U.S., where executive action and public hearings have taken place, precious little binding federal legislation has emerged. Meanwhile, the leading AI companies continue to release ever more powerful models, treating open access as a competitive advantage.
This unregulated landscape is especially dangerous given AI’s misuse potential. Jones highlighted biohacking as one such threat, where AI could be used to generate harmful biological sequences or weaponize lab procedures. He emphasized that much of the current AI development is being driven by entities with no obligation to prioritize public safety. Without enforceable checks, these systems could become tools of mass disruption in the wrong hands.
Compounding the issue is the lack of consensus on what constitutes a “safe” AI. Even within governments and labs, definitions differ, leading to fragmented efforts and a dangerous gap between capability and control. Jones urged both public and private sectors to collaborate, not only on regulation but also on research into containment, auditing, and model alignment. Failing to do so, he warned, risks allowing innovation to outpace our ability to survive it.
The Road Ahead for AI Governance
Paul Tudor Jones’s anxieties are a reality check: AI is not merely a technological revolution but a civilizational tipping point. In a world of increasingly capable and autonomous systems, our existing governance tools and ethical frameworks may not be sufficient. Although AI promises innovative solutions in fields such as healthcare and education, it also brings a new level of destructive power. The real question is whether governments, businesses, and society will move to contain these threats before a crisis dictates their course of action. This isn’t just a debate for policymakers or scientists; it’s one for everyone. What’s your take: Is the world doing enough to make AI safe?