
Geoffrey Hinton warns AI could eventually overpower humans, criticizing tech firms for prioritizing profits over safety. Hinton, often referred to as the “Godfather of AI,” has issued a stark warning that there is a real risk artificial intelligence could eventually take control away from humans. In a candid interview with CBS Saturday Morning, he expressed growing concern over the rapid, largely unchecked development of AI and criticized tech giants for downplaying the dangers.
Hinton’s Deepening Concerns About AI Takeover
Hinton, a Nobel Prize-winning researcher who pioneered neural networks, estimates there is a 10% to 20% chance that an AI takeover could occur in the future.
He explained the emotional gravity of the situation with a vivid metaphor: humanity is like someone raising a “cute tiger cub” — adorable now, but potentially deadly once fully grown. Hinton warned that many people still fail to grasp how serious the risks are, stating, “People haven’t understood what’s coming.”
Despite AI’s potential to revolutionize fields like medicine and education, and even to help address climate change, Hinton said the speed of progress, combined with inadequate regulation, creates major risks.
Criticism of Big Tech and Lack of AI Regulation
Hinton criticized leading AI companies, including his former employer Google, for prioritizing profits over safety. He pointed out that many firms are actively lobbying for even less AI regulation, despite the dangers AI systems could pose if left unchecked.
“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation,” Hinton told CBS News. Although tech leaders like Sundar Pichai, Elon Musk, and Sam Altman have publicly acknowledged AI risks, Hinton suggests their actions often contradict their words.
According to Hinton, AI companies should dedicate about a third of their computational resources to safety research — far more than the small fraction they allocate today. When CBS News asked AI labs how much computing they devote to safety, none provided clear numbers, even though they publicly state that safety is important.
Calls for Stronger Oversight and Safety Efforts
The lack of concrete actions from major AI players reinforces Hinton’s belief that urgent change is needed. Without stronger oversight and more dedicated safety research, the world could face dangerous scenarios where powerful AI systems operate beyond human control.
Hinton’s remarks echo growing concerns about AI regulation voiced by experts and policymakers worldwide. So far, however, many proposed regulations have met stiff resistance from the very companies leading AI development.
As the global AI race accelerates, Hinton’s warning serves as a sobering reminder: the technology that promises so much good also carries real risks that must be addressed with seriousness, transparency, and cooperation.
Conclusion
Geoffrey Hinton’s AI warning is a call to action for governments, companies, and society at large. With a real chance that AI could one day overpower human control, stronger regulation, massive investment in safety research, and global cooperation are more urgent than ever. Ignoring these risks, as Hinton warns, could prove catastrophic in ways we have yet to fully understand.