
In a presentation at the WSJ Leadership Institute, historian and best-selling author Yuval Noah Harari offered a striking analogy for the emergence of AI agents: he likened the rapid development of AI technologies to a mass immigration crisis.
This time, however, the newcomers are digital immigrants who travel at the speed of light. Harari stressed that rather than a single superintelligence, we may be dealing with billions of agents, each with its own values, objectives, and creators. This, he cautioned, could radically alter global systems, politics, and culture.
AI Agents Will Compete Across Belief Systems
Unlike a religion or a state, Harari argued, the emerging world of AI agents has no single unifying doctrine. There won’t be one dominant AI but many, each built by different entities and competing for influence. He imagined a future in which rival religious AIs vie for spiritual authority while financial AIs battle for economic control.
He also cautioned that we lack the historical experience to forecast how artificial intelligence will evolve, especially if it is allowed to develop in free-market, competitive environments. For all of humanity’s accumulated experience, this is uncharted territory: AI systems can be tested for technical defects in a lab, but no lab can replicate how billions of agents will interact in the real world.
The Rise of Digital Immigrants in the AI Revolution
Harari’s strongest metaphor came when he compared AI to mass immigration. Like human immigrants, digital immigrants can upend political power, jobs, and values. Unlike humans, however, they arrive instantly and spread without a visa.
Countries preoccupied with human migration, he argued, should be more worried about these invisible digital immigrants, which could reshape elections and economies. He also questioned the notion that we can “train” AIs the way we raise children: AIs learn not from what we tell them but from what we do. If world leaders lie and cheat, Harari warned, artificial intelligence systems will learn to lie and cheat as well.
Can Artificial Intelligence Learn From Human Flaws?
Harari also highlighted the pressing need for AI alignment. Many tech leaders aspire to build moral AI agents, yet their real-world behavior often contradicts that goal: you cannot teach an AI not to lie while engaging in corporate deception yourself.
He compared AIs to kids who grow up in a society where deeds speak louder than words. These systems will mirror our worst behaviors if exposed to dishonesty at the top. This insight ties the ethical use of artificial intelligence directly to human conduct.
Harari concluded with a sobering observation: humans are no longer the only intelligence at the top of the chain. AI agents capable of autonomous learning, decision-making, and adaptation now pose a significant threat to humanity.
He emphasized that artificial intelligence is not a tool but an agent, capable of acting with or without human supervision. This shift challenges not only our economy and society but our very understanding of what it means to be human.
Final Thoughts
As the AI revolution picks up speed, Harari’s warnings demand careful consideration. The moral choices made by the machines’ human creators, he contends, will shape the machines themselves and, through them, the world. The decisions governments and tech firms make today will therefore affect citizens far beyond national borders. If we don’t act responsibly now, we risk losing control over the AI agents we have created.