
Sweden’s Prime Minister Ulf Kristersson raised more than a few eyebrows recently when he admitted to using AI tools like ChatGPT and Le Chat to get second opinions on political questions. Not for policy documents or sensitive reports, he made that clear, but for things like comparing how other countries have approached a problem, or checking whether his thinking might be missing something. Using AI like a sounding board.
“We Didn’t Vote for ChatGPT”
Aftonbladet ran an editorial saying Sweden’s PM had fallen into what they called the “oligarchs’ AI psychosis.” They warned about the security risks and how political judgment might end up entangled with American tech companies’ data policies. Virginia Dignum, a well-known professor of AI ethics at Umeå University, warned that AI raises serious questions about democracy. She said, “we didn’t vote for ChatGPT,” to point out that these systems are created by a few people, but affect everyone. Her worry isn’t just about privacy, but about who gets to shape the future.
Other researchers raised similar concerns. Simone Fischer-Hübner at Karlstad University stressed how easy it is to forget that even casual queries can expose patterns or insights that shouldn’t be in commercial hands. The Prime Minister’s team tried to dial things down: his spokesperson described the AI usage as more of a ballpark tool than anything decisive. But the clarification did little to quiet the broader debate.
Political Will and the AI Vacuum
What makes this moment more than just a flashpoint is the timing. Sweden is in the middle of redefining its national AI strategy. Late last year, the AI Commission, led by Carl-Henric Svanberg, delivered a 75-proposal roadmap ahead of schedule. The commission called for about €216 million in annual investment over five years. Their report pointed out that Sweden had slipped in the Global AI Index, falling from 17th to 25th, lagging behind its Nordic neighbors. Lack of political leadership was directly named. That alone puts Kristersson’s AI usage in a different light. Depending on how you look at it, he’s either trying to embody that leadership or unintentionally illustrating the governance gaps the Commission warned about.
The Commission’s proposals are fairly ambitious: central AI coordination at the government level, AI education programs for the public, serious research funding, and industry-focused innovation support. They feel especially urgent now that AI is no longer something that happens in research labs; it’s in the room where decisions get made.
Global Governance of AI
This isn’t just a Swedish issue. Around the world, government officials are using AI, but mostly behind closed doors. Kristersson’s openness is rare. But it also reveals just how little is settled. There are no clear rules for how elected officials should use AI, no standard for when that use needs to be disclosed. And there is no consensus on what kind of influence is acceptable.
And it’s all unfolding as new EU AI regulations just came into effect this August. These rules force companies to be more transparent about how their models are trained. They must ensure copyright compliance and tighten safety standards. But they say very little about public sector use. While the legal frameworks are growing, the ethical questions are racing ahead.
There’s a chance this pushes Sweden and maybe others toward clearer policies for AI use in government. Whether it happens quickly or just adds to a pile of good intentions remains to be seen. But this case is already a test of what happens when the newest technology meets very old institutions.
Key points:
- Sweden’s PM admits using AI chatbots for informal political brainstorming
- Critics warn AI tools risk privacy, democracy, and foreign corporate influence
- Ethics experts question algorithms shaping national political decision-making processes
- AI Commission urges €216M yearly investment to revive Sweden’s slipping leadership
- EU’s new AI laws lack clarity on government officials’ public AI use