
As AI continues its rapid evolution, global enterprises are eager to adopt emerging systems like agentic AI. Kelly Forbes, an AI policy advisor, claims that human oversight and current governance are insufficient. Speaking at Qlik Connect 2025, Forbes underlined the pressing need for international standards and more intelligent protections.
As a member of the Qlik AI Council and other international advisory bodies, she has direct experience with the risks of poorly managed AI systems. She understands how difficult it is to apply AI in real-world situations and is well aware of the significant flaws in current AI governance.
AI Policy Advisor Backs Smart Council-Led Strategies
Forbes explained that while companies now understand the importance of AI, many still lack clarity on how to apply it effectively. The transition from generative to agentic AI presents harder problems precisely because these systems require less human input. This increasing independence necessitates strong frameworks.
She highlighted that Qlik’s AI Council brings together a range of expertise to direct ethical deployment and regulatory alignment. Meanwhile, countries such as Singapore and the UAE are balancing safety and innovation through regulatory checks. According to her, corporate-led councils are essential in creating consistent, useful AI governance that takes real-world difficulties into account.
Is Human Oversight Enough for Agentic AI?
Forbes stressed the need for safeguards to advance in step with the rapid evolution of agentic AI. She pointed out that companies are now incorporating procedures to guarantee the accountability of AI systems. However, having informed oversight is more important than simply having oversight. Even human checks can fail in high-risk situations if the people performing them are not properly trained.
The AI policy advisor warned that simply keeping a “human in the loop” is no longer enough. Untrained personnel may have trouble understanding system decision-making, especially in crucial areas. This misalignment is risky, especially as AI grows more sophisticated and self-sufficient. As a result, businesses must train their staff and set up transparent accountability systems.
According to her, international frameworks such as the EU’s AI Act may encourage other jurisdictions to adopt similar rules. Furthermore, Forbes reported that investors are increasingly asking businesses about their AI governance standards: in particular, who advises them and how ethical decisions are made.
AI Policy Advisor Sees Ethics Gap Widening Globally
Adapting policies to new AI realities is one of the most difficult tasks. Forbes has collaborated with ASEAN nations to adapt established AI laws to new technologies. She explained that outdated frameworks cannot handle contemporary problems such as misinformation, copyright disputes, and AI hallucinations.
Some experts believe that artificial general intelligence (AGI) is close. Regardless of whether they are right, she says businesses and governments must prepare for a range of futures. Equal access and worker upskilling are crucial to ensuring AI doesn’t worsen inequality.
According to her, education serves as a link between creativity and accountability. Therefore, the nations leading the way in the ethical application of AI will be those that are actively training their citizens. Furthermore, as the number of automated decisions increases, so must public awareness and policy support.
What Will It Take to Safeguard AI?
The main takeaway from Forbes’ insights is that AI systems are evolving faster than our ability to control them. To safeguard the future, companies and regulators need to collaborate, invest in training, and enhance AI governance practices. As the AI policy advisor stated, “AI is getting smarter. We need to get smarter with it.”