
A majority of global enterprises are deploying AI agents to transform workflows, but many are doing so without proper security safeguards, according to a new report from SailPoint. The findings raise red flags about growing vulnerabilities tied to agentic AI, which 72% of surveyed technology professionals consider a greater risk than traditional machine identities.
Widespread Adoption, Lagging Governance
The underlying survey, conducted by Dimensional Research, found that 82% of organizations across Asia-Pacific, including Singapore, India, Japan, and Australia, are already using AI agents in their operations. However, only 44% have formal policies in place to secure these agents. Despite this gap, nearly all respondents (98%) plan to expand their use of AI agents over the next year, and 92% agree that governing these agents is essential to enterprise security.
Understanding Agentic AI and Its Complexities
Agentic AI refers to autonomous systems that can sense their environment, make decisions, and act to achieve objectives. Such agents are frequently granted elevated access to sensitive information and systems, can spawn sub-agents, and often rely on multiple machine identities. These layers of abstraction create additional risk, especially when the agents' complexity is not adequately managed.
The Hidden Dangers of AI Agent Behavior
Respondents identified several risks associated with AI agents that go beyond the usual machine identity threat: unauthorized access to privileged data (60%), unintended actions (58%), distribution of sensitive data (57%), inaccurate decisions (55%), and exposure to inappropriate content (54%). These risks are all the more concerning because agents operate with a high degree of autonomy and remain largely invisible to users in most enterprise environments.
Incidents Highlight Urgent Need for Controls
The report also uncovered real-world incidents underscoring the dangers of under-governed AI agents. About 23% of organizations reported cases in which AI agents were tricked into revealing access credentials. Additionally, 80% experienced unintended agent behaviors, including unauthorized system access (39%), sharing of sensitive or inappropriate information (33%), and downloading of confidential materials (32%). These findings highlight the urgent need for robust governance frameworks to manage and secure AI agents.
The Call for Stronger Governance
The report concludes that without strong governance frameworks, AI agents could expose organizations to regulatory violations, data leaks, and AI-powered cyberattacks. Despite growing concerns, a large portion of enterprises lack even basic policies to control what data AI agents can access or share.
As agentic AI adoption grows, identity and access management (IAM) platforms will need to evolve quickly to manage this new class of machine behavior.
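To make the point concrete, here is a minimal, deliberately simplified sketch in Python of what treating an AI agent as a first-class, deny-by-default machine identity might look like. Every name and structure below is hypothetical, invented for illustration; it represents no particular IAM product or API.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    # Hypothetical record for an AI agent registered as a machine identity.
    agent_id: str
    owner: str                              # human accountable for the agent
    allowed_scopes: set = field(default_factory=set)

def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    # Deny by default: the agent may act only within explicitly granted scopes.
    return requested_scope in agent.allowed_scopes

# Example: an agent granted read access to CRM data and nothing else.
agent = AgentIdentity("invoice-bot-01", owner="finance-ops",
                      allowed_scopes={"crm:read"})
print(authorize(agent, "crm:read"))   # True
print(authorize(agent, "hr:read"))    # False: blocked, and loggable for audit

The design choice illustrated here, explicit scopes tied to a named human owner, speaks directly to two of the survey's headline risks: unauthorized access to privileged data and the poor visibility of agents in enterprise environments.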
Conclusion
SailPoint’s findings serve as a warning to organizations eager to deploy AI agents quickly without understanding the security ramifications of doing so. As enterprises continue to ramp up AI deployment, building a secure, governed, and transparent AI ecosystem will become increasingly necessary to protect sensitive systems from threats that stem from both internal errors and external malice.