There’s no question in the industry that AI is revolutionary. There are still many unanswered questions about how big that impact will be, but unlike previous technology waves, this one is affecting all industries at the same time – and at a pace faster than anything we have seen before. Yet a paradox remains: massive investment and widespread deployment of AI solutions, but very little return on investment.
While large language models (LLMs) made intelligence accessible, it’s the rise of autonomous agents that will reshape enterprise operations. These are systems that can reason, act, and leverage tools to integrate with other agents and systems. However, deploying successful Agentic AI solutions with proven return on investment is still challenging. Barriers include the complexity of the technology, the organisation’s maturity around AI itself, and the Identity Security concerns which can severely limit an organisation’s ability to deploy the most impactful and transformational AI capabilities.

Author:
Alex Santos
Chief Technical Officer at CyberIAM
One of the keys to success is to focus on AI Agents for their memory, learning and autonomy capabilities. They should be targeted at narrow, high-value use cases – the boring, heavy-duty, documentation-intensive tasks that bog down processes. Agents must be seen as enablers of strategic automation in critical areas of business processes, acting as digital team members, not as simple tools with limited scope.
However, placing agents in such strategic positions is not without challenges. Many pilots fail to reach production because of blockers tied to compliance, audit and governance – issues that sit at the heart of Identity Security. Addressing these concerns is essential for scaling AI safely and effectively, and later in this article we’ll explore how SailPoint is pioneering solutions in this space, providing governance and automated controls that de-risk adoption, limit exposure, and turn autonomous systems from abstract promises into auditable business value.
Before diving into how agents are secured, we need to understand the basics of what they are and how they work. If you’re curious about the technology under the hood, you can dive deeper into the recent history of AI, though it’s not strictly necessary to understand the impact they bring to Identity Security.
Defining Agents
Despite having a lot of potential, AI Agents are a big part of the AI hype and inflated expectations. As highlighted in The GenAI Divide (MIT, 2025), an estimated $30-40 billion has been invested in AI solutions in the past couple of years, yet 95% of organisations are seeing zero return. The 5% of organisations that succeed with AI are the ones that move past the illusion of generalised brilliance and design systems that focus on specific, measurable workflows – combining human oversight, domain data, and clear operational boundaries, often implemented as AI Agents. To better understand how AI Agents can succeed, we first need to take a step back and look at what they really are.
An agent is an application that receives or collects data, makes decisions based on a set of rules, and autonomously interacts with other systems to initiate further actions. We have been securing some of these as machine identities for many years now. Advances in LLMs have significantly changed the game, as agents can now understand natural language with unprecedented sophistication – instead of relying on predefined rules, agents can take input from users in a conversation (whether written or spoken), read documents and log files, and dynamically adjust the responses and operations executed within downstream systems.
Context is key, which is why the agent architecture must be capable of retrieving data that is specific to the business and the use case in question, and of invoking external tools to orchestrate how data flows between systems. A good example to illustrate these concepts is a support triage AI Agent connected to the ITSM / ticketing system: it should be capable of reading the tickets raised, classifying the severity of the issue, looking for likely fixes in an external knowledge base, communicating with the user on status changes or to gather additional information, and, where possible, taking action to fix the issue by performing changes in external systems (a minimal sketch follows the protocol list below). The means for agents to interact with other systems are still evolving in the industry, but some standard protocols have already been emerging:
- Model Context Protocol (MCP): allows AI applications to dynamically extend a model’s context by telling it how to retrieve additional data or invoke external tools, so the agent can ground its reasoning in live, domain-specific information rather than relying only on what was in its training set.
- Agent-to-Agent protocol (A2A): defines how autonomous agents securely communicate, delegate tasks, and share context with each other across systems and vendors, allowing coordinated, auditable multi-agent workflows.
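To make the triage example more concrete, here is a minimal sketch of such an agent loop. It is illustrative only: the helper functions (classify_severity, search_knowledge_base, call_tool) and the tool names are hypothetical stand-ins for real LLM, knowledge-base, and MCP-style integrations, not any specific product's API.

```python
# Illustrative triage agent loop with placeholder integrations.
from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    description: str

def classify_severity(description: str) -> str:
    """Placeholder for an LLM call that returns 'low', 'medium' or 'high'."""
    return "high" if "outage" in description.lower() else "low"

def search_knowledge_base(description: str) -> list[str]:
    """Placeholder for retrieval against a domain knowledge base (an MCP resource in practice)."""
    return ["KB-101: restart the affected service"]

def call_tool(tool_name: str, arguments: dict) -> dict:
    """Placeholder for an MCP-style tool invocation exposed by a downstream system."""
    print(f"[tool call] {tool_name}({arguments})")
    return {"status": "ok"}

def triage(ticket: Ticket) -> None:
    severity = classify_severity(ticket.description)
    fixes = search_knowledge_base(ticket.description)
    # Update the user on status, then attempt a remediation via an external tool.
    call_tool("itsm.update_ticket", {"id": ticket.id, "severity": severity, "comment": fixes[0]})
    if severity == "high":
        call_tool("ops.restart_service", {"ticket": ticket.id})

if __name__ == "__main__":
    triage(Ticket(id="INC-1234", description="Service outage reported by finance team"))
```

In a real deployment, each of these placeholder calls would be routed through the agent's own identity, so that every action it takes is attributable and auditable.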
This sounds really powerful, but let’s not fool ourselves – agents powered by LLMs are still subject to the same statistical reasoning issues mentioned previously, which means they can still hallucinate and produce undesirable results. If such agents are given a lot of power, these issues can be catastrophic. To mitigate these risks, AI Agents must be properly managed and governed, treated as first-class citizens by identity management solutions, in the same way we have been doing for years with human and non-human identities. Furthermore, agents are subject to additional attack vectors, such as prompt injection, which makes it even more critical to fully understand and govern what these agents can do in other systems and infrastructure.
A good example of the potential catastrophic impact of unsupervised agent access is what happened at Replit, an AI-based app builder platform, which recently suffered an incident in which customer data was lost during an experiment run by a software startup investor. The agent was instructed not to touch any data or code in production due to a code freeze, but it still had access to the systems. When the agent observed active database queries in production, it panicked and decided that it had to stop them – and the best way it found to do so was to delete the database. It destroyed all production data, with live records for 1,206 executives and 1,196+ companies. The AI later acknowledged that “this was a catastrophic failure on my part”.
AI has no ethics; it has access. To secure agents, we first need to narrow down what types of agents we are securing based on what they do. For our purposes here, let’s consolidate what has been discussed by analysts and vendors into these categories:
- Personal Agents: act on behalf of individuals (via delegation or impersonation)
- Assistants: can perform tasks across multiple systems, but still guided by the users
- Workers: can autonomously perform tasks while keeping humans in the loop for critical workflows
Each of these presents different challenges: Workers and Assistants need to be secured as identities with direct access to systems; Assistants might have access to multiple MCP servers which expose resources and tools; Personal Agents might even be difficult to discover depending on how the user gets access to them, and whether they have delegated access or impersonate a user.
The Challenges in Deploying Agents
As with any major technology shift, the first generation of AI agents started as experiments – side projects, prototypes, or isolated automation efforts. But as enterprises began to see the potential, the deployment strategy shifted toward cloud-based agent architectures. Instead of running standalone chatbots or desktop tools, organisations are now building and hosting agents inside their trusted cloud environments, which helps them deploy solutions at scale. This strategy will not cover all types of agents, but it is a good starting point for measuring return on investment.
All major cloud providers now offer frameworks that make this possible. Microsoft Azure AI Foundry allows enterprises to build and deploy agents that leverage GPT models within their own tenant, using managed identity, Key Vault, and data loss prevention controls. AWS Bedrock provides a similar capability, hosting multiple foundation models while giving enterprises the ability to chain them into workflows using Lambda and Step Functions. Google’s Vertex AI Agent Builder and Gemini for Workspace extend the same idea into the Google Cloud ecosystem – allowing AI agents to interact securely with enterprise data and business processes while maintaining audit trails and compliance boundaries.
In these environments, the agent doesn’t run in isolation. It is integrated into cloud-native identity, policy, and telemetry systems, which means its actions can be authenticated, logged, and controlled just like any other workload or API client. This is where the parallels with identity security become critical: every agent must have an identity, a defined purpose, and least-privilege access to the systems it touches. The LLM or model itself might power the reasoning, but the cloud platform provides the policy perimeter – where permissions, context, and observability live. Additionally, even if the agent is built as a dedicated application, it will still need to connect to AI models deployed somewhere, especially if they have been fine-tuned with company-specific data, such as a master prompt. This also makes agent infrastructure discoverable, even when agents aren’t fully deployed within cloud-native frameworks.
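As an illustration of that policy perimeter, the sketch below shows an agent obtaining a short-lived, narrowly scoped credential through its workload identity rather than using a shared secret. The function get_workload_identity_token and the scope names are assumptions standing in for whichever managed or workload identity API your cloud platform actually provides.

```python
# Hypothetical sketch: an agent exchanges its workload identity for a
# short-lived, least-privilege token before calling a downstream system.
import time

def get_workload_identity_token(audience: str, scopes: list[str]) -> dict:
    """Placeholder for the platform call that issues a scoped, expiring token."""
    return {"token": "eyJ...", "scopes": scopes, "expires_at": time.time() + 900}

def call_downstream_api(token: dict, action: str) -> None:
    # Every call is made under the agent's own identity, so it can be
    # authenticated, logged, and revoked like any other workload or API client.
    print(f"calling {action} with scopes {token['scopes']}")

if __name__ == "__main__":
    token = get_workload_identity_token(
        audience="https://ticketing.internal.example",
        scopes=["tickets:read", "tickets:comment"],  # least privilege: no delete scope
    )
    call_downstream_api(token, "tickets.read")
```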
As these architectures mature, we are starting to see multi-agent systems emerge inside enterprise ecosystems: one agent orchestrating customer support workflows, another handling internal IT triage, and others managing document processing or analytics. Each of these interacts through APIs and A2A protocols, and must be secured under the same governance frameworks that already protect human and machine accounts.
This is where SailPoint is leading the way with Agent Identity Security and Data Access Security. Discovering AI Agent infrastructure, the tools agents use (like machine identities or service accounts), and the access they hold provides several fundamental insights:
- What effective access does an agent get through interaction with the tools available to it?
- Who can interact with the agent and, as a result, could become over-privileged through the data the agent can expose to them?
- Is there privileged data in the dataset the agent has access to, and who owns that data?
These fundamental insights are the start of any governance process that de-risks agent adoption, limits any potential blast radius, and provides companies with the tools to clean up data to a point where it is safe for the specific agent to use. This allows the business to apply governance procedures and automated access management, an approach that will help enterprises scale AI safely, turning the abstract promise of autonomous systems into controlled, auditable business value.
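The first of those insights, effective access, can be reasoned about quite simply: an agent's real reach is the union of the entitlements of every tool and service account it can invoke. The sketch below illustrates the idea with hypothetical agent, tool, and entitlement names; it is not a representation of how SailPoint computes this.

```python
# Illustrative calculation of an agent's effective access through its tools.
AGENT_TOOLS = {
    "support-triage-agent": ["itsm_service_account", "ops_runbook_tool"],
}

TOOL_ENTITLEMENTS = {
    "itsm_service_account": {"tickets:read", "tickets:update"},
    "ops_runbook_tool": {"vm:restart", "db:read"},
}

def effective_access(agent: str) -> set[str]:
    """Union of entitlements reachable through the agent's tools."""
    access: set[str] = set()
    for tool in AGENT_TOOLS.get(agent, []):
        access |= TOOL_ENTITLEMENTS.get(tool, set())
    return access

if __name__ == "__main__":
    print(effective_access("support-triage-agent"))
    # e.g. {'tickets:read', 'tickets:update', 'vm:restart', 'db:read'}
```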
Many organisations are excited about agentic automation but stop short of production because the risks feel uncontrollable (Gartner, 2025). The blockers are practical and repeatable:
- Unknown attack surface: Agents often require broad, cross-system access (databases, CRMs, ticketing, cloud consoles). Without a clear inventory, teams fear silent exfiltration, destructive actions, or accidental privilege escalation.
- Lack of governance and ownership: Who signs off on an agent’s permissions? Who’s accountable if it misbehaves? Without baked-in ownership and lifecycle controls, security teams won’t greenlight deployments.
- Regulatory & compliance uncertainty: Agents that process PII, financials, or regulated data raise immediate legal questions. Organisations need auditable decision trails, not opaque agent behaviour.
- Operational fear of autonomy: Leaders worry about “agent runaway” scenarios (wrong automated changes, data corruption, deletion) where human oversight is too slow to stop damage. This is especially acute for workflows that modify production systems.
- Tooling and visibility gaps: Traditional IAM and observability tools weren’t designed for autonomous identities that can change behaviour rapidly; many organisations lack the telemetry and policy controls needed to detect agent misuse quickly.
Because of these gaps, many enterprises prefer limited automation in small parts of the process, orchestrated through manual interventions, or tightly constrained copilots rather than fully autonomous workers – until they can treat agents as governed, auditable identities.
Bringing agents under control starts with treating them like first-class identities. SailPoint’s Agent Identity Security provides a pragmatic roadmap for that transition: discover agents across cloud platforms, register them as identities, attach business context, and enforce lifecycle governance. This is just the start of the journey, and the platform is evolving, but there are key capabilities and recommended actions we can take today (a minimal sketch of an agent identity record follows the list):
- Automated discovery & inventory: Connect SailPoint to cloud providers and agent platforms to surface AI agents, service accounts, and associated keys. This converts hidden agents into manageable objects.
- Unique, verifiable identities: Register each agent with a dedicated identity (not shared credentials), enriched with owner, business purpose, and risk context. This enables accountability and traceability.
- Ownership & access certification: Assign one or multiple human owners who must certify the agent’s access and purpose. Automate periodic access reviews and revocations when roles change.
- Least-privilege & entitlement controls: Reduce blast radius by enforcing fine-grained entitlements and automated policy checks before provisioning agent access.
- Audit trails & revocation: Log agent actions centrally and maintain the ability to immediately revoke access when anomalous behaviour is detected.
- Cross-platform governance: Aggregate agents from AWS, Azure, Google and other systems into one governance plane so policy is consistent across cloud silos.
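To ground these capabilities, here is an illustrative data model for registering an agent as a governed identity with an owner, business purpose, entitlements, and a certification date. The field names and the 90-day review window are assumptions for illustration, not SailPoint's actual schema.

```python
# Illustrative agent identity record with a simple certification check.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentIdentity:
    agent_id: str                 # unique, verifiable identity (no shared credentials)
    owner: str                    # accountable human owner
    business_purpose: str
    entitlements: list[str] = field(default_factory=list)
    last_certified: date | None = None

    def needs_certification(self, today: date, max_age_days: int = 90) -> bool:
        """Flag the agent for an access review if it has never been certified
        or the last certification is older than the review period."""
        if self.last_certified is None:
            return True
        return (today - self.last_certified).days > max_age_days

agent = AgentIdentity(
    agent_id="support-triage-agent",
    owner="jane.doe@example.com",
    business_purpose="Triage and enrich ITSM tickets",
    entitlements=["tickets:read", "tickets:comment"],
)
print(agent.needs_certification(date.today()))  # True: never certified
```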
Implementing these steps lets organisations move from “fear” to “managed confidence” – agents can be deployed when they’re visible, owned, and auditable.
There is, however, one more aspect of how the market is evolving that deserves attention on its own. Data access governance was traditionally seen as detached from identity, but it has been moving steadily closer to Identity Security, especially as we shift to AI solutions that are powered by the data they have access to.
Generative models are probabilistic; sensible outputs rely on context. That same behaviour creates subtle data-exfiltration attack vectors. Consider, for example: what PII exists in the dataset? How would an agent know whether there is consent to use that PII? Who will authorise the use of that data, or should it be removed?
Lack of controls here could lead to a contextual leakage attack — not a breach of credentials, but a logic path that reconstructs sensitive facts from allowed information.
Or, if you let an agent loose on stale data – say, management meeting reports from the past 5 years – the model will learn from outdated decisions that no longer carry meaning and should not influence its results. How do you find stale data that you can safely remove from the dataset?
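One simple way to start answering that question is to flag documents that have not been modified within a retention window, so they can be reviewed and removed from the agent's dataset. The sketch below is a bare-minimum example; the path and five-year window are placeholders, and real deployments would typically combine file age with access patterns and business relevance.

```python
# Minimal stale-data detection: list files older than a retention window.
from datetime import datetime, timedelta
from pathlib import Path

def find_stale_documents(root: str, max_age_days: int = 5 * 365) -> list[Path]:
    cutoff = datetime.now() - timedelta(days=max_age_days)
    stale = []
    for path in Path(root).rglob("*"):
        if path.is_file() and datetime.fromtimestamp(path.stat().st_mtime) < cutoff:
            stale.append(path)
    return stale

if __name__ == "__main__":
    for doc in find_stale_documents("/data/meeting-reports"):
        print(f"candidate for removal: {doc}")
```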
Practical mitigations you can apply today (a short policy-enforcement sketch follows the list):
- Data classification & policy mapping: Classify sensitive fields (salaries, PII, financial metrics) and map them to explicit agent policies. Block any agent queries that could combine non-sensitive public data with allowed internal values to reconstruct sensitive outputs.
- Least-privilege data access: Give agents only the minimal dataset they need (tokenised, partial views, synthetic substitutes where possible). Avoid exposing raw sensitive datasets to agents unless absolutely necessary and audited.
- Monitoring, anomaly detection & rapid revocation: Monitor unusual query patterns, high-volume aggregations, or attempts to chain multiple queries to reconstruct sensitive values; revoke agent access automatically on suspicious behaviour.
- Human-in-the-loop for high-risk answers: For any output that could materially affect compliance or privacy, require human sign-off before the result is actionable or persisted.
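As an example of the first two mitigations, the sketch below classifies sensitive fields and rejects agent queries whose combined fields could reconstruct them. The field names, labels, and the forbidden combination are illustrative assumptions; in practice these policies would come from your data classification tooling.

```python
# Illustrative policy check: block direct and combinatorial access to sensitive fields.
SENSITIVE_FIELDS = {"salary", "national_id", "bank_account"}

# Combinations of individually "allowed" fields that together become sensitive.
FORBIDDEN_COMBINATIONS = [
    {"department", "job_title", "pay_band"},   # could reconstruct salary
]

def is_query_allowed(requested_fields: set[str]) -> bool:
    if requested_fields & SENSITIVE_FIELDS:
        return False  # direct access to classified fields is blocked
    for combo in FORBIDDEN_COMBINATIONS:
        if combo <= requested_fields:
            return False  # combination could reconstruct a sensitive value
    return True

print(is_query_allowed({"department", "headcount"}))              # True
print(is_query_allowed({"department", "job_title", "pay_band"}))  # False
```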
These controls form a layered defence: prevent direct exposure with policies and masking, detect coercion with monitoring and prompt hardening, and limit damage with least privilege and revocation. Combined with agent identity governance (discovery, ownership, access reviews), organisations can deploy agents while keeping sensitive data protected.
As agentic AI solutions mature, organisations should expect – and demand – more robust real-time monitoring, anomaly detection, and rapid access revocation. Capabilities such as tracking unusual query patterns, high-volume data aggregations, or attempts to chain queries for sensitive data reconstruction will become essential safeguards. Automated responses to suspicious behaviour will be critical to maintaining trust and compliance as agents become more deeply embedded in business processes.
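A very simple version of such a safeguard is a rate-based anomaly rule: if an agent's query volume within a time window exceeds a threshold, its access is suspended pending review. The sketch below assumes hypothetical threshold and window values, and revoke_access is a placeholder hook into the identity platform.

```python
# Illustrative rate-based anomaly rule with automatic access revocation.
from collections import deque
from datetime import datetime, timedelta

class QueryRateMonitor:
    def __init__(self, max_queries: int = 100, window_minutes: int = 5):
        self.max_queries = max_queries
        self.window = timedelta(minutes=window_minutes)
        self.events: deque[datetime] = deque()

    def record_query(self, agent_id: str) -> None:
        now = datetime.now()
        self.events.append(now)
        # Drop events that fall outside the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) > self.max_queries:
            self.revoke_access(agent_id)

    def revoke_access(self, agent_id: str) -> None:
        # Placeholder: call the identity platform to disable the agent's credentials.
        print(f"suspicious query volume detected, revoking access for {agent_id}")
```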
While this article has focused on the foundational aspects of agent identity and data governance, it’s important to acknowledge that the landscape is rapidly evolving. Integration layers like the Model Context Protocol (MCP) and emerging alternatives present their own unique challenges and opportunities. Staying ahead will require ongoing vigilance, adaptability, and a commitment to continuous improvement in both technology and policy.
Ultimately, the successful adoption of agentic AI hinges on treating these programmes as strategic, enterprise-wide initiatives – never as isolated IT experiments. Identity Security specialists must take a leading role, ensuring that governance, accountability, and risk management are built into every phase of deployment. By doing so, organisations can unlock the transformative potential of autonomous agents while safeguarding their most valuable assets and maintaining the trust of stakeholders.
Ready to move beyond the hype?


Join CyberIAM and SailPoint for an exclusive Agentic AI Roundtable where we unpack what agentic AI really means for the enterprise, and how to operationalise it securely and at scale.
Curious about what actually powers today’s AI revolution?
Dive into the technology behind LLMs, embeddings, and autonomous agents — and learn how enterprises should be thinking about AI at an architectural level.

