By Alex Santos, CyberIAM CTO

The Frontiers of Agentic AI

It’s been two decades since I first encountered an Intelligent Agent – a speech-driven personal assistant developed by one of my university professors during his PhD research. It was an impressive demo for its time, and it inspired me to keep a close eye on machine learning developments ever since. I mention this anecdote to introduce a concept that might not be familiar to everyone: although often overlooked, Agents have been around for decades.

An agent is an application that receives or collects data, makes decisions based on a set of rules, and autonomously interacts with other systems to initiate further actions. We have been securing some of these as machine identities for many years now. Advances in Large Language Models have significantly changed the game: Agents can now understand natural language with unprecedented sophistication. Instead of relying on predefined rules, they can take input from users in a conversation (whether written or spoken) and dynamically adjust their responses and the operations they execute within downstream systems.
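The classic receive–decide–act loop described above can be sketched in a few lines. Everything here (the sensor reading, the rules, the actions) is illustrative rather than any real product’s API:

```python
# Minimal sketch of a classic rule-based agent loop: receive data,
# match rules, act autonomously on downstream systems.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SensorReading:
    metric: str
    value: float

# Each rule pairs a condition with an action to trigger downstream.
Rule = tuple[Callable[[SensorReading], bool], Callable[[SensorReading], str]]

rules: list[Rule] = [
    (lambda r: r.metric == "cpu" and r.value > 0.9,
     lambda r: f"scale-out triggered (cpu={r.value:.0%})"),
    (lambda r: r.metric == "disk" and r.value > 0.8,
     lambda r: f"cleanup job queued (disk={r.value:.0%})"),
]

def agent_step(reading: SensorReading) -> str:
    """Receive data, match against rules, initiate a further action."""
    for condition, action in rules:
        if condition(reading):
            return action(reading)  # in practice: call another system's API
    return "no action"

print(agent_step(SensorReading("cpu", 0.95)))  # scale-out triggered (cpu=95%)
```

An LLM-driven agent replaces the hand-written condition table with a model that interprets a conversational request, but the loop – observe, decide, act on other systems – stays the same, which is why these still need securing as identities.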

Before we get too excited or scared: this is still not the sentience singularity we’ve been warned about in many dystopian sci-fi stories. Agents are still limited to specific goals and the tools available to them. There’s a lot of uncertainty about how quickly AI Agents will continue to evolve, and definitions are still emerging across the industry.

Not all agents are the same

We are seeing patterns emerge in how AI solutions get access to data, such as the Model Context Protocol (MCP) being rapidly adopted to standardise how AI solutions obtain additional context and invoke other systems. The Agent2Agent (A2A) protocol is another example of an emerging standard for autonomous agent collaboration.
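To make the standardisation concrete: MCP is built on JSON-RPC 2.0 and exposes tool invocation through a `tools/call` method. The sketch below builds such a request; the tool name and arguments are hypothetical, not from any real MCP server:

```python
# Sketch of the JSON-RPC 2.0 message shape MCP uses for tool invocation.
# "tools/call" and the params layout follow the public MCP spec;
# the tool name and its arguments are invented for illustration.
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise an MCP-style tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = build_tool_call(1, "lookup_user", {"email": "alice@example.com"})
print(msg)
```

The security-relevant point is that every such call names a concrete tool and carries concrete arguments – exactly the kind of request an identity-aware control point can inspect and authorise.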

To secure agents, we first need to narrow down the definition into types of agents based on what they do. For our purposes here, let’s consolidate what analysts and vendors have been discussing into these categories:


  • Personal Agents: act on behalf of individuals (via delegation or impersonation)
  • Assistants: can perform tasks across multiple systems, but are still guided by the user
  • Workers: can autonomously perform tasks while keeping humans in the loop for critical workflows


Each of these presents different challenges.


  • Workers and Assistants need to be secured as identities with direct access to systems
  • Assistants might have access to multiple MCP servers which expose resources and tools
  • Personal Agents might even be difficult to discover, depending on how the user gains access to them and whether they have delegated access or impersonate the user

A discovery of agents

We can’t secure what we don’t know.

No matter how fascinating or clever they appear to be, Agents are still applications – mostly web apps operating over HTTPS, REST, and WebSockets, storing data in a database, and calling out to LLM models or MCP servers. All of these components are deployed somewhere, with entitlements that can be secured. This is why vendors such as BeyondTrust, CyberArk, SailPoint and Saviynt are either working on AI Agent discovery solutions or already deploying them.

Each vendor will naturally bring its own perspective on how these problems should be solved, but the principles remain the same: we need visibility across all identities, human or non-human, and the means to apply controls and governance. However, AI is a new attack vector that comes with its own unique challenges.

AI has no ethics; it has access

Generative AI is still probabilistic at its core and will only be as good (and as dangerous) as the context it is given. We can limit the output of sensitive data with multiple layers of security, validation, and prompt engineering, but model jailbreaking and coercion will remain a risk. For example, the AI chat solution in my company might not tell me the CEO’s salary outright, but if I ask how many sports cars of a specific model the CEO could buy in a year, it might just give me the numbers I’m looking for.
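The limits of output filtering can be shown with a deliberately naive sketch. The sensitive value and labels below are invented for illustration; real defences layer many more checks than this:

```python
# A naive output filter: block responses that contain values tagged
# sensitive. The second example shows why string matching alone fails -
# the "sports cars" style of indirect query leaks the same information
# without ever emitting the guarded value. All values are illustrative.
SENSITIVE_VALUES = {"ceo_salary": "350000"}

def filter_response(text: str) -> str:
    """Redact any response that literally contains a guarded value."""
    for label, value in SENSITIVE_VALUES.items():
        if value in text:
            return f"[redacted: response matched {label}]"
    return text

print(filter_response("The CEO earns 350000 per year."))
# caught and redacted by the filter
print(filter_response("The CEO could buy 2 such cars per year."))
# passes the filter, yet leaks the same information indirectly
```

This is why the article argues that filtering and prompt engineering alone are insufficient, and access control has to move upstream of the model.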

Visibility and governance are not enough. Access control must continue evolving towards context-based, policy-driven solutions that can evaluate risk changes in real time and act to limit an AI Agent’s access to sensitive resources and the tools exposed by MCP Servers, as well as elevate more traditional IGA and PAM deployments to limit access for both human and non-human identities – SGNL being a prime example of a vendor enabling these principles.

Data access governance is a key factor in all of this, even though we might not have thought of it as an Identity Security component until recently. Automated data classification is essential before deploying Agentic solutions and granting them access to data.
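As a toy example of classification running ahead of agent access, a tagger might label records so that policy can restrict what an agent may read. The patterns and labels here are illustrative only:

```python
# Toy sketch of automated data classification prior to agent onboarding:
# tag each record so downstream policy can restrict what an agent reads.
# The detection patterns and label names are illustrative, not exhaustive.
import re

PATTERNS = {
    "pii.email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "pii.ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def classify(record: str) -> set[str]:
    """Return the set of sensitivity labels detected in a record."""
    return {label for label, rx in PATTERNS.items() if rx.search(record)}

print(classify("Contact alice@example.com, NI AB123456C"))
```

In practice the labels produced here would feed the sensitivity field that a context-based access policy evaluates, closing the loop between data governance and identity security.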

AI is too big an attack vector to ignore, with both threats and solutions continuing to evolve at a rapid pace over the coming months, and Identity Security is at the centre of enabling organisations to confidently harness the power of intelligent agents.

Would you like to discuss Agentic AI further with our experts?


Book a meeting with our team and we will gladly help you with any of your identity security needs.
