Why AI agents are triggering a rethink of enterprise identity


The Computer Weekly Security Think Tank considers the intersection of AI and IAM. In this article, we look at the specific impacts of agentic AI on the security stack.

In the rapidly evolving world of technology, many organisations are still in the nascent stages of Artificial Intelligence (AI) maturity. Their focus is primarily on governance and establishing basic controls around these emerging technologies. One of the most significant challenges they face is securely integrating automation and AI into their existing enterprise systems. As the attack surfaces driven by AI expand, the concept of identity becomes a fundamental control for securing automation and, crucially, for limiting the damage when things go awry. Mistakes are inevitable, but the objective of modern identity design is to ensure that the impact is contained and recoverable.

The swift emergence of AI agents is shifting identity controls from the traditional “bouncer at the door” model towards continuous, context-aware evaluation throughout your systems and processes. In the past, once a user or service was authenticated and received a token, that token could be replayed freely until its expiry, sometimes for hours or even days, without the platform rechecking whether anything significant had changed about the subject’s standing. This model is no longer viable.

AI is not merely adding a new user type to identity and access management (IAM); it is compelling organisations to redesign identity as a continuous control plane for humans, workloads, and agents alike. In this continuous evaluation model, a valid token is still necessary but not sufficient on its own. When a token is presented, centrally defined policies should confirm that the subject and its context still meet all the requirements at that moment. These checks can include whether the identity is still active, whether it has been flagged as high risk, whether the IP address or location has changed unexpectedly, whether device posture has degraded, and whether new threat intelligence suggests compromise. Evaluating these signals at the edge can significantly reduce the window of identity abuse. This approach applies equally to human users, machine workloads, and the emerging hybrid identities created by agentic AI acting either autonomously or on behalf of a user (human in the loop).
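The checks above can be expressed as a policy decision that runs on every token presentation, not just at login. The following is a minimal sketch; the signal names and the deny-by-default logic are illustrative assumptions, not tied to any specific IAM product, and a real policy engine might step up authentication rather than simply deny.

```python
from dataclasses import dataclass

# Hypothetical context signals gathered at the moment a token is presented.
# Field names are illustrative, not taken from any particular platform.
@dataclass
class RequestContext:
    identity_active: bool    # has the account been locked or disabled?
    risk_level: str          # e.g. "low", "medium", "high"
    ip_changed: bool         # unexpected IP or location change
    device_compliant: bool   # device posture still meets policy
    threat_intel_hit: bool   # new intelligence suggests compromise

def evaluate(token_valid: bool, ctx: RequestContext) -> bool:
    """A valid token is necessary but not sufficient: every signal
    must still pass at the moment the token is presented."""
    if not token_valid:
        return False
    if not ctx.identity_active:
        return False
    if ctx.risk_level == "high" or ctx.threat_intel_hit:
        return False
    if ctx.ip_changed or not ctx.device_compliant:
        return False  # a real policy might require step-up auth instead
    return True
```

The key design point is that possession of the token is only the first of several conditions, so a mid-session lockout or risk change takes effect at the next request rather than at token expiry.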

To address this, enterprises need to treat users, machine workloads, and large language model (LLM)‑driven agents as first‑class identities, governed under a unified zero‑trust model. That means least privilege by default, short-lived credentials, explicit delegation, and end‑to‑end auditability rather than allowing agents to become convenient but ungoverned circumventions of established controls.
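Short-lived credentials and explicit delegation can be combined in a single token. The sketch below builds only the token payload (a real credential would be signed by the identity provider); it borrows the `act` (actor) claim from the OAuth 2.0 Token Exchange specification (RFC 8693) to record which agent is acting on whose behalf. The function name and default lifetime are assumptions for illustration.

```python
import time
import uuid

def mint_agent_token(user_sub: str, agent_id: str, scopes: list[str],
                     ttl_seconds: int = 300) -> dict:
    """Sketch of a short-lived delegated credential for an AI agent.

    The 'act' claim (RFC 8693) makes the delegation explicit: the token's
    subject is the human, and the actor is the agent working for them.
    """
    now = int(time.time())
    return {
        "sub": user_sub,              # the human the agent acts for
        "act": {"sub": agent_id},     # explicit delegation: who is acting
        "scope": " ".join(scopes),    # least privilege: only needed scopes
        "iat": now,
        "exp": now + ttl_seconds,     # short-lived by default (5 minutes)
        "jti": str(uuid.uuid4()),     # unique ID for end-to-end audit trails
    }
```

Because every token carries a unique `jti` and a distinct actor, audit logs can separate "what the user did" from "what the agent did on the user's behalf", which is exactly the traceability a unified zero-trust model requires.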

So, what does this evolving world of identity look like in practice?

Centralised identity remains the starting point, for example a Microsoft Entra tenant. The next step is edge verification and continuous validation throughout the lifetime of a session or workflow. This becomes especially important for long‑running agentic processes: if an agent runs a large task for hours, or continuously, what happens if the underlying account is locked, its risk posture changes, or its permissions should be reduced mid-execution?
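One answer is for the agent's runtime to re-check identity standing before every step rather than only at launch. The loop below is a minimal sketch under assumed interfaces: `check_identity` and `refresh_credential` are hypothetical callbacks standing in for whatever the identity platform exposes.

```python
def run_agent(steps, check_identity, refresh_credential):
    """Run a long-lived agent task, re-validating identity before each step.

    steps: callables that each perform one unit of work with a credential.
    check_identity: returns e.g. {"active": bool, "needs_refresh": bool}.
    refresh_credential: mints a fresh short-lived credential, which will
    reflect any permissions reduced since the previous one was issued.
    """
    credential = refresh_credential()
    results = []
    for step in steps:
        status = check_identity()  # account still active? risk unchanged?
        if not status["active"]:
            # a mid-execution lockout halts the agent immediately,
            # instead of letting it coast until token expiry
            raise PermissionError("identity locked mid-execution; halting")
        if status["needs_refresh"]:
            credential = refresh_credential()  # pick up reduced permissions
        results.append(step(credential))
    return results
```

The containment property follows directly: locking the account or shrinking its permissions takes effect at the next step boundary, not hours later when the original token finally expires.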

These emerging concepts are transforming how identity is perceived and handled in the digital realm. The shift from a static, one-time authentication model to dynamic, continuous evaluation is a significant step forward in securing digital identities. As organisations continue to integrate AI and automation into their systems, they must ensure that these technologies are securely embedded and that the potential for damage is minimised when things go wrong. The future of identity management lies in a unified zero-trust model that governs all identities – human, machine, and AI – on equal terms, preserving the security and integrity of the digital estate.
