AI Agent Identity Crisis: Standards Emerge as Enterprises Lag

Authors: Cloud Security Alliance AI Safety Initiative
Published: 2026-03-18

Categories: Agentic AI Security, Identity and Access Management, Non-Human Identity, Cloud Security


Okta’s Agentic Identity Framework and the Non-Human Identity Governance Gap



Key Takeaways

  • A March 2026 CSA and Strata Identity survey of 285 IT and security professionals found that only 18% of security leaders report high confidence that their current IAM infrastructure can handle AI agent identities, while 84% doubt they could pass a compliance audit focused on agent behavior or access controls [1].
  • Okta’s March 16, 2026 Showcase 2026 announcement introduced a dedicated “Identity Security Fabric” and its Cross App Access (XAA) protocol, representing the first major general-purpose IAM platform to publish an open cross-system protocol for AI agent identity federation alongside a comprehensive agent lifecycle management offering [2].
  • The Gravitee State of AI Agent Security 2026 survey found that 88% of organizations reported confirmed or suspected AI agent security incidents in the prior year, while only 22% treat AI agents as independent, identity-bearing entities rather than relying on shared API keys or inherited credentials [3].
  • A January 2026 CSA and Oasis Security survey of 383 IT and security professionals found that 92% are not confident that their legacy IAM solutions can effectively manage AI and non-human identity risks, and 78% have no documented, formally adopted policies for creating or removing AI agent identities [4].
  • The standards ecosystem is converging rapidly: the IETF published draft AIMS (Agent Identity Management System) on March 2, 2026, composing WIMSE, SPIFFE/SPIRE, and OAuth 2.0 into a unified framework, while NIST NCCoE published a companion concept paper on February 5, 2026, calling for public comment through April 2, 2026 [5][6].
  • Organizations should treat AI agents as first-class identity principals — inventoried, individually credentialed, and subject to the same governance lifecycle as human accounts — beginning with agent discovery, then policy formalization, and then enforcement of least privilege and just-in-time access.

Background

The Non-Human Identity Scale Problem

Enterprise identity management was designed for humans and the applications they use. Service accounts and API keys were long treated as a narrow supporting category — few in number, relatively static, and attached to well-understood processes. The arrival of AI agents has invalidated that assumption faster than most security organizations have been able to adapt.

Research from Rubrik Zero Labs indicates that AI agents and non-human identities now outnumber human users by 82 to 1 in enterprise environments [7]. These identities are not distributed evenly or predictably: agents proliferate across public clouds, on-premises systems, and SaaS platforms, often provisioned by individual teams and application owners without coordination from central security functions. Current adoption trajectories suggest this ratio will grow substantially as organizations expand their use of agentic AI, yet the governance infrastructure to manage them remains largely absent.

The consequences are measurable. The January 2026 CSA and Oasis Security survey found that 79% of IT professionals feel ill-equipped to prevent attacks via non-human identities [4]. Entro Security’s 2025 State of Non-Human Identities report found that 97% of non-human identities carry excessive privileges [8]. Obsidian Security’s 2025 AI Agent Security Landscape report found that 90% of deployed AI agents are over-permissioned relative to the actual scope of their assigned tasks [9]. These figures suggest that the non-human identity surface is not merely large but structurally misconfigured. Security practitioners have observed that machine identity compromise is increasingly exploited as a lateral movement vector, in part because service account and API key abuse may generate fewer alerts than attacks targeting human accounts. The surveys cited throughout this note reflect respondent self-reports from IT and security professionals; while consistent with standard security industry research methodology, sample sizes of 285 and 383 should be considered when interpreting the prevalence estimates.

Why Traditional IAM Fails Agentic Workloads

Traditional identity and access management was built around stable, long-lived principals. A human account is provisioned at hire, adjusted incrementally, and de-provisioned at departure; a service account is attached to a specific application and rotated on a schedule. Both models assume that the principal’s scope of behavior is bounded, its session patterns are predictable, and its actions can be reviewed by the person who owns the account. In complex agentic deployments, none of these assumptions hold.

The most capable AI agents in multi-agent orchestration deployments are autonomous and recursive. A single agent execution may invoke tools across cloud APIs, SaaS platforms, internal databases, and other agents, with each step requiring its own authorization decision. The agent may act in a user’s name at one moment and as an independent principal at the next. It may spawn sub-agents, each of which inherits or derives a credential context from the parent, creating delegation chains that extend across system boundaries and administrative domains that were never designed to trust each other. The ISACA analysis “The Looming Authorization Crisis: Why Traditional IAM Fails Agentic AI,” published in 2025, characterizes this as a categorical mismatch — not a configuration problem solvable by extending existing tools, but a structural incompatibility between the assumptions embedded in legacy IAM and the operational patterns that characterize agentic workloads [10].

The CSA and Strata Identity survey quantifies how organizations are responding to this mismatch: nearly half are extending human IAM models to agents without architectural adaptation, a pattern that preserves the efficiency of re-using existing infrastructure but creates exploitable mismatches between assumed privilege scope and actual agent behavior [1]. Only 21% of organizations maintain a real-time registry or inventory of active agents, and only 28% can trace agent actions back to a human sponsor across all environments [1] — leaving the majority unable to answer even basic accountability questions about their deployed agent populations.


Security Analysis

The Okta Identity Security Fabric

On March 16, 2026, Okta announced its strategic blueprint for the “secure agentic enterprise” at Showcase 2026, articulating the core governance problem through three organizing questions: Where are my agents? What can they connect to? What can they do? [2] The framework, which Okta calls the Identity Security Fabric, addresses all three through a unified platform for registering, authenticating, and governing AI agent identities across their full operational lifecycle.

A central technical component of the Showcase 2026 announcement is Cross App Access (XAA), a new open protocol Okta introduced to standardize how AI agents and applications connect securely across system boundaries. XAA targets a longstanding architectural gap in agentic deployments: agents routinely need to access resources across multiple platforms and organizational domains, but there is no standard mechanism for expressing or enforcing authorization as an agent crosses those boundaries. Proprietary integrations fill the gap today, but they do not interoperate, cannot be audited uniformly, and create exactly the kind of fragmented credential ecosystem that attackers exploit. XAA’s stated goal is to replace proprietary connectors with a shared protocol that preserves the identity chain as an agent moves between systems.

Okta also announced general availability of its dedicated Okta for AI Agents platform, which provides formal lifecycle management for agent identities: provisioning through a central registry, authentication using short-lived credentials rather than static API keys, and governance through policy controls that restrict what each agent can access and do. The complementary Okta Privileged Access component applies just-in-time access controls and policy enforcement to agents using static credentials — service accounts or API keys — that cannot yet be migrated to the newer credential model. Okta additionally published a structured knowledge resource, the Agentic AI Framework, providing guidance on governing agents through the provisioning, operation, and de-provisioning lifecycle.

Okta and CyberArk — the latter of which made its Secure AI Agents Solution generally available in November 2025 — are the leading early commercial platforms to treat AI agent identity as a first-class product. What distinguishes Okta’s Showcase 2026 announcement is the introduction of XAA, an open cross-system protocol intended to enable federation across disparate agent identity platforms. This matters for enterprises because it signals that commercial tooling is now available to operationalize agent identity governance, reducing the dependency on bespoke engineering that has characterized early deployments.

The Emerging Standards Landscape

Parallel to commercial platform development, standards bodies have accelerated work on interoperable agent identity protocols throughout late 2025 and early 2026, with the most consequential activity concentrated in the first quarter of 2026.

The OpenID Foundation published “Identity Management for Agentic AI” in October 2025, establishing a foundational position: current OAuth 2.0 and OIDC standards are already capable of securing many AI agent use cases when agents operate within well-defined boundaries, but three gaps require new work [11]. First, agents can switch between acting independently and acting on behalf of a human user, but current systems cannot track which mode is active for a given action. Second, fragmented proprietary identity systems are emerging in the absence of shared standards, creating interoperability barriers that will compound as agent populations grow. Third, recursive delegation — agents spawning sub-agents, each of which may further delegate — creates permission chains without principled limits on depth or scope. Building on this foundation, one proposal — OpenID Connect for Agents (OIDC-A) 1.0 — has been circulated in the community to extend the existing specification with claims, endpoints, and protocols covering agent identity establishment, attestation, delegation chains, and fine-grained authorization based on agent attributes [12]. While not yet an official OpenID Foundation specification, the proposal illustrates the direction the community is exploring for standardizing agent-specific OIDC extensions.
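
To make the recursive-delegation gap concrete, the sketch below enforces a depth limit and a no-escalation rule over an ordered chain of delegation records. The record fields (`sub`, `scopes`) and the depth limit of three are illustrative assumptions, not claim names from any published specification:

```python
# Sketch: bounding an agent delegation chain by depth and scope.
# Record shape and the depth limit are hypothetical, for illustration only.

MAX_DELEGATION_DEPTH = 3

def verify_delegation_chain(chain, requested_scope):
    """Reject chains that are too deep or that widen scope at any hop.

    `chain` is ordered from the original (human) principal to the
    currently acting agent; each entry lists the scopes granted to
    that hop.
    """
    if len(chain) > MAX_DELEGATION_DEPTH:
        return False, "delegation chain exceeds maximum depth"
    allowed = set(chain[0]["scopes"])
    for hop in chain[1:]:
        hop_scopes = set(hop["scopes"])
        if not hop_scopes <= allowed:  # a sub-agent must never gain scope
            return False, "scope escalation at " + hop["sub"]
        allowed = hop_scopes
    if requested_scope not in allowed:
        return False, "requested scope not granted to acting agent"
    return True, "ok"

chain = [
    {"sub": "user:alice", "scopes": ["crm:read", "crm:write"]},
    {"sub": "agent:planner", "scopes": ["crm:read"]},
    {"sub": "agent:retriever", "scopes": ["crm:read"]},
]
ok, reason = verify_delegation_chain(chain, "crm:read")
```

A real implementation would carry these records as cryptographically verified token claims; the point here is only that each hop must narrow, never widen, the scope it inherits.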

NIST’s National Cybersecurity Center of Excellence published a concept paper on February 5, 2026, titled “Accelerating the Adoption of Software and AI Agent Identity and Authorization,” with a public comment period running through April 2, 2026 [6]. The paper identifies Model Context Protocol, OAuth 2.0/2.1, OIDC, SPIFFE/SPIRE, and SCIM as candidate standards for agent identity, and anticipates a full demonstration project to follow. NIST’s Center for AI Standards and Innovation (CAISI) is also engaged in developing standards for AI agent authentication and authorization through multi-stakeholder outreach spanning government, industry, and academia. Together, these efforts signal that a standards-based foundation for agent identity is within reach — but is not yet settled.

The most technically comprehensive standards artifact to date is the IETF’s draft-klrc-aiagent-auth-00, published March 2, 2026, which defines AIMS (Agent Identity Management System) [5]. AIMS composes three existing IETF standards — WIMSE (Workload Identity in Multi-System Environments), SPIFFE/SPIRE, and OAuth 2.0 — into a 26-page framework addressing cross-system agent authentication, delegation chain verification, and fine-grained authorization. The IETF’s WIMSE working group, which is chartered to address least-privilege access control for workloads across multiple service platforms, has also published a companion draft, draft-ni-wimse-ai-agent-identity, applying the WIMSE model specifically to AI agents [13]. The core thesis shared across these IETF drafts is that AI agents should be treated as workloads using existing workload identity protocols, rather than requiring new identity categories — a position that reduces migration cost for organizations already investing in workload identity infrastructure.

SPIFFE and SPIRE are particularly relevant to enterprises managing large agent populations because they provide cryptographic workload identity — each agent receives a SPIFFE ID and certificate from a central SPIRE server — that enables mutual TLS-based authentication without shared secrets or static credentials. HashiCorp published dedicated guidance in 2025 on applying SPIFFE/SPIRE to AI agents and non-human workloads, making SPIFFE/SPIRE a mature, production-available path for organizations that need to scale agent identity without depending on a specific commercial IAM platform [14].

Privilege Management and the Over-Permissioning Pattern

The most operationally urgent aspect of the enterprise AI agent identity gap is not authentication — it is authorization. Many organizations have addressed authentication — the “who is this agent” question — through static credentials, a solution that is functional but insecure; almost none have adequately addressed the “what is this agent allowed to do” question in a manner proportionate to the agent’s actual operational scope.

CyberArk, which announced general availability of its Secure AI Agents Solution in November 2025, describes the problem in terms that align with the survey data: AI agents are routinely granted standing access to resources they will only need for a specific task, and that standing access persists after the task completes [15]. The consequence is a continuously expanding privilege footprint that grows with each agent deployment. Entro Security’s finding that 97% of non-human identities carry excessive privileges [8], and Obsidian Security’s finding that 90% of AI agents are over-permissioned [9], are not outlier observations — they reflect the architectural reality of a model in which the default is to provision broadly and restrict narrowly only when something goes wrong.

Multiple security authorities, including Microsoft [16], NIST’s NCCoE [6], and CyberArk [15], identify Zero Standing Privilege (ZSP) and just-in-time (JIT) access as leading remediation approaches for agent over-permissioning. While implementation details vary across platforms, the directional alignment is notable. Under ZSP, AI agents receive no persistent access to sensitive resources; permissions are granted only for the duration of a specific task and revoked automatically upon completion. JIT access extends this to the provisioning layer: rather than creating persistent agent credentials attached to broad permissions, the platform provisions a short-lived token scoped to the immediate task, cryptographically tied to the requesting agent’s identity.
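
A minimal sketch of the ZSP/JIT pattern described above, with an in-memory broker standing in for a real credential platform; the agent names, resources, and 300-second TTL are assumptions for illustration:

```python
import secrets
import time

# Sketch of a just-in-time credential broker under Zero Standing
# Privilege: no agent holds a persistent grant; a token is minted per
# task, scoped to one resource, and expires on its own.

TOKEN_TTL_SECONDS = 300
_active_grants = {}

def issue_jit_token(agent_id: str, resource: str, now=None) -> str:
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(16)
    _active_grants[token] = {
        "agent": agent_id,
        "resource": resource,              # scoped to a single resource
        "expires_at": now + TOKEN_TTL_SECONDS,
    }
    return token

def authorize(token: str, agent_id: str, resource: str, now=None) -> bool:
    now = time.time() if now is None else now
    grant = _active_grants.get(token)
    return (
        grant is not None
        and grant["agent"] == agent_id      # tied to the requesting agent
        and grant["resource"] == resource   # no lateral reuse
        and now < grant["expires_at"]       # expires without manual rotation
    )

def revoke(token: str) -> None:
    _active_grants.pop(token, None)  # explicit revocation on task completion
```

Contrast this with a static API key: the broker's token cannot be replayed against a second resource, cannot be used by a second agent, and stops working on its own when the task window closes.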

The CSA and Strata Identity survey finding that 44% of organizations use or plan to use static API keys for agent authentication — and 43% use username/password combinations — indicates that most deployments are operating at the opposite end of the spectrum from ZSP [1]. Static credentials do not expire, cannot be scoped to a specific task, and when compromised, provide persistent access until manually rotated. In the context of agents that may operate across dozens of systems, a single compromised static credential can provide the lateral movement foundation for a significant breach.


Recommendations

Immediate Actions

Organizations should begin with agent discovery before attempting to remediate governance gaps. The 79% of organizations that lack a real-time registry of active agents cannot enforce policy on agents they cannot enumerate [1]. Discovery should cover all deployment patterns — SaaS-embedded agents, cloud-hosted autonomous agents, coding tools with infrastructure access, and orchestration frameworks that spawn agents dynamically. The objective is a complete inventory of agent identities, the credentials they hold, the systems they can access, and the human sponsors accountable for their behavior. Without this inventory, all downstream governance actions are incomplete.
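
As one illustration of the discovery objective, the sketch below defines a minimal inventory record carrying the fields listed above and flags agents with no accountable sponsor. All field names and agent names are hypothetical:

```python
from dataclasses import dataclass

# A minimal agent-inventory record: the identity, the credentials it
# holds, the systems it can reach, and an accountable human sponsor.
# Field names are illustrative, not drawn from any particular product.

@dataclass
class AgentRecord:
    agent_id: str
    credentials: list       # e.g. ["oauth-client:crm", "api-key:billing"]
    reachable_systems: list
    human_sponsor: str      # accountability anchor for every agent

def unaccountable(inventory):
    """Agents with no named sponsor: the gap the survey data flags."""
    return [r.agent_id for r in inventory if not r.human_sponsor]

inventory = [
    AgentRecord("agent:invoice-bot", ["api-key:billing"], ["billing"], "j.doe"),
    AgentRecord("agent:scraper", ["api-key:web"], ["crm", "web"], ""),
]
```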

Once agents are inventoried, organizations should prioritize credential hygiene for the highest-risk agents: those with access to production systems, financial data, customer records, or critical infrastructure. Static API keys and shared service accounts should be replaced with individually scoped, short-lived credentials following the OAuth 2.0 On-Behalf-Of flow for agents acting in a user’s name, and SPIFFE/SPIRE SVIDs for autonomous agents that require machine identity without human delegation. CyberArk’s Secure AI Agents Solution and Okta’s Okta for AI Agents platform both provide commercial tooling to operationalize this transition for organizations without the engineering capacity to build it from primitives.
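
For agents acting in a user's name, the on-behalf-of pattern is commonly implemented as an RFC 8693 token exchange. The sketch below only builds the request body: the grant-type and token-type URNs are the values RFC 8693 defines, while the subject token, audience, and scope are placeholders:

```python
from urllib.parse import urlencode

# Sketch: an RFC 8693 token-exchange request body. The URN constants
# are defined by RFC 8693; everything else here is an illustrative
# placeholder.

def build_token_exchange_body(subject_token: str, audience: str) -> str:
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,       # the user's incoming token
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,                 # the downstream system
        "scope": "crm:read",                  # least privilege: one scope
    })

body = build_token_exchange_body("eyJ...user-token", "https://crm.example.com")
```

POSTing this body to the authorization server's token endpoint, with client authentication, yields a short-lived token scoped to the single downstream call, preserving the user-to-agent identity chain instead of falling back to a shared static key.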

Organizations should also establish a formal incident response playbook for AI agent identity compromise. The Gravitee survey finding that 88% of organizations have experienced confirmed or suspected agent security incidents [3] indicates that response is not a theoretical exercise. A playbook should address: how to identify which agents were active during a suspected incident, how to revoke agent credentials without disrupting dependent systems, how to reconstruct an agent’s action log for forensic analysis, and who holds accountability for communicating the incident to affected stakeholders.

Short-Term Mitigations

Formalizing policy for the AI agent identity lifecycle is a prerequisite for scalable governance. The CSA and Oasis Security survey finding that 78% of organizations have no documented policies for creating or removing AI agent identities [4] creates a compounding risk: without policy, individual teams provision agents according to local convenience rather than organizational security standards, and de-provisioning is deferred or forgotten. A minimum-viable policy should define the approval workflow for provisioning a new agent identity, the required attributes for each identity in the registry (owning team, sponsoring human, associated systems, maximum privilege scope, and expiration date), and the de-provisioning trigger and process.
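
A minimum-viable policy of this kind can be enforced mechanically at registration time. A sketch, assuming illustrative attribute names for the five required fields:

```python
from datetime import date

# Sketch: registry-time policy check. Every agent identity must carry
# the five required attributes, and expired identities are flagged for
# de-provisioning. Attribute names are illustrative.

REQUIRED = ("owning_team", "human_sponsor", "systems", "max_scope", "expires")

def policy_violations(entry: dict, today: date):
    problems = ["missing attribute: " + k for k in REQUIRED if not entry.get(k)]
    if entry.get("expires") and entry["expires"] < today:
        problems.append("identity past expiration: de-provision")
    return problems

entry = {
    "owning_team": "payments",
    "human_sponsor": "j.doe",
    "systems": ["billing"],
    "max_scope": "billing:read",
    "expires": date(2026, 6, 30),
}
```

Running such a check on every provisioning request, and periodically against the full registry, turns the written policy into an enforced invariant rather than a document teams can quietly bypass.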

Organizations should evaluate their existing IAM infrastructure against the NIST NCCoE concept paper’s candidate standards and submit comment to the NCCoE process before the April 2, 2026 deadline [6]. Organizations that currently use service mesh infrastructure with SPIFFE/SPIRE have a ready-made foundation for extending cryptographic workload identity to AI agents. Organizations on major commercial IAM platforms should engage their vendors on the roadmap for agent identity support, using Okta’s Showcase 2026 announcement as a reference point for the capabilities that mature agent identity management should include.

Human-in-the-loop checkpoints should be implemented for high-consequence agent actions before broader agent autonomy is granted. The CSA and Strata Identity survey found that 68% of organizations require human oversight but lack the architectural mechanisms to implement it [1]. Pragmatically, this means identifying the categories of actions — financial transactions above a defined threshold, system configuration changes, access to sensitive data categories, communications sent on behalf of a human — that should pause for explicit human confirmation before the agent proceeds. This is not a permanent model; as agent identity infrastructure matures and audit capabilities improve, the scope of human-required checkpoints can narrow. But in the current environment, it is the most reliable available control against agent actions that exceed intended scope.
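
A checkpoint gate of this kind reduces to a small predicate over action categories plus a confirmation hook. The categories and the threshold below are illustrative assumptions:

```python
# Sketch: a human-in-the-loop gate. Actions in a high-consequence
# category pause for explicit confirmation before the agent proceeds.
# The categories and the 10,000 threshold are illustrative.

CHECKPOINT_CATEGORIES = {
    "system_config_change",
    "sensitive_data_access",
    "outbound_communication",
}
FINANCIAL_THRESHOLD = 10_000

def requires_human_checkpoint(action: dict) -> bool:
    if action["category"] in CHECKPOINT_CATEGORIES:
        return True
    if action["category"] == "financial_transaction":
        return action.get("amount", 0) >= FINANCIAL_THRESHOLD
    return False

def execute(action, confirm, do_action):
    """Run `do_action` only after `confirm` approves checkpointed actions."""
    if requires_human_checkpoint(action) and not confirm(action):
        return "blocked: awaiting human confirmation"
    return do_action(action)
```

In production the `confirm` hook would route to an approval queue or paging system rather than a callback, but the control point is the same: the gate sits between the agent's decision and its effect.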

Strategic Considerations

The standards convergence underway in early 2026 — IETF AIMS, NIST NCCoE, OpenID Foundation work on agentic identity, and Okta XAA — is likely to produce a more stable technical foundation for agent identity within 12 to 18 months. Organizations making major infrastructure investments in agent identity governance now should architect for interoperability with these emerging standards rather than committing exclusively to proprietary frameworks. The IETF AIMS draft’s composition of WIMSE, SPIFFE/SPIRE, and OAuth 2.0 provides the most concrete current signal of where the open standards consensus is heading [5].

Governance ownership of AI agent identity should be resolved explicitly before the agent population grows further. The current fragmentation — with the CSA and Strata survey showing security owning governance in 39% of organizations, IT in 32%, and AI functions in 13% [1] — will produce inconsistent enforcement as the number of agents scales. The appropriate model treats agent identity as a security function, because the risk exposure from misconfigured or compromised agent credentials falls squarely within the threat and access control responsibilities that security teams are chartered to manage, while acknowledging that provisioning workflows must be embedded in the development and deployment processes that the engineering and product organizations own.

Finally, organizations should monitor the NIST CAISI AI Agent Standards Initiative and the OpenID Foundation’s ongoing agentic identity specification work for signals about which standards will achieve broad vendor adoption. The commercial landscape is moving quickly — Okta, CyberArk, Microsoft, and others announced significant agent identity capabilities in Q1 2026 — and standards adoption will determine whether those capabilities interoperate or fragment into competing silos.


CSA Resource Alignment

This research note connects to several active CSA frameworks and working group efforts.

The MAESTRO framework for agentic AI threat modeling addresses the identity and authorization risk categories that this note describes. MAESTRO’s analysis of privilege escalation and authorization bypass in multi-agent systems provides the threat taxonomy against which the governance controls described here should be validated. CSA’s prior research note “Islands of Agents: IAM Failures Across Agent Boundaries” (March 15, 2026) provides a complementary architectural analysis of how delegation failures propagate across multi-agent systems; the controls in this note may apply to the island categories identified in that analysis.

The AI Controls Matrix (AI-CM) provides a structured framework for mapping agent identity controls to specific control domains. The governance gaps identified in this note — missing agent registries, undocumented lifecycle policies, over-permissioned credentials — map directly to the AI-CM’s identity and access management control categories and should be used to drive gap assessments.

The STAR for AI Level 1 self-assessment program enables organizations to assess their agent identity governance posture against a structured control set and submit results to the CSA STAR Registry. Organizations that have not yet conducted a STAR for AI assessment should treat the findings of the CSA and Strata Identity survey and the CSA and Oasis Security survey as indicative of their likely gap profile and use STAR for AI as the assessment vehicle for formalizing their current state.

The Zero Trust guidance published by CSA’s Zero Trust Working Group applies directly to the agent identity problem. The core Zero Trust principle — never trust implicitly, always verify explicitly — is operationalized for AI agents through continuous authentication (cryptographic attestation at every system boundary), least-privilege access (scoped, short-lived credentials), and comprehensive logging (full audit trail of agent actions tied to verified identities). The ZSP model described in the Recommendations section above is the agentic instantiation of Zero Trust’s “assume breach” design posture.

The AI Organizational Responsibilities working group, whose output includes the Agentic AI Red Teaming Guide, provides the adversarial context for agent identity risks. Practitioners designing agent identity controls should use the red teaming guidance to validate that their controls are effective against the attack patterns — credential theft, privilege escalation through delegation chains, silent agent spawning — that are most likely to be exploited as agent deployments scale.


References

[1] Cloud Security Alliance and Strata Identity, “Securing Autonomous AI Agents,” CSA Research Artifact, February 2026. https://cloudsecurityalliance.org/artifacts/securing-autonomous-ai-agents

[2] Okta, “Okta Showcase 2026: Securing the Agentic Enterprise,” Okta Newsroom, March 16, 2026. https://www.okta.com/newsroom/press-releases/showcase-2026/

[3] Gravitee, “State of AI Agent Security 2026,” Gravitee Research Report, February 2026. https://www.gravitee.io/blog/state-of-ai-agent-security-2026-report-when-adoption-outpaces-control

[4] Cloud Security Alliance and Oasis Security, “The State of Non-Human Identity and AI Security,” CSA Research Artifact, January 2026. https://cloudsecurityalliance.org/artifacts/state-of-nhi-and-ai-security-survey-report

[5] IETF, “AI Agent Authentication and Authorization (AIMS),” Internet-Draft draft-klrc-aiagent-auth-00, March 2, 2026. https://datatracker.ietf.org/doc/draft-klrc-aiagent-auth/

[6] NIST National Cybersecurity Center of Excellence, “Accelerating the Adoption of Software and AI Agent Identity and Authorization,” Concept Paper, February 5, 2026. https://www.nccoe.nist.gov/projects/software-and-ai-agent-identity-and-authorization

[7] Rubrik Zero Labs, “Identity Crisis: Understanding & Building Resilience Against Identity-Driven Threats,” Research Report, November 2025. https://zerolabs.rubrik.com/reports/the-identity-crisis

[8] Entro Security Labs, “2025 State of Non-Human Identities and Secrets in Cybersecurity,” Annual Report, 2025. https://entro.security/resources/2025-state-of-non-human-identities

[9] Obsidian Security, “2025 AI Agent Security Landscape: Players, Trends, and Risks,” Research Report, 2025. https://www.obsidiansecurity.com/blog/ai-agent-market-landscape

[10] ISACA, “The Looming Authorization Crisis: Why Traditional IAM Fails Agentic AI,” ISACA Industry News, December 19, 2025. https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-looming-authorization-crisis-why-traditional-iam-fails-agentic-ai

[11] OpenID Foundation, “Identity Management for Agentic AI,” OpenID Foundation Whitepaper, October 2025. https://openid.net/wp-content/uploads/2025/10/Identity-Management-for-Agentic-AI.pdf

[12] Subramanya, A., “OpenID Connect for Agents (OIDC-A) 1.0: A Proposal,” Personal Research Blog, April 28, 2025. https://subramanya.ai/2025/04/28/oidc-a-proposal/

[13] IETF WIMSE Working Group, “WIMSE Applicability for AI Agents,” Internet-Draft draft-ni-wimse-ai-agent-identity-01, October 20, 2025. https://www.ietf.org/archive/id/draft-ni-wimse-ai-agent-identity-01.html

[14] HashiCorp, “SPIFFE: Securing the Identity of Agentic AI and Non-Human Actors,” HashiCorp Blog, 2025. https://www.hashicorp.com/en/blog/spiffe-securing-the-identity-of-agentic-ai-and-non-human-actors

[15] CyberArk, “CyberArk Introduces First Identity Security Solution Purpose-Built to Protect AI Agents with Privilege Controls,” Press Release, November 4, 2025. https://www.cyberark.com/press/cyberark-introduces-first-identity-security-solution-purpose-built-to-protect-ai-agents-with-privilege-controls/

[16] Microsoft Security, “Four Priorities for AI-Powered Identity and Network Access Security in 2026,” Microsoft Security Blog, January 20, 2026. https://www.microsoft.com/en-us/security/blog/2026/01/20/four-priorities-for-ai-powered-identity-and-network-access-security-in-2026/
