The OAuth Gap: AI SaaS Supply Chain Blast Radius

Authors: Cloud Security Alliance AI Safety Initiative
Published: 2026-05-01

Categories: AI Supply Chain Security, Identity and Access Management, SaaS Security


Key Takeaways

  • The August 2025 Salesloft/Drift OAuth supply chain breach compromised over 700 organizations in ten days, with Obsidian Security research indicating the blast radius was approximately ten times greater than a direct Salesforce breach because one compromised AI integration cascaded across hundreds of downstream customer environments [1][2][12].

  • Non-human identities—OAuth tokens, service accounts, and API keys held by AI SaaS tools—significantly outnumbered human identities in the affected environment, yet the majority lacked lifecycle governance, rotation policies, or scope minimization controls [3][12].

  • Datadog Security Labs identified an attack class it termed “CoPhish,” in which adversaries build malicious Microsoft Copilot Studio agents with fake consent flows to harvest OAuth tokens at scale, exploiting the trust employees place in legitimate Microsoft-hosted domains [4][10].

  • The April 2026 Vercel/Context.ai incident demonstrated how a single enterprise employee granting broad OAuth permissions to a productivity AI tool can expose environment variables and downstream customer infrastructure without being detected by conventional security monitoring, with the initial breach signal emerging from external threat intelligence [5].

  • OAuth 2.0 was not designed for autonomous AI agents that spawn sub-agents, execute multi-step workflows without a human in the loop, or blend delegated user authority with persistent machine identity—creating authorization gaps that existing identity frameworks do not fully address [6].

  • Security teams should treat every AI SaaS OAuth grant as a potential persistent foothold requiring the same lifecycle governance as privileged service accounts: minimal scope at authorization, periodic review, and immediate revocation when the business need ends.


Background

The Authorization Problem OAuth Did Not Anticipate

OAuth 2.0 was designed to let a user delegate bounded access to a third-party application—a well-understood model for web services where humans approve individual authorization requests and remain loosely aware of what they have granted. The rise of AI SaaS products fundamentally disrupts that model. Employees connecting tools like Drift, Notion AI, Slack AI, or Claude for Work to their corporate Google Workspace or Microsoft 365 accounts typically encounter a single consent screen and a list of requested scopes, the breadth of which users rarely evaluate critically before accepting. Once authorized, those tokens persist indefinitely unless explicitly revoked. The integrating AI tool may evolve its capabilities, change ownership, or be acquired without triggering a re-consent event—and the original grantor may have left the organization entirely.
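The entire consent moment described above is a single authorization URL carrying the full list of requested scopes. A minimal sketch of what such a request looks like for a Google Workspace integration (the client ID, redirect URI, and scope selection are illustrative, not taken from any real AI tool):

```python
from urllib.parse import urlencode

# Hypothetical AI SaaS integration requesting broad Workspace scopes.
# Everything the user will ever grant is encoded in this one request; after
# consent, the resulting refresh token persists until explicitly revoked.
params = {
    "client_id": "example-ai-tool.apps.googleusercontent.com",  # illustrative
    "redirect_uri": "https://ai-tool.example/oauth/callback",   # illustrative
    "response_type": "code",
    "access_type": "offline",  # asks for a long-lived refresh token
    "scope": " ".join([
        "https://www.googleapis.com/auth/gmail.readonly",
        "https://www.googleapis.com/auth/drive",
        "https://www.googleapis.com/auth/calendar",
    ]),
}
consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(consent_url)
```

The user sees one screen summarizing those scopes; the `access_type=offline` parameter is what makes the grant outlive the session, and nothing in the protocol expires it when the grantor leaves the organization.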

The growth in AI-connected enterprise applications has made this a structural risk at scale. Enterprise security teams are now operating in environments where AI-enabled SaaS products represent a rapidly expanding class of third-party integrations, each holding delegated access tokens with varying degrees of scope, each with its own security posture, and collectively forming an authorization surface that traditional identity governance tools were not built to enumerate or control. This is the OAuth gap: not a flaw in the protocol itself, but a mismatch between the authorization model OAuth enables and the security governance discipline required to manage it safely at the scale and velocity of AI SaaS adoption.

From Productivity Tool to Supply Chain Node

AI SaaS products occupy an unusual position in the enterprise supply chain. Unlike conventional software vendors, AI SaaS tools are often granted delegated authority over core productivity platforms—email, calendars, document repositories, CRM systems, and code repositories—rather than operating as isolated applications. This means a compromise of the AI SaaS vendor’s infrastructure does not merely expose the vendor’s own data; it can immediately activate all the OAuth grants that vendor’s product holds across every customer tenant. The vendor becomes an involuntary pivot point in the attack chain, and the blast radius of any intrusion scales directly with the scope of the tokens they hold and the number of customers who have authorized them.

This dynamic was not hypothetical in 2025 and 2026. A succession of incidents spanning sales automation platforms, AI gateways, and developer tools demonstrated that adversaries have identified and are actively exploiting this attack surface. Understanding the mechanics of these incidents is essential for security teams seeking to evaluate the risk posture of their own AI SaaS estates.


Security Analysis

The Salesloft/Drift Incident: Blueprint for OAuth Supply Chain Attacks

In August 2025, threat actor UNC6395 (also tracked as GRUB1) exploited OAuth tokens held by the Drift conversational AI platform, which was at the time integrated with Salesforce CRM deployments across hundreds of enterprise customers via the Salesloft sales engagement platform. The attack ran for approximately ten days—August 8 through 18—before Salesloft and Salesforce revoked all active access and refresh tokens associated with the Drift application [2]. During that window, attackers used Drift’s delegated Salesforce access to conduct systematic data exfiltration, specifically targeting AWS access keys, passwords, and Snowflake tokens for lateral movement into downstream infrastructure. Confirmed impacted organizations included Cloudflare, PagerDuty, Palo Alto Networks, Proofpoint, and Zscaler, among others [1][2][3].

What made this incident analytically significant beyond its immediate scope was the insight it provided about blast radius multiplication. Obsidian Security noted that because a single AI SaaS integration sits between the attacker and hundreds of downstream tenants, the effective blast radius of the compromise was an order of magnitude greater than it would have been for a direct attack on Salesforce [12]. The Salesloft environment also illustrated the scope of non-human identity sprawl: AI integrations had accumulated OAuth tokens over time without lifecycle governance, creating an attack surface far larger than the organization’s headcount would suggest [3][12].

This incident should be understood not as an isolated event but as a proof-of-concept for a replicable attack pattern. The attacker did not need to compromise Salesforce directly. They needed only to compromise one AI SaaS product that held delegated access to Salesforce across many customers simultaneously—a far lower-value target with a far higher downstream yield.

CoPhish: OAuth Token Theft as an AI-Native Technique

In late 2025, Datadog Security Labs documented a new attack technique they named CoPhish, which weaponizes Microsoft Copilot Studio to construct malicious agents that present convincing fake OAuth consent flows to victims [4][10]. Because these agents are hosted on copilotstudio.microsoft.com—a legitimate Microsoft domain—they carry implicit trust signals that conventional phishing infrastructure cannot replicate. Victims who authorize these agents may grant access to mail, calendar, files, and other Microsoft 365 resources without recognizing that the authorization is being captured by an adversary-controlled application rather than a legitimate enterprise service.

CoPhish represents an important evolution in OAuth exploitation because it turns the AI platform itself into the phishing substrate. Historically, OAuth phishing required attackers to stand up convincing lookalike authorization pages. Microsoft Copilot Studio’s legitimate infrastructure eliminates that requirement, and the conversational AI wrapper lowers victim suspicion. Microsoft’s existing tenant consent policy controls offer partial mitigation when configured to require administrator approval for third-party OAuth grants—though many enterprise environments run with default settings that permit user-level authorization without IT review [14]. The technique illustrates a broader principle: as AI platforms gain the ability to construct arbitrary agents that interact with users and request OAuth permissions, the attack surface for OAuth token theft expands to include the AI platforms themselves.

The Vercel/Context.ai Incident: Shadow AI and the Enterprise OAuth Footprint

The April 2026 breach of Vercel’s development infrastructure, traced to a compromised OAuth token held by Context.ai, demonstrated how shadow AI—AI tools adopted by individual employees without security team knowledge—can create enterprise-scale exposure. A Vercel employee had authorized Context.ai with broad OAuth permissions, including access to the employee’s Google Workspace account. When Context.ai’s infrastructure was compromised, the attackers were able to use those tokens to pivot into Vercel’s internal systems and access non-sensitive environment variables for customer projects [5][7]. The breach was not discovered through conventional security monitoring; the initial signal came from external threat intelligence.

The Vercel incident is notable for several reasons. It illustrates that shadow AI risk is not theoretical—individual OAuth grants by individual employees can directly compromise customer infrastructure. It also demonstrates that the damage is not bounded by the breached employee’s own files; OAuth tokens that cascade through interconnected development environments can reach customer data without any direct attack against those customers. CSA’s January 2026 analysis had already flagged this attack class—an over-permissioned OAuth grant from a productivity AI tool cascading into a supply chain breach—as a scenario that traditional DLP and CASB solutions were not designed to detect or contain [8].

The Structural Gap: OAuth 2.0 and Agentic Authorization

Underlying each of these incidents is a common structural condition: OAuth 2.0’s authorization model assumes a bounded, human-initiated delegation relationship, and AI SaaS products routinely violate each of those assumptions. The OpenID Foundation’s October 2025 whitepaper on identity management for agentic AI explicitly flagged the gap between what OAuth 2.0 enables and what autonomous agents actually require [6]. Agents may spawn sub-agents that inherit the parent’s authority without a new consent event. They execute multi-step workflows without a human in the loop to validate each action. They blend the delegated authority of the authorizing user with their own machine identity, blurring audit trails and making attribution difficult when an action needs to be reviewed or revoked.
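The inheritance problem can be made concrete with a toy model (illustrative only, not any production agent framework): a naive sub-agent spawn copies the parent's full scope set, while an attenuating spawn intersects it with only what the sub-task needs.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    scopes: frozenset  # delegated authority this agent holds

def spawn_naive(parent: Agent, name: str) -> Agent:
    # What OAuth 2.0 has no opinion on: the child inherits everything,
    # with no new consent event and no independent audit identity.
    return Agent(name, parent.scopes)

def spawn_attenuated(parent: Agent, name: str, needed: set) -> Agent:
    # Attenuation: child authority is capped at the intersection of what
    # the parent holds and what the sub-task actually requires.
    return Agent(name, parent.scopes & frozenset(needed))

root = Agent("assistant", frozenset({"mail.read", "files.read", "crm.write"}))
summarizer = spawn_naive(root, "summarizer")  # carries crm.write it never needs
drafter = spawn_attenuated(root, "drafter", {"mail.read"})

print(summarizer.scopes)  # full parent scope set
print(drafter.scopes)     # only mail.read
```

The attenuated pattern is what the emerging agent-identity guidance points toward; the naive pattern is what most integrations do today, because nothing in the token model forces the distinction.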

The permissions AI agents accumulate also tend toward over-breadth by default. Developers building AI integrations frequently request the broadest available scopes—in part because predicting the full set of future requirements is difficult, and in part because re-requesting narrower permissions creates friction for users. Over time this creates a portfolio of standing, over-permissioned grants that persist well beyond the original business justification. NIST’s draft guidance on AI system cybersecurity controls, released for public comment in December 2025, emphasized the need for comprehensive governance over AI systems and the third-party integrations they depend on—an imperative that extends naturally to the delegated tokens those systems accumulate over time [9].


Recommendations

Immediate Actions

The most urgent priority is a complete inventory of every third-party AI SaaS application holding delegated OAuth access to enterprise productivity platforms—Google Workspace, Microsoft 365, Salesforce, Slack, and GitHub. SaaS security posture management (SSPM) tools can automate this enumeration; each platform’s native connected applications interface provides a starting point for environments without SSPM coverage. The objective is a verified list of what exists, what scopes it holds, and whether the original business justification still applies.
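With SSPM output or platform exports in hand, the inventory itself is a straightforward aggregation. A self-contained sketch over illustrative grant records (the field names are assumptions, not any vendor's actual schema):

```python
from collections import defaultdict

# Illustrative export: one record per OAuth grant, as an SSPM tool or a
# platform's connected-apps API might report it.
grants = [
    {"app": "drift", "platform": "salesforce", "user": "ava@corp.example",
     "scopes": ["api", "refresh_token"]},
    {"app": "notion-ai", "platform": "google_workspace", "user": "bo@corp.example",
     "scopes": ["drive", "gmail.readonly"]},
    {"app": "drift", "platform": "salesforce", "user": "cam@corp.example",
     "scopes": ["api", "refresh_token", "full"]},
]

def build_inventory(grants):
    """Collapse per-user grants into one row per (app, platform):
    who authorized it, and the union of scopes it holds anywhere."""
    inv = defaultdict(lambda: {"users": set(), "scopes": set()})
    for g in grants:
        row = inv[(g["app"], g["platform"])]
        row["users"].add(g["user"])
        row["scopes"].update(g["scopes"])
    return dict(inv)

inventory = build_inventory(grants)
for (app, platform), row in sorted(inventory.items()):
    print(f"{app} on {platform}: {len(row['users'])} grantor(s), "
          f"scopes={sorted(row['scopes'])}")
```

The union of scopes per application is the number that matters for blast-radius analysis: a token compromise at the vendor activates the broadest grant any employee ever approved, not the narrowest.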

Once that inventory exists, any integration that is no longer actively used should have its OAuth grants immediately revoked. Dormant grants are not harmless—they represent persistent authorized access that an adversary can leverage without any interaction with the original authorizing user. Any integration that bypassed formal procurement or security review should be treated as unvetted and revoked pending evaluation. Going forward, existing grants should be reviewed on a quarterly basis, with a defined process for verifying that the granted scopes remain proportionate to the current use case; any grant requesting administrative or write access warrants heightened scrutiny during each review cycle.
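These review rules reduce to a small triage function. A sketch over the same kind of illustrative records (the dormancy threshold and the scope-name markers are assumptions to tune per environment):

```python
from datetime import date, timedelta

HIGH_PRIVILEGE_MARKERS = ("admin", "write", "full")  # assumed naming convention
DORMANCY_THRESHOLD = timedelta(days=90)              # one review cycle unused

def triage(grant, today):
    """Return the review action for one grant record."""
    if grant["last_used"] is None or today - grant["last_used"] > DORMANCY_THRESHOLD:
        return "revoke: dormant"
    if not grant["reviewed"]:
        return "revoke pending evaluation: bypassed security review"
    if any(m in s for s in grant["scopes"] for m in HIGH_PRIVILEGE_MARKERS):
        return "heightened scrutiny: high-privilege scopes"
    return "keep: re-review next quarter"

today = date(2026, 4, 1)
grants = [
    {"app": "drift", "last_used": date(2025, 8, 18), "reviewed": True,
     "scopes": ["api", "full"]},
    {"app": "context-ai", "last_used": date(2026, 3, 30), "reviewed": False,
     "scopes": ["drive"]},
    {"app": "doc-ai", "last_used": date(2026, 3, 15), "reviewed": True,
     "scopes": ["drive.readonly"]},
]
for g in grants:
    print(g["app"], "->", triage(g, today))
```

Note the ordering: dormancy and missing review both outrank scope analysis, because a dormant or unvetted grant is a candidate for revocation regardless of how narrow its scopes are.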

Short-Term Mitigations

Organizations should define and enforce maximum permissible OAuth scopes for AI SaaS tools by application category. A sales AI tool has no legitimate need for access to the authorizing user’s code repositories; a document AI tool has no legitimate need for email send permissions. These scope ceilings should be documented in policy and enforced where the identity platform allows scope restriction. Where enforcement is not possible at the platform level, vendor contracts and acceptable use policies should explicitly prohibit over-broad authorization.
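Scope ceilings by application category amount to an allowlist check. A minimal sketch (the categories and scope names are illustrative policy choices, not a standard):

```python
# Maximum permissible scopes per AI tool category (illustrative policy).
SCOPE_CEILINGS = {
    "sales_ai": {"crm.read", "calendar.read"},
    "document_ai": {"drive.read", "drive.write"},
}

def check_grant(category: str, requested: set) -> set:
    """Return the requested scopes that exceed the category's ceiling."""
    ceiling = SCOPE_CEILINGS.get(category, set())  # unknown category: deny all
    return requested - ceiling

# A sales AI tool asking for repository access violates its ceiling.
excess = check_grant("sales_ai", {"crm.read", "repo.read"})
print(excess)  # {'repo.read'}
```

Any non-empty result blocks the authorization where the identity platform supports enforcement, or opens a policy exception ticket where it does not.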

For AI integrations that legitimately require broad access, organizations should consider using dedicated service accounts rather than individual employee accounts. This practice contains the blast radius of a token compromise to the service account’s defined permissions rather than the employee’s full identity, and it ensures that access is governed through the organization’s standard privileged access management process. The service account model also makes revocation straightforward: when an integration is decommissioned, the associated service account is disabled.

Security monitoring pipelines should be extended to cover OAuth-related events: new authorization grants, scope modifications, anomalous API call volumes on connected applications, and token revocations. Post-incident analysis of the Salesloft/Drift breach indicates that the intrusion ran for ten days in part because monitoring for anomalous OAuth API call volumes and high-volume data retrieval was not in place [1][2]. Behavioral baselines for AI SaaS integrations—expected call volumes, typical data access patterns—provide the detection signal that rule-based alerting alone cannot generate.
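A behavioral baseline for a connected application can start as a simple volume threshold. A sketch that flags days whose API call volume far exceeds the integration's historical norm (the sigma and floor multipliers are assumptions to tune against each integration's real traffic):

```python
from statistics import mean, stdev

def anomalous(history, today_calls, sigma=3.0, floor=2.0):
    """Flag today's API call volume if it exceeds the baseline by more than
    `sigma` standard deviations AND by more than `floor` times the mean, so
    a flat history with tiny variance does not alert on routine noise."""
    mu, sd = mean(history), stdev(history)
    return today_calls > mu + sigma * sd and today_calls > floor * mu

# ~Four weeks of normal CRM API volume for one integration, then two checks.
baseline = [1200, 1150, 1300, 1250, 1180, 1220, 1275] * 4
print(anomalous(baseline, 1400))   # normal daily variation
print(anomalous(baseline, 48000))  # bulk, exfiltration-scale retrieval
```

A detector this crude would likely have surfaced the high-volume retrieval pattern seen in the Salesloft/Drift window within a day; the point is not the statistics but that a per-integration baseline exists at all.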

Strategic Considerations

The structural tension between OAuth 2.0’s design assumptions and the requirements of agentic AI will not be resolved through policy controls alone. Organizations should actively track developments from the OpenID Foundation’s AI Identity Management Community Group and NIST’s AI Cybersecurity Framework Profile, both of which are developing guidance on agent-native authorization models [6][9]. The emerging standards work around OAuth 2.1, token exchange mechanisms, and delegated authorization chains for multi-agent workflows will form the basis of the next generation of identity governance tooling for AI systems [6][14].
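The token exchange mechanism referenced above is standardized in RFC 8693, which lets an agent trade a broad token for a narrower, audience-restricted one. A sketch of the request body an agent-native authorization layer might send (the endpoint and token values are placeholders):

```python
from urllib.parse import urlencode

# RFC 8693 token exchange: trade the parent agent's broad access token for
# a short-lived token scoped down to one sub-task. Token values and the
# audience URL are placeholders.
request_body = urlencode({
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<parent-agent-access-token>",
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://crm.example/api",  # restrict where the new token works
    "scope": "crm.read",                    # restrict what it can do there
})
# This body is POSTed to the authorization server's token endpoint, e.g.:
# requests.post("https://auth.example/token", data=request_body, ...)
print(request_body)
```

The `audience` and `scope` parameters are what make exchanged tokens a building block for the delegated authorization chains the standards work is converging on: each hop in a multi-agent workflow can hold a token no broader than its step requires.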

Procurement processes should be updated to treat AI SaaS vendors as privileged supply chain partners. This means requiring vendors to complete CSA STAR assessments and to attest to their OAuth token handling practices, their incident response procedures for token compromise, and their minimum-scope authorization architectures. The same due diligence applied to a vendor handling sensitive data at rest should be applied to any vendor holding delegated OAuth tokens with broad enterprise access—because in practice, the risk profile is equivalent.


CSA Resource Alignment

This research note connects directly to several active Cloud Security Alliance frameworks and publications. The AI Controls Matrix (AICM) v1.0, which extends and builds upon the Cloud Controls Matrix for AI-specific deployments, includes controls addressing third-party AI integration governance and supply chain security under its AI Supply Chain Security domain; organizations evaluating AI SaaS vendors should apply AICM controls alongside STAR attestation requirements. The MAESTRO threat modeling framework for agentic AI identifies non-human identity and delegated authorization chains as a primary threat layer in agentic architectures—the OAuth sprawl described in this note is a direct instance of the MAESTRO identity layer risks.

CSA’s Zero Trust guidance is applicable to AI SaaS integrations through the principle that delegated OAuth tokens should never be treated as implicitly trusted simply because they were originally authorized by a legitimate user. Zero Trust access controls should apply to every API call made by an AI SaaS integration, with continuous verification rather than standing authorization. CSA’s analysis of SaaS security trends has consistently identified over-permissioned third-party integrations as a leading cause of SaaS security incidents, a pattern the incidents described in this note confirm [8].


References

[1] Google Cloud Threat Intelligence. “Widespread Data Theft Targets Salesforce Instances via Salesloft Drift.” Google Cloud Blog, September 2025.

[2] Anomali. “Reviewing the Salesforce–Salesloft Drift OAuth Supply Chain Breach.” Anomali Blog, September 2025.

[3] Cloudflare. “The impact of the Salesloft Drift breach on Cloudflare and our customers.” Cloudflare Blog, August 2025.

[4] Datadog Security Labs. “CoPhish: Using Microsoft Copilot Studio as a Wrapper for OAuth Phishing.” Datadog Security Labs, October 2025.

[5] Reco AI. “The Vercel and Context AI breach: an AI supply chain attack, step by step.” Reco AI Blog, April 2026.

[6] OpenID Foundation. “Identity Management for Agentic AI.” OpenID Foundation Whitepaper, October 2025.

[7] Trend Micro. “The Vercel Breach: OAuth Supply Chain Attack Exposes the Hidden Risk in Platform Environment Variables.” Trend Micro Research, April 2026.

[8] Cloud Security Alliance. “Why SaaS and AI Security Will Look Very Different in 2026.” CSA Blog, January 2026.

[9] NIST. “Draft NIST Guidelines Rethink Cybersecurity for the AI Era.” NIST, December 2025.

[10] BleepingComputer. “New CoPhish Attack Steals OAuth Tokens via Copilot Studio Agents.” BleepingComputer, 2026.

[11] The Hacker News. “Salesloft Takes Drift Offline After OAuth Token Theft Hits Hundreds of Organizations.” The Hacker News, September 2025.

[12] Obsidian Security. “OAuth Vulnerabilities Every Security Team Should Know.” Obsidian Security Blog, 2025.

[13] VentureBeat. “Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain.” VentureBeat, April 2026.

[14] Microsoft Tech Community. “The future of AI agents — and why OAuth must evolve.” Microsoft Entra Blog, May 2025.
