CISO Daily Briefing — May 5, 2026

Cloud Security Alliance — AI Safety Initiative Intelligence Report

Report Date
May 5, 2026
Intelligence Window
48 Hours (May 4–5)
Topics Identified
5 Priority Items
Research Notes Queued
5 Overnight

Executive Summary

The May 4–5 cycle is defined by two converging crises in AI infrastructure security. A scan of over one million exposed AI services found the sector more misconfigured than any other software class studied, while a simultaneous supply chain attack on PyTorch Lightning — 11 million monthly downloads — confirmed that ML development pipelines are now active attack targets, not a future risk. At the application layer, CVE-2026-24299 in Microsoft 365 Copilot (“Copirate 365”) enables command injection across enterprise networks at scale, exposing email, Teams, and SharePoint data for tens of millions of users. On the governance front, NIST CAISI formalized pre-deployment testing agreements with Google DeepMind, Microsoft, and xAI — signaling that voluntary AI safety evaluation is becoming a procurement-level expectation. Binding all five topics: non-expiring OAuth ghost tokens from employee AI tool integrations are invisible to perimeter controls and persist as silent lateral-movement pathways long after staff departures.

Overnight Research Output

1

The Self-Hosted AI Security Crisis — 1 Million Exposed Services, No Authentication by Default

CRITICAL

Summary: A scan of over one million AI services sourced from certificate transparency logs found that AI infrastructure is more exposed and misconfigured than any other software class studied. Chatbots are publicly exposing full conversation histories; agent management platforms (n8n, Flowise) are reachable from the internet without authentication; and AI tool integrations grant implicit access to every connected system. Intruder’s scan identified hundreds of OpenClaw management interfaces exposed publicly, with the ClawdBot/OpenClaw ecosystem alone averaging 2.6 CVEs per day. This is not a theoretical risk: each exposed AI service management dashboard represents a direct path to credential theft, data exfiltration, and supply-chain leverage across every enterprise system the AI tool touches.

Key Actions: Audit all self-hosted AI infrastructure for internet-accessible management interfaces immediately. Require authentication on all Flowise, n8n, LiteLLM, and OpenClaw management endpoints. Treat exposed AI service dashboards with the same incident urgency as exposed RDP or VNC instances. Apply network segmentation to isolate AI infrastructure management planes from internet-routable addresses.
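The audit step above can be sketched as a small triage helper that classifies the result of an unauthenticated probe against a management interface. The hostnames, paths, and status-code heuristics here are illustrative assumptions for a minimal sketch, not vendor-documented behavior — adapt them to the actual products in your estate.

```python
# Hypothetical triage helper: classify an unauthenticated probe of an AI
# management interface by its HTTP status code. Paths and status-code
# heuristics are illustrative assumptions, not vendor-documented behavior.

# Common management-plane paths for self-hosted AI tooling (assumed examples).
MGMT_PATHS = ["/", "/api/v1/flows", "/rest/workflows", "/ui"]

def classify_probe(status_code: int) -> str:
    """Map the status of an unauthenticated GET to an exposure verdict."""
    if status_code in (401, 403):
        return "protected"        # auth or access control is enforced
    if 200 <= status_code < 300:
        return "exposed"          # management UI/API served without auth
    if status_code in (301, 302, 307, 308):
        return "review"           # may redirect to SSO -- verify manually
    return "unreachable"          # timeouts, 5xx, and everything else

def triage(results: dict[str, int]) -> dict[str, str]:
    """results maps 'host:path' -> status code from an unauthenticated GET."""
    return {target: classify_probe(code) for target, code in results.items()}

# Example run against fabricated scan output (hostnames are placeholders):
sample = {
    "flowise.internal:/api/v1/flows": 200,   # served without auth -> exposed
    "n8n.internal:/rest/workflows": 401,     # auth enforced -> protected
    "litellm.internal:/ui": 302,             # redirects -> manual review
}
verdicts = triage(sample)
```

Anything classified as "exposed" warrants the same incident handling the briefing recommends for exposed RDP or VNC: immediate network isolation, then authentication and segmentation.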

Coverage Gap: No existing CSA guidance addresses securing self-hosted LLM infrastructure: authentication defaults, network exposure hygiene, and the specific attack vectors found in Flowise, n8n, LiteLLM, and OpenClaw deployments.

View Full Research Note (link pending)

2

Backdoored PyTorch Lightning — Credential Stealer in the ML Framework Supply Chain

HIGH URGENCY

Summary: PyTorch Lightning version 2.6.3 — a deep learning framework with 11 million monthly downloads — was backdoored with a credential-stealing payload via a build pipeline compromise rather than a malicious package upload. Unlike the ClawHub and HuggingFace model-hub attacks previously covered by CSA, this attack targets the Python ML framework supply chain at the source. The malicious payload executes silently on import, downloads a JavaScript runtime, and runs an 11.4 MB obfuscated credential harvester targeting browsers, .env files, and cloud service credentials. The maintainer has reverted to version 2.6.1; forensic investigation of how the build pipeline was breached is ongoing. The downstream blast radius — ML engineers with cloud credentials — represents a high-value target class for attackers seeking cloud infrastructure access.

Key Actions: Pin PyTorch Lightning to v2.6.1 across all ML environments immediately. Audit and rotate cloud credentials for any environment that imported v2.6.3. Review build pipeline access controls for ML toolchains. Implement SBOM validation and dependency pinning as standard practice for ML framework dependencies.
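The pinning and audit steps above can be sketched as a quick dependency check. The package name and versions follow the briefing; the requirements-file parsing is a deliberately simplified assumption (exact `==` pins only) — a production check should use a proper SBOM or lockfile parser.

```python
# Minimal sketch: flag the compromised PyTorch Lightning release in a
# requirements-style dependency list. Versions follow the briefing; the
# parsing (exact '==' pins only) is a simplifying assumption.

COMPROMISED = {("pytorch-lightning", "2.6.3")}
SAFE_PIN = "2.6.1"

def parse_requirement(line: str):
    """Parse 'name==version' pins; return None for anything else."""
    line = line.split("#", 1)[0].strip()   # drop trailing comments
    if "==" not in line:
        return None
    name, version = line.split("==", 1)
    return name.strip().lower(), version.strip()

def audit(requirements: list[str]) -> list[str]:
    """Return remediation advice for any compromised pins found."""
    findings = []
    for line in requirements:
        req = parse_requirement(line)
        if req and req in COMPROMISED:
            findings.append(f"{req[0]}=={req[1]} is compromised; "
                            f"pin to {SAFE_PIN} and rotate cloud credentials")
    return findings

reqs = ["torch==2.4.0", "pytorch-lightning==2.6.3  # training stack"]
issues = audit(reqs)
```

A hit here should trigger the credential rotation the briefing calls for, since the payload runs on import — simply repinning does not undo any harvesting that already occurred.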

Coverage Gap: CSA’s HuggingFace/ClawHub note addresses model and skills repositories. This attack targets the Python ML framework supply chain via PyPI and build pipeline compromise — a distinct vector with different mitigation requirements: dependency pinning, build pipeline access controls, and SBOM validation.

View Full Research Note

3

Microsoft 365 Copilot CVE-2026-24299 — “Copirate 365” Enterprise Command Injection

HIGH URGENCY

Summary: Researcher wunderwuzzi (EmbraceTheRed) presented “Copirate 365” at DEF CON, disclosing CVE-2026-24299 — a command injection vulnerability in Microsoft 365 Copilot’s service backend that enables information disclosure across any attacker-reachable network. Unlike earlier M365 Copilot prompt injection research that targeted individual user sessions, this technique operates through the backend command execution path and can affect any organization that has deployed M365 Copilot with default configurations. Microsoft 365 Copilot is now embedded in email, Teams, SharePoint, and calendar workflows for tens of millions of enterprise users, making the blast radius of a systematic exploitation campaign potentially catastrophic. Patch availability from Microsoft is not yet confirmed.

Key Actions: Contact Microsoft for CVE-2026-24299 remediation status and patch timeline immediately. Review Copilot deployment configurations against Microsoft’s hardening guidance. Assess whether Copilot should be scoped to lower-sensitivity environments pending a patch. Ensure Copilot access policies exclude sensitive data repositories until the vulnerability is resolved.

Coverage Gap: CSA has no published guidance on enterprise Copilot security hardening, or on how the class of enterprise AI assistant platforms (M365 Copilot, Google Gemini for Workspace, Salesforce Einstein) creates new information disclosure attack surfaces beyond traditional email and document security controls.

View Full Research Note

4

CAISI Signs Pre-Deployment Testing Agreements with Google DeepMind, Microsoft, and xAI

HIGH · GOVERNANCE

Summary: NIST’s Center for AI Standards and Innovation (CAISI) today announced voluntary pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI — the first multi-lab, pre-release AI security testing regime of this scale. The agreements provide for unclassified government evaluations of frontier model capabilities with national security implications, focusing on cybersecurity, biosecurity, and chemical weapons risks before public deployment. This is a direct response to the March 2026 White House National AI Policy Framework, which urged joint AI security-testing environments between agencies and frontier labs. Building on CAISI’s earlier work establishing the AI Agent Standards Initiative and the RFI on securing AI agent systems, today’s announcement marks a meaningful escalation: voluntary pre-deployment security review is becoming a regulatory expectation, not merely a vendor safety commitment. For CISOs and AI governance leads, vendor participation (or non-participation) in CAISI testing will increasingly function as a due diligence criterion in AI procurement.

Key Actions: Update vendor risk questionnaires to include CAISI participation status when evaluating frontier AI vendors. Brief board-level governance committees on the emerging voluntary pre-deployment testing regime and its compliance trajectory. Review AI procurement policies to anticipate CAISI-equivalent testing becoming a mandatory requirement within the next 12–18 months.

Coverage Gap: CSA has analyzed Five Eyes agentic AI guidance but has no research note addressing the U.S. government voluntary pre-deployment testing regime — how it works, what the security evaluation scope covers, and what enterprises should expect from AI vendors who participate in (or decline) CAISI testing.

View Full Research Note

5

OAuth Ghost Tokens and the Enterprise AI Integration Risk

HIGH URGENCY

Summary: The April 2026 Context.ai breach — which cascaded into a supply chain attack on Vercel and its downstream customers — illustrates a structural identity risk that is being amplified by AI tool adoption: OAuth tokens created when employees connect AI tools to enterprise SaaS environments are persistent, do not expire when employees leave, do not reset when passwords change, and are invisible to most perimeter security controls. Wiz’s technical analysis and THN’s reporting on the cascade estimate that enterprises now average 40–50 automated credentials per employee, with AI agent connections and OAuth grants accounting for a growing share. The attack pattern — compromise one SaaS AI tool, pivot via trusted OAuth tokens to downstream enterprise customers — is a systemic risk distinct from any single CVE. The Context.ai/Vercel incident is the clearest current-cycle illustration, but the structural issue is broader than any single breach.

Key Actions: Conduct a full OAuth grant audit across enterprise identity providers, specifically targeting AI tool integrations created by individual employees. Implement token expiry enforcement and automated revocation upon employee departure for all AI-tool OAuth grants. Establish detection rules for anomalous OAuth-based access patterns from AI integration accounts. Inventory all “shadow AI” tool connections to enterprise SaaS that were created without IT authorization.
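The grant audit above can be sketched as a simple classifier over exported grant records. The record fields and revocation reasons here are assumptions for illustration — the real export and revocation mechanics depend on your identity provider.

```python
# Illustrative sketch of the ghost-token audit: flag OAuth grants whose
# owner has departed, that never expire, or that were created without IT
# authorization. Field names and reasons are assumptions -- adapt to the
# grant export format of your identity provider.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OAuthGrant:
    app_name: str
    owner: str
    owner_active: bool        # is the granting employee still employed?
    expires: Optional[date]   # None = non-expiring ("ghost") token
    it_approved: bool         # was the integration sanctioned by IT?

def find_ghost_tokens(grants):
    """Return (grant, reason) pairs that should be revoked or reviewed."""
    findings = []
    for g in grants:
        if not g.owner_active:
            findings.append((g, "owner departed -- revoke immediately"))
        elif g.expires is None:
            findings.append((g, "non-expiring token -- enforce expiry"))
        elif not g.it_approved:
            findings.append((g, "shadow AI integration -- inventory and review"))
    return findings

# Fabricated sample export:
grants = [
    OAuthGrant("ai-notetaker", "alice", owner_active=False, expires=None, it_approved=False),
    OAuthGrant("code-assistant", "bob", owner_active=True, expires=None, it_approved=True),
    OAuthGrant("crm-sync", "carol", owner_active=True, expires=date(2026, 9, 1), it_approved=True),
]
flagged = find_ghost_tokens(grants)
```

The ordering of the checks encodes the briefing's priorities: a departed owner trumps everything else, since that grant is exactly the silent lateral-movement pathway the executive summary describes.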

Coverage Gap: CSA IAM research addresses OAuth in traditional SaaS contexts. No published guidance addresses the specific risk introduced when employees independently connect AI tools to enterprise identity providers — the lifecycle of those OAuth grants, detection approaches, and revocation procedures in agentic AI integration contexts.

View Full Research Note

Notable News & Signals

cPanel CVE-2026-41940 — “Sorry” Ransomware Hits 44,000 Servers

A critical authentication bypass (CVSS 9.8) in cPanel and WHM is being mass-exploited to deploy “Sorry” ransomware. At least 44,000 IP addresses have been compromised; 7,135 confirmed hosts show files encrypted with a “.sorry” extension. The vulnerability was silently exploited for two months before a patch was made available. Outside the AI Safety Initiative scope but operationally urgent for infrastructure and platform engineering teams managing shared hosting or managed services.

ScarCruft/APT37 — BirdCall Android Backdoor via Gaming Platform Supply Chain

North Korea-aligned ScarCruft compromised sqgame.net, a gaming platform used by ethnic Koreans in China, to deploy BirdCall — a multi-platform backdoor with screenshot capture, keystroke logging, and cloud C2 via Dropbox and pCloud. The supply chain approach extends BirdCall’s reach to Android devices for the first time. Nation-state espionage story; outside AI scope but notable for mobile threat intelligence and supply chain monitoring.

Topics Already Covered — No New Action Required

← Back to Research Index