CISO Daily Briefing
Cloud Security Alliance Intelligence Report
Executive Summary
Today’s cycle is dominated by a new attack surface: adversaries targeting the architecture of AI agent systems themselves. North Korean APT UNC1069 compromised the Axios npm package (45M weekly downloads) through social engineering of its maintainer, reaching OpenAI’s macOS code-signing infrastructure and forcing emergency certificate rotation affecting ChatGPT Desktop, Codex, and Atlas. Two new arXiv papers formalize novel attack classes — computer-use agent blind spots that achieve 73–90%+ success rates, and salami-slicing cumulative trust exploitation that evades per-request safety filters entirely. A high-urgency governance deadline requires attention: ENISA’s EU Digital Wallet certification consultation closes April 30, and no CSA publication maps AICM identity controls to this mandatory framework. Systemic risk from the agentic SOC behavioral baseline gap — flagged by all three major RSAC 2026 vendors — rounds out today’s priorities.
Overnight Research Output
UNC1069: DPRK Social Engineering of AI Vendor Infrastructure via Axios npm Supply Chain
CRITICAL
Summary: North Korean APT group UNC1069 social-engineered the maintainer of the Axios npm package — 45 million weekly downloads — using a fake web conference call, resulting in the publication of backdoored versions 1.14.1 and 0.30.4. The malicious dependency was executed by OpenAI’s GitHub Actions code-signing workflow, exposing the certificates used to sign ChatGPT Desktop, Codex, Codex CLI, and Atlas. OpenAI revoked and rotated its macOS app certificates, issuing a May 8, 2026 deadline for users to transition. This attack is distinct from the TeamPCP/prt-scan campaign covered April 14: UNC1069 operates at the human maintainer layer via social engineering, specifically targeting tier-1 AI vendor code-signing infrastructure — a new operational pattern for DPRK cyber operations.
Why This Matters for Your Organization: Enterprises running ChatGPT Desktop, Codex, or Atlas on macOS must complete certificate transition by May 8. More broadly, this attack demonstrates that AI vendor software supply chains are a DPRK targeting priority, and that open-source maintainer social engineering is the entry vector — a gap not addressed by automated CI/CD security controls alone.
The Hacker News — UNC1069 Social Engineering of Axios Maintainer Led to npm Supply Chain Attack
The Hacker News — OpenAI Revokes macOS App Certificate After Malicious Axios Supply Chain Incident (April 13, 2026)
BleepingComputer — OpenAI rotates macOS certs after Axios attack hit code-signing workflow (April 13, 2026)
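Beyond waiting for vendor advisories, teams can sweep their own repositories for the backdoored releases. The sketch below, a minimal illustration rather than an official IoC feed, scans an npm v2/v3 `package-lock.json` for the two axios versions named in the UNC1069 reporting; the `COMPROMISED` map and the lockfile handling are assumptions based on the standard lockfile layout.

```python
import json

# Backdoored axios releases named in the UNC1069 reporting.
# Illustrative only -- maintain this list from an authoritative advisory feed.
COMPROMISED = {"axios": {"1.14.1", "0.30.4"}}

def find_compromised(lock_text: str) -> list[tuple[str, str]]:
    """Return (package, version) pairs in an npm v2/v3 lockfile that
    match the known-bad version list."""
    data = json.loads(lock_text)
    hits = []
    # npm v2/v3 lockfiles keep a flat "packages" map keyed by install
    # path, e.g. "node_modules/axios" (the root package is keyed by "").
    for path, meta in data.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        version = meta.get("version")
        if version and version in COMPROMISED.get(name, set()):
            hits.append((name, version))
    return hits
```

Run against every lockfile in CI as a stopgap; it catches pinned bad versions but not the maintainer-layer social engineering itself, which is the gap the summary above highlights.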
Computer-Use Agent Safety Blind Spots: When Benign Instructions Become Attack Vectors
HIGH URGENCY
Summary: New arXiv research (April 12, 2026) introduces the OS-BLIND benchmark, demonstrating that computer-use agents are systematically vulnerable to a scenario where unmodified, benign user instructions trigger harmful outcomes through adversarial environmental context — malicious browser content, document payloads, or email bodies that redirect agent actions after subtask decomposition. Most tested agents exceed 90% attack success rates; even Claude 4.5 Sonnet — a safety-aligned model — reaches 73%. A companion paper, ClawGuard (arXiv:2604.11790), proposes a runtime security framework — further evidence that the research community treats indirect prompt injection against deployed computer-use agents as an urgent, unsolved problem. Standard enterprise safety testing evaluates agents in clean environments and does not account for adversarial environmental manipulation.
Why This Matters for Your Organization: Any enterprise deployment of Claude Computer Use, OpenAI computer-use capabilities, or similar tools in agentic workflows is exposed to this attack class today. The attack does not require the adversary to find a safety failure in a single interaction — it exploits the gap between clean-environment testing and real-world adversarial content. No AICM control currently addresses computer-use agent sandboxing, environmental isolation, or subtask intent verification.
arXiv:2604.11790 — ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection (April 14, 2026)
arXiv:2604.11681 — PlanGuard: Defending Agents against Indirect Prompt Injection via Planning-based Consistency Verification (April 14, 2026)
arXiv — “The Blind Spot of Agent Safety: How Benign User Instructions Expose Critical Vulnerabilities in Computer-Use Agents” (April 12, 2026; search arxiv.org for title)
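Until sandboxing and intent-verification controls mature, one weak but deployable first layer is screening environmental content before it ever reaches the agent. The sketch below is a hypothetical heuristic gate: the regex patterns and function names are illustrative assumptions, and pattern matching alone will not stop a determined injection — it only catches the crudest payloads and routes them to human review.

```python
import re

# Illustrative injection markers only; real detection needs far more
# than regexes (the OS-BLIND results suggest heuristics alone fail).
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all |your )?(previous|prior) instructions", re.I),
    re.compile(r"(?:^|\s)system prompt", re.I),
    re.compile(r"you must (now )?(run|execute|download)", re.I),
]

def screen_environment(content: str) -> list[str]:
    """Return suspect phrases found in environmental content (page text,
    email body, document) before it is handed to the agent."""
    return [m.group(0) for p in SUSPECT_PATTERNS for m in p.finditer(content)]

def gate(content: str, act) -> bool:
    """Quarantine flagged content instead of letting the agent act on it."""
    if screen_environment(content):
        return False          # route to human review; do not act
    act(content)
    return True
```

The design point is architectural, not the pattern list: agent inputs from the environment should pass through a trust boundary that can say no, separate from the agent's own safety alignment.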
Salami Slicing: Cumulative Trust Exploitation as an Emerging LLM Attack Class
HIGH URGENCY
Summary: An April 14 arXiv paper formalizes “salami slicing” as a distinct attack class against LLM systems: a sequence of individually innocuous requests that each pass safety filters but whose cumulative effect constitutes a policy violation, data exfiltration, or system manipulation. The attack exploits the stateless safety evaluation architecture of most deployed LLMs — each request is evaluated in isolation, with no policy ledger maintained across the session. Unlike jailbreaks, which require the attacker to cause a safety failure in a single interaction, salami slicing works by staying entirely within the safety envelope of each individual exchange. Embrace The Red’s recent demonstrations of multi-step agent exploitation confirm this is an active research-to-exploitation pipeline, not a theoretical concern. The attack is particularly dangerous in multi-step agentic workflows handling sensitive operations across long sessions.
Why This Matters for Your Organization: Per-request anomaly detection and single-interaction safety filters — the primary defenses enterprises rely on today — are architecturally blind to this attack. Mitigation requires session-level policy tracking, cumulative action auditing, and cross-request anomaly scoring. None of these controls exist in the current AICM framework, and no vendor product implements them as a standard feature.
arXiv — “The Salami Slicing Threat: Exploiting Cumulative Risks in LLM Systems” (April 14, 2026; Yihao Zhang, Kai Wang et al.; search arxiv.org/find/cs.CR for title)
Embrace The Red — “Agent Commander: Promptware-Powered Command and Control” (March 16, 2026)
Embrace The Red — “Given Enough Agents, All Bugs Become Shallow” (April 7, 2026)
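The session-level policy tracking the paper calls for can be sketched in a few lines. This is a minimal illustration, not a vendor feature: the risk scores, thresholds, and class names are assumptions, and the point is only the architecture — a per-request check plus a cumulative budget that a stateless filter lacks.

```python
from dataclasses import dataclass, field

# Illustrative thresholds; tuning is deployment-specific.
PER_REQUEST_LIMIT = 0.5   # what a stateless per-request filter enforces
SESSION_LIMIT = 1.5       # cumulative risk budget across the session

@dataclass
class SessionLedger:
    """Track cumulative risk so that many individually benign requests
    can still trip a session-level alarm (the salami-slicing case)."""
    scores: list = field(default_factory=list)

    def admit(self, risk: float) -> bool:
        if risk > PER_REQUEST_LIMIT:              # stateless check still applies
            return False
        if sum(self.scores) + risk > SESSION_LIMIT:
            return False                          # cumulative check: the new part
        self.scores.append(risk)
        return True
```

A sequence of 0.4-risk requests each passes the per-request check, but the ledger rejects the fourth once the session budget is exhausted — exactly the trajectory that per-request filters are architecturally blind to.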
ENISA EU Digital Wallet Certification: AI Agent Identity Requirements for European Markets
GOVERNANCE — DEADLINE APR 30
Summary: On April 3, 2026, ENISA launched a public consultation on the draft EU Digital Wallet (EUDI Wallet) certification scheme under eIDAS 2.0, with a response deadline of April 30 — two weeks away. Member states have signed a €1.6M contribution agreement with the European Commission and are required to provide at least one certified EUDI Wallet by end of 2026. The certification scheme establishes the mandatory security requirements — assurance levels, binding requirements, audit trail obligations — for the identity infrastructure that AI agents will rely on for authorization when operating in EU digital services markets. Any enterprise deploying AI agents interacting with European government services, financial platforms, or regulated business workflows must integrate with certified wallet infrastructure. CSA is positioned to submit a comment on the consultation, and AICM AI-IAM domain controls map directly to the certification requirements.
Why This Matters for Your Organization: If your organization operates or plans to deploy AI agents in EU markets by end-2026, EUDI Wallet compliance is a mandatory prerequisite, not an option. The April 30 consultation deadline is the last structured opportunity to influence the certification requirements before they are finalized. The compliance gap is real: no current CSA publication maps AICM identity controls to EUDI Wallet certification requirements.
ENISA — ENISA advances the certification of EU Digital Wallets (April 3, 2026)
The Agentic SOC Behavioral Baseline Gap: Systemic Blind Spot in AI-Driven Security Operations
STRATEGIC RISK
Summary: At RSAC 2026, CrowdStrike (Falcon AIDR), Cisco, and Palo Alto Networks (Prisma AIRS 3.0) all shipped agentic SOC capabilities positioning AI agents as front-line defenders. A VentureBeat analysis found that none of these platforms ships an agent behavioral baseline — the foundational capability security teams need to distinguish normal AI agent behavior from compromised AI agent behavior. CrowdStrike sensors detect more than 1,800 distinct AI applications running on enterprise endpoints, representing approximately 160 million unique instances. Cisco survey data shows 85% of enterprise customers have AI agent pilots underway with only 5% in production. That 80-point gap between pilot and production is disproportionately attributable to a single unanswered question: how do you know when your AI security agent has been manipulated? With the average eCrime breakout time at 29 minutes per CrowdStrike’s 2026 Global Threat Report, organizations are deploying AI agents to defend against AI-accelerated attacks without the detection infrastructure to monitor those defenders.
Why This Matters for Your Organization: You are likely deploying or evaluating agentic SOC tools. None of the major vendors has solved the behavioral baseline problem, meaning your AI defenders cannot be monitored for compromise with today’s tooling. This creates a second-order vulnerability: if an adversary can manipulate your AI security agent — via prompt injection, model manipulation, or environmental attacks — you will not detect it. CSA’s MAESTRO framework and AICM address threats to AI pipelines, but neither provides controls for AI agents that are themselves performing security functions.
VentureBeat — RSAC 2026 shipped five agent identity frameworks and left three critical gaps open
CrowdStrike — 2026 Global Threat Report: Evasive Adversary Wields AI
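No vendor ships a behavioral baseline today, but the concept is buildable in-house. The sketch below is a deliberately minimal illustration, assuming a stream of discrete agent action labels (the class, parameter, and threshold are all hypothetical): learn the action mix an agent normally emits, then flag never-before-seen or rare actions as candidates for the "has my defender been manipulated?" question.

```python
from collections import Counter

class AgentBaseline:
    """Minimal behavioral baseline for an AI agent: learn the observed
    action distribution, then flag unseen or rare actions for review."""

    def __init__(self, min_share: float = 0.01):
        self.counts: Counter[str] = Counter()
        self.total = 0
        self.min_share = min_share   # actions rarer than this are anomalous

    def observe(self, action: str) -> None:
        """Record one action emitted by the agent during normal operation."""
        self.counts[action] += 1
        self.total += 1

    def is_anomalous(self, action: str) -> bool:
        """True if the action falls below the learned frequency floor."""
        if self.total == 0:
            return False             # no baseline learned yet
        return self.counts[action] / self.total < self.min_share
```

A production version would need per-agent baselines, sequence context, and drift handling; the point is that the gap the vendors left open is a monitoring problem with known shape, not an unsolvable one.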
Notable News & Signals
RSAC 2026: Five Agent Identity Frameworks, Three Unresolved Gaps
VentureBeat’s post-RSAC analysis found that despite five competing agent identity framework proposals from major vendors, three critical gaps remain unaddressed: agent behavioral baselines, cross-agent trust federation, and agent privilege scope enforcement. The convergence of frameworks without solving these gaps may accelerate enterprise adoption prematurely.
CrowdStrike 2026 Global Threat Report: eCrime Breakout Now 29 Minutes
The 2026 Global Threat Report confirms average eCrime breakout time has compressed to 29 minutes — down from 62 minutes in 2024. AI-accelerated attack pacing is driving this compression, creating a window that human-only SOC teams cannot reliably close without agentic tooling, which in turn exposes the behavioral baseline gap described in the agentic SOC topic above.
arXiv: PlanGuard Adds Planning-Layer Defense Against Indirect Prompt Injection
PlanGuard (arXiv April 14) proposes a complementary approach to ClawGuard: rather than runtime monitoring, it uses planning-based consistency verification to detect when an agent’s execution plan has deviated from its original instruction intent — catching indirect prompt injection before harmful actions are taken.
ENISA Certification Portal: EUDI Wallet Scheme Documentation Now Available
The full draft certification scheme documentation for the EU Digital Identity Wallet is now available on ENISA’s certification portal. Organizations evaluating EU market AI agent deployments should review assurance level requirements (LoA Substantial and High) and binding credential requirements before the April 30 consultation deadline.
Topics Already Covered (No New Action Required)
- Claude Mythos Autonomous Offensive Threshold: Covered by April 14 research note and April 13 archive whitepaper on AI vulnerability discovery and containment.
- Project Glasswing Industry Coalition: Governance implications addressed within the Mythos research note.
- GitHub Actions prt-scan / TeamPCP Supply Chain Campaign: Covered by April 14 research note on the prt-scan automated CI workflow exploitation pattern.
- NIST CAISI AI Agent Standards Initiative: Covered by April 14 research note on NIST’s AI agent security agenda.
- AI Velocity Gap / OX Security 216M Findings: Covered by April 14 whitepaper on the AI velocity gap in development security capacity.
- CISA KEV Batch (Fortinet, Adobe, Microsoft Exchange): Covered by April 14 research note on CISA KEV wave April 2026.
- Sovereign AI Vendor Dependency (DoD/Anthropic): Covered by April 13 archive research note.
- Adobe Acrobat Reader Zero-Day (CVE-2026-34621): Covered within the CISA KEV research note above.
- OpenClaw Real-World Safety Analysis: Covered by enterprise OpenClaw zero-trust hardening guide.
- MCP Protocol Security: Previously covered research note in the CSA corpus.