AI Developer Tool Impersonation: Typosquatting, Fake Install Guides, and InfoStealer Delivery

Authors: Cloud Security Alliance AI Safety Initiative
Published: 2026-03-10

Categories: AI Security, Threat Intelligence, Developer Security, Supply Chain Security



Key Takeaways

A documented wave of threat actor activity throughout 2024 and into early 2026 demonstrates that AI developer tools have become a preferred lure for infostealer delivery. Attackers exploit the trust that software engineers place in AI-branded packages, installation pages, and tooling ecosystems to intercept credentials, cloud API keys, and source code at the moment of tool adoption—a phase when, for many developers, security scrutiny tends to be lower and installation permissions are typically elevated.

The threat landscape spans three converging attack patterns. The first involves typosquatting and brand-name abuse in open-source package registries, where packages impersonating Claude, ChatGPT, DeepSeek, and related tools deliver infostealers to developers who install them without verifying publisher authenticity. The second is a rapidly evolving social engineering technique—variously described as “InstallFix” or “ClickFix”—in which attackers clone legitimate CLI tool installation pages and distribute them through search engine advertising, replacing authentic install commands with payloads that silently deploy credential-stealing malware. The third involves the abuse of legitimate AI platforms themselves as content hosting infrastructure: threat actors have hosted malicious terminal command lures on the claude.ai domain by publishing them as Claude artifacts and promoting the resulting URLs through organic and paid search.

Security operations teams, platform security engineers, and AI governance stakeholders should treat the AI developer tooling stack—comprising IDE extensions, package registries, CLI install scripts, and AI platform content—as a high-priority impersonation target requiring specific detection and prevention controls.


Background

Developer Trust as the Primary Attack Surface

AI coding tools occupy an unusual position in the enterprise threat model. Developers installing a new coding assistant or AI-powered CLI tool typically operate with elevated local permissions, manage secrets for cloud infrastructure, and commonly run installation scripts downloaded from the internet with minimal ceremony. This combination—elevated privilege, valuable credential store, and a common developer practice of executing remote installation scripts—makes the AI tool installation workflow a high-value target for credential theft.

The attack surface expanded materially during 2024 and 2025 as the developer AI tooling market fragmented into dozens of competing products. Where the 2021–2023 era was characterized by a small number of dominant AI tools (primarily OpenAI’s GPT-based offerings), by early 2026 developers routinely maintained installations of multiple AI coding assistants, MCP server integrations, local model runners, and CLI-based agentic tools. Each new product category creates a fresh impersonation opportunity: threat actors do not need to compromise any legitimate vendor—they only need to appear credibly in a search result or package listing at the moment a developer is evaluating a new tool.

The economic incentive is substantial. Developer workstations are disproportionately valuable targets compared to general consumer endpoints. A successful infostealer infection on a developer machine commonly yields cloud provider API keys (AWS, Azure, GCP), code repository tokens (GitHub, GitLab), secrets from environment variable files and CI/CD configurations, cryptocurrency wallet keys, and browser-stored credentials for SaaS platforms. These credentials can provide direct pathways into production infrastructure, a property that security researchers have documented as a draw for ransomware operators and data extortion groups.

The Search Engine Distribution Problem

A structural shift in how malware reaches developer victims accelerated through 2025. Traditional infostealer distribution relied heavily on phishing email campaigns, which enterprise email security gateways have become progressively more effective at filtering. Threat actors responded by migrating distribution to search engines, primarily through Google Ads. Security researchers at Push Security reported in early 2026 that four out of five InstallFix and ClickFix lure accesses originate from search engine queries rather than email links [1]. Enterprise email security gateways do not inspect search ad traffic, creating a coverage gap for organizations whose defenses center primarily on email filtering. Web proxy and DNS controls that inspect outbound traffic at the network layer can address this gap, but their effectiveness depends on whether fake AI tool domains have been flagged in threat intelligence feeds.

Malvertising for AI tools operates through a predictable pattern: a developer searches for a product installation guide (e.g., “how to install Claude Code,” “install DeepSeek Python,” “Cursor AI download”), encounters a sponsored result at the top of the search page, and follows it to a pixel-perfect clone of the legitimate vendor’s documentation. The clone reproduces all visual elements accurately but replaces the authentic installation command with a malicious alternative. The developer executes the command in their terminal—often without verifying the URL in the browser address bar, a step that the visual fidelity of fake documentation pages is specifically designed to discourage. Modern infostealers are engineered for rapid credential exfiltration; by the time detection occurs—if it occurs—primary credential stores have often already been harvested.


Security Analysis

InstallFix: Cloning CLI Install Pages for AI Tools

The most current and operationally significant campaign documented in this note is the InstallFix technique, described by Push Security researchers in March 2026 [1]. InstallFix is conceptually analogous to the well-documented ClickFix social engineering pattern, adapted for the specific workflow of developer CLI tool installation. Rather than instructing a victim to paste a command into a Windows Run dialog box (the ClickFix pattern), InstallFix presents a fully rendered installation guide for a legitimate developer tool—complete with accurate documentation prose and correct-looking shell code blocks—where the install command has been replaced by a malicious payload.

The documented InstallFix campaign specifically targeted searches for Claude Code, Anthropic’s agentic coding CLI. Researchers identified multiple fake installation pages hosted across several infrastructure providers, including Cloudflare Pages (claud-code[.]pages[.]dev, claude-code-docs-site[.]pages[.]dev), Squarespace (claudecode-developers[.]squarespace[.]com, claude-code-install[.]squarespace[.]com), and a dedicated domain (claude-code-macos[.]com) [1]. By distributing across multiple hosting platforms, the campaign reduced its vulnerability to single-provider takedowns. The malicious payloads themselves were hosted on separate infrastructure, including contatoplus[.]com, sarahmoftah[.]com, and claude[.]update-version[.]com [1].

On Windows systems, the attack chain progressed from cmd.exe through mshta.exe to remote HTML execution, ultimately deploying Amatera Stealer—an evolved variant of the Lumma Stealer family that uses WoW64 Syscalls and direct NTSockets for command-and-control communication, providing evasion against behavioral analysis tools that monitor standard Windows API calls [1]. The specificity of this infrastructure—purpose-built domain names, cross-platform payload capability, and use of a polished infostealer with active development—indicates this campaign was operated by threat actors with significant resources and operational sophistication.

Platform Abuse: Claude Artifacts as Malware Delivery Infrastructure

A structurally distinct variant of AI tool impersonation exploits legitimate AI platforms as hosting infrastructure for malicious content, rather than creating lookalike external sites. In a campaign documented by Moonlock Lab and Datadog Security Labs, threat actors published malicious terminal command lures as Claude artifacts—shareable content that Anthropic hosts on the claude.ai domain—and promoted the resulting URLs through search engine optimization and paid placement [2][3]. Because the malicious content was hosted on claude.ai itself, browser security warnings, enterprise web proxies filtering on domain reputation, and developer intuition were all ineffective at identifying the threat.

Victims were funneled through seemingly helpful search results for developer queries such as “online DNS resolver,” “macOS CLI disk space analyzer,” and “Homebrew installation guide.” The Claude artifact presented as a legitimate tool documentation page before instructing the user to run a terminal command that fetched the MacSync infostealer. MacSync exfiltrated macOS Keychain data, browser credentials, and cryptocurrency wallet contents to an attacker-controlled endpoint (a2abotnet[.]com) via HTTP POST with eight retry attempts using chunked archives, indicating a robust exfiltration design intended to succeed even on intermittent network connections [2][3]. Over 10,000 users were documented accessing artifacts containing these dangerous instructions before the campaign was identified and the content removed [2].

This attack vector presents a fundamental challenge for AI platform operators: content-sharing features that enable legitimate productivity use cases are structurally indistinguishable from attacker-controlled distribution infrastructure unless the platform implements independent analysis of hosted content for social engineering indicators. While platform operators have indicated that additional review mechanisms are under development in response to these campaigns, the underlying tension between open content sharing and abuse prevention is likely to persist across all AI platforms that offer similar functionality.

Typosquatting AI Package Names in PyPI and npm

The package registry attack surface for AI tool impersonation is well-documented and continues to expand as new AI tools reach mainstream developer adoption. In November 2023—nearly one year before discovery—an attacker uploaded two Python packages to the Python Package Index: gptplus, claiming to provide API access to GPT-4 Turbo, and claudeai-eng, impersonating Anthropic’s Claude AI Python interface. Kaspersky’s Global Research and Analysis Team discovered the packages in November 2024, by which time they had accumulated a combined 3,574 downloads across more than 30 countries [4]. Both packages contained Base64-encoded code that fetched a Java archive from GitHub, deploying JarkaStealer—a Java-based malware-as-a-service infostealer sold via Telegram that exfiltrated browser credentials, session tokens from Telegram, Discord, Steam, and Minecraft, and general system reconnaissance data [4].

The twelve-month window between upload and discovery illustrates a systemic weakness in package registry security review. Neither the PyPI maintainers nor the community flagged the packages during their active distribution period; the packages continued to accumulate downloads organically and were removed only after a researcher specifically investigated AI-branded package names for malicious content. The detection gap illustrated by this case—prolonged, unmonitored persistence following initial upload—has been documented across multiple package registries and represents a known structural challenge in open-source package security, even as registries including npm and crates.io have incrementally implemented automated scanning capabilities.

When DeepSeek achieved widespread developer attention in January 2025, threat actors moved within days. Two malicious packages—deepseeek (the brand name with an extra character) and deepseekai (the brand name with an appended suffix)—were uploaded twenty minutes apart on January 29, 2025, by an account that had been dormant since its creation in June 2023 [5]. The packages harvested environment variables including cloud API keys, database credentials, and infrastructure secrets, exfiltrating through a Pipedream-based command-and-control relay. They were removed after 222 total downloads—a smaller yield than the JarkaStealer campaign, but notable for the speed of deployment, which likely preceded broad threat intelligence coverage of DeepSeek-themed lures.
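The small deltas that make names like deepseeek effective are also what makes them mechanically detectable. The sketch below flags candidate package names within a small edit distance of known AI package names; the allowlist is a small illustrative stand-in for a maintained feed, and the distance threshold is a tunable assumption.

```python
# Illustrative allowlist of legitimate AI package names (not exhaustive).
KNOWN_AI_PACKAGES = {"anthropic", "openai", "deepseek", "transformers"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def flag_typosquat(candidate: str, max_distance: int = 2):
    """Return the known name a candidate appears to imitate, or None."""
    name = candidate.lower()
    if name in KNOWN_AI_PACKAGES:
        return None  # exact match: the legitimate package itself
    for known in KNOWN_AI_PACKAGES:
        if edit_distance(name, known) <= max_distance:
            return known
    return None
```

Run against a registry’s new-package feed, a check of this shape would have flagged both deepseeek (one insertion away from deepseek) and deepseekai (two insertions away) at publication time.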

Slopsquatting: AI Hallucinations as a Supply Chain Attack Vector

A structurally novel threat emerged in 2025 from the intersection of LLM code suggestions and package registries. Researchers analyzing large samples of LLM-generated code found that approximately 20% of generated code samples referenced non-existent package names—names the model fabricated rather than recalled from training data [6][7]. Critically, this hallucination is not random: 58% of hallucinated package names recurred consistently across ten separate model runs on the same prompt, meaning specific model-query combinations produce predictable, repeatable false recommendations [7]. This repeatability is the property that makes “slopsquatting” viable as an attack: an attacker who maps which package names a popular model consistently recommends for common developer queries can register those names with malicious payloads and rely on organic developer traffic to drive installation.

The threat is not merely theoretical. In January 2024, a researcher at Lasso Security registered huggingface-cli—a package name that multiple LLMs hallucinated in response to Hugging Face installation queries—on PyPI as a benign demonstration payload. The package accumulated over 30,000 downloads in three months, with at least one legitimate open-source project (Alibaba’s GraphTranslator) incorporating a pip install huggingface-cli command into its official documentation, reportedly following an AI-generated recommendation [6]. Had the package contained malicious code, the impact would have extended well beyond the initial developer who ran the install command. AI coding assistants integrated into editors such as Cursor, Claude Code, and GitHub Copilot extend the slopsquatting attack surface by generating installation commands inline within development workflows, where developers may execute suggestions without separately verifying package names.
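A corresponding defensive check is to treat any AI-suggested package name as unverified until its registry record passes basic sanity tests. The sketch below applies illustrative heuristics (existence, age, release history) to metadata supplied as a plain dict; in practice the dict would be populated from the registry’s public metadata API (PyPI, for example, serves per-project JSON), and both the field names and the thresholds here are assumptions to adapt.

```python
from datetime import datetime, timezone

def vet_package_metadata(meta: dict, min_age_days: int = 90, min_releases: int = 3):
    """Return a list of risk reasons; an empty list means no obvious red flags.

    `meta` is assumed to carry 'exists' (bool), 'first_release' (ISO date
    string), and 'release_count' (int) -- illustrative field names.
    """
    reasons = []
    if not meta.get("exists", False):
        # A name the registry has never seen is exactly the slopsquatting
        # window: an attacker can still register it.
        reasons.append("package does not exist on the registry")
        return reasons
    first = datetime.fromisoformat(meta["first_release"]).replace(tzinfo=timezone.utc)
    age_days = (datetime.now(timezone.utc) - first).days
    if age_days < min_age_days:
        reasons.append(f"first release only {age_days} days ago")
    if meta.get("release_count", 0) < min_releases:
        reasons.append("sparse release history")
    return reasons
```

In the huggingface-cli scenario, the name would have failed the existence check before the researcher registered it, and the freshly registered squat would then have failed the age check.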

The Broader Infostealer Campaign Landscape

The InstallFix and typosquatting campaigns documented above are not isolated incidents but components of a larger ecosystem of infostealer distribution targeting developers through AI tool impersonation. Several named threat actors and malware families deserve specific attention.

UNC6032, a Vietnam-linked threat actor tracked by Google’s Mandiant, operated a year-long campaign impersonating Luma AI, Kling AI, and Canva Dream Lab through over 30 fake websites and thousands of social media advertisements reaching millions of users [8][16]. The campaign’s technical sophistication was notable: UNC6032 deployed a multi-stage payload chain beginning with STARKVEIL, a Rust-based dropper that used Unicode Braille character obfuscation to hide suspicious strings from static analysis, followed by GRIMPULL, a .NET downloader with anti-virtualization checks and Tor-based command-and-control, ultimately delivering XWorm for persistent access and credential exfiltration [16].

On macOS—increasingly a primary platform for software developers—the Atomic macOS Stealer (AMOS) has been the dominant infostealer in fake AI tool campaigns, distributed through multiple channels including fake Homebrew installation pages, Google Ads promoting poisoned ChatGPT and Grok AI pages, fake DeepSeek download sites, and the EditProAI campaign in which a fake AI video editor delivered AMOS on macOS and Lumma on Windows [9][10][11]. The cross-platform capability of advanced campaigns—deploying AMOS on macOS and Lumma or its variants on Windows from the same lure infrastructure—indicates deliberate targeting of developer workstations regardless of operating system.

Lumma Stealer was among the highest-volume infostealers of 2025. Microsoft’s Digital Crimes Unit documented over 394,000 Windows machine infections in a two-month window between March and May 2025, ultimately obtaining a US District Court order to seize approximately 2,300 malicious Lumma-associated domains [12]. Lumma’s distribution through fake AI tool pages contributed to this volume, and the Amatera Stealer variant used in the InstallFix campaign against Claude Code represents the malware family’s continued evolution following the 2025 infrastructure disruption.

The Vietnamese-attributed Noodlophile Stealer campaign, documented by Morphisec in May 2025, illustrated how AI tool impersonation extends beyond coding tools to consumer-facing AI applications. Threat actors created fake AI video generation platforms impersonating “Luma DreamMachine” and promoted them through Facebook groups, with individual posts reaching over 62,000 views [13]. Victims who uploaded images expecting AI-generated video output received malicious executable payloads instead; Noodlophile harvested browser credentials and cryptocurrency wallet contents and offered an optional XWorm RAT installation for buyers who paid for persistent access capability [13].

SANDWORM_MODE: When the AI Toolchain Itself Becomes the Delivery Mechanism

The most architecturally novel development in AI tool impersonation is the emergence of attacks that do not merely impersonate AI tools to deliver malware but instead compromise the AI toolchain itself to propagate. In February 2026, Socket researchers discovered a family of 19 malicious npm packages operating as a self-propagating worm, collectively named SANDWORM_MODE [14]. The packages, distributed through typosquatting names targeting common npm dependencies, injected a rogue Model Context Protocol (MCP) server configuration into the AI coding tool configuration files of Claude Code, Claude Desktop, Cursor, VS Code Continue, and Windsurf/Codeium on infected developer machines [14].

Once installed, the worm used the injected MCP server to feed hidden instructions to the developer’s AI coding assistant—directing it to read and exfiltrate sensitive files, API keys, SSH keys, and environment variables. Simultaneously, the worm stole npm authentication tokens from the infected machine and used them to publish malicious versions of packages the developer had previously published, propagating the attack to downstream consumers of those packages [14]. This represents a meaningful escalation in the threat model: rather than requiring a developer to install a specifically named malicious package, SANDWORM_MODE achieves persistence and propagation by embedding itself into the AI assistant infrastructure developers use throughout their workday.


Recommendations

Immediate Actions

Security teams should treat the AI developer tool installation workflow as a high-risk operation requiring the same scrutiny applied to running untrusted code in production. Endpoint detection and response agents capable of behavioral analysis should be deployed on developer workstations, and security operations playbooks should include AI-tool-specific infostealer indicators such as processes spawned by terminal emulators that subsequently write to credential store locations, contact unusual external IPs, or enumerate environment variables.
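Alongside EDR telemetry, a cheap retrospective check is to triage shell history for command shapes associated with these lures. The patterns below (a remote script piped straight to a shell, mshta fetching remote content, inline Base64 decoding) are illustrative starting points, not a complete signature set.

```python
import re
from pathlib import Path

# Command shapes associated with InstallFix-style lures; illustrative, not exhaustive.
SUSPICIOUS = [
    re.compile(r"curl[^\n|]*\|\s*(?:ba|z)?sh"),  # remote script piped to a shell
    re.compile(r"mshta(?:\.exe)?\s+https?://"),  # mshta executing remote HTML (Windows)
    re.compile(r"base64\s+(?:-d|--decode)\b"),   # inline payload decoding
]

def triage_history(path: Path):
    """Return (line_number, line) pairs from a shell history file matching lure patterns."""
    hits = []
    if not path.exists():
        return hits
    for n, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((n, line.strip()))
    return hits
```

Sweeping files such as ~/.bash_history and ~/.zsh_history across a developer fleet with a check like this can surface past execution of a poisoned install command even when the payload evaded endpoint detection at the time.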

For package registry hygiene, organizations should require developers to verify publisher identity and package download counts before installing any AI-branded packages. The combination of a recognizable AI product name, a recently created publisher account, and a low download count is the characteristic profile of a typosquatting attack. Dependency review tooling such as Socket.dev, Snyk, or Dependabot should be configured with rules that flag newly registered packages whose names include AI brand strings.

DNS and web proxy controls should be updated to include feeds of known fake AI tool domains. Security teams can monitor threat intelligence sources such as Push Security’s published IOC lists [1], Kaspersky threat bulletins, and open-source IOC sharing platforms for current campaign domains. URL inspection policies should alert on any domain that closely resembles official AI vendor documentation domains (e.g., anthropic.com, cursor.com, deepseek.com) without matching them exactly.
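The InstallFix domains share a recognizable shape: each embeds an AI brand string in a host that is not on the vendor’s official domain. A minimal filter can encode exactly that rule. The brand-to-domain mapping below is an illustrative stand-in for a maintained list, and plain substring matching will miss truncated variants such as claud-code; fuzzy matching is the natural extension.

```python
# Illustrative brand-to-official-domain mapping; production lists would come
# from threat intelligence feeds and vendor documentation.
BRANDS = {
    "claude": {"anthropic.com", "claude.ai", "claude.com"},
    "deepseek": {"deepseek.com"},
    "cursor": {"cursor.com"},
}

def is_lookalike(host: str) -> bool:
    """Flag hosts that embed an AI brand string but are not the vendor's domain."""
    host = host.lower().rstrip(".")
    for brand, official in BRANDS.items():
        if brand in host:
            # Accept the official domains themselves and their subdomains.
            if any(host == d or host.endswith("." + d) for d in official):
                return False
            return True
    return False
```

Applied at a web proxy or DNS resolver log, this rule would flag hosts like claude-code-macos[.]com and claude-code-install[.]squarespace[.]com while passing legitimate vendor subdomains.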

Short-Term Mitigations

Organizations that have deployed AI coding assistants to developer populations should audit MCP server configurations across affected workstations. Given the SANDWORM_MODE campaign’s technique of injecting rogue MCP server entries into coding tool configuration files, a baseline audit of what MCP servers are configured in Claude Desktop, Claude Code, Cursor, and similar tools is a prerequisite for understanding current exposure. Any unrecognized MCP server entries should be treated as potential indicators of compromise.
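The baseline audit can be scripted by parsing each tool’s configuration file and diffing the declared servers against an approved set. The sketch below assumes the mcpServers JSON key used by several of these tools; the file paths (macOS locations shown) and the allowlist are illustrative assumptions to adapt per operating system and organization.

```python
import json
from pathlib import Path

# Illustrative config locations and allowlist; adapt locally.
CONFIG_PATHS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
    Path.home() / ".cursor/mcp.json",
]
APPROVED_SERVERS = {"filesystem", "github"}

def audit_mcp_config(path: Path) -> dict:
    """Return MCP server entries in one config file that are not on the allowlist."""
    if not path.exists():
        return {}
    config = json.loads(path.read_text())
    servers = config.get("mcpServers", {})
    return {name: entry for name, entry in servers.items()
            if name not in APPROVED_SERVERS}

def run_audit(paths=CONFIG_PATHS) -> dict:
    """Map each config path to the sorted names of unapproved servers it declares."""
    findings = {}
    for path in paths:
        unknown = audit_mcp_config(path)
        if unknown:
            findings[str(path)] = sorted(unknown)
    return findings
```

Any name the audit surfaces warrants incident response treatment rather than simple removal: as the SANDWORM_MODE campaign demonstrates, a rogue entry may be feeding instructions to the assistant, not merely extending it.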

For teams using AI coding assistants that generate package installation commands, implementing a verification step before executing AI-generated pip install, npm install, or curl | bash commands is a meaningful risk reduction. This can be operationalized through developer education, through pre-commit hooks that verify package names against known registries, or through AI assistant configuration that instructs the model to recommend verifying package names before installation. The slopsquatting threat specifically calls for developers to cross-reference any AI-suggested package name against the actual registry before running the install.
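One way to operationalize the verification step is a hook that extracts package names from generated install commands and refuses anything outside a vetted set. The regex below covers only pip install and the vetted list is a placeholder; a real hook would also cover npm, cargo, and curl-pipe patterns and would source the list from the organization’s artifact proxy.

```python
import re

# Captures the argument list of a `pip install` command; illustrative, not exhaustive.
PIP_INSTALL = re.compile(r"pip3?\s+install\s+([^\n|;&]+)")

VETTED = {"anthropic", "requests", "numpy"}  # placeholder allowlist

def unvetted_packages(text: str) -> list:
    """Return package names in pip install commands that are absent from the allowlist."""
    found = set()
    for match in PIP_INSTALL.finditer(text):
        for token in match.group(1).split():
            if token.startswith("-"):
                continue  # skip flags such as --upgrade
            found.add(token.lower())
    return sorted(found - VETTED)
```

Wired into a pre-commit hook or an assistant output filter, a check of this shape turns the huggingface-cli scenario from a silent install into a review prompt.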

Procurement and platform security controls should address the VS Code extension ecosystem specifically. The roughly fourfold increase in malicious VS Code extension detections between 2024 and 2025—from 27 to 105 documented incidents—indicates that this attack surface is actively being exploited at scale [15]. Enforcing an approved extension list or requiring security review before installation of new VS Code extensions represents a meaningful control, particularly for developers with access to production credentials and cloud environments.

Strategic Considerations

The InstallFix campaign against Claude Code and the MacSync campaign via Claude artifacts both demonstrate that threat actors have invested in tooling and infrastructure specifically targeting AI developer tool users. This level of investment—custom domains, polished cloned documentation, purpose-built infostealers with C2 evasion capabilities—is consistent with the high value of developer credentials as an initial access vector. Security teams should anticipate continued expansion of these campaigns to additional AI tools as the developer tooling market grows.

Platform operators—including Anthropic, npm, and the VS Code Marketplace team—have taken documented steps to detect and remove malicious AI-tool-impersonating content, though the scope and effectiveness of these efforts varies by platform. However, the structural economics favor attackers: new malicious packages, extensions, and fake domains cost little to deploy, while detection and removal require sustained operational effort. Organizations that rely solely on platform-side controls are accepting residual risk that is difficult to quantify. Defense-in-depth—combining platform controls, endpoint detection, network monitoring, and developer security awareness—provides more durable protection than reliance on any single layer.

The agentic AI threat vector represented by SANDWORM_MODE is likely to expand. As AI coding assistants acquire more capability to autonomously execute code, manage files, and interact with external services, the consequences of a compromised AI toolchain configuration escalate from credential theft to potential autonomous lateral movement within development environments. Security frameworks that consider agentic AI tools as trust boundaries requiring the same scrutiny as network connections to external services will be better positioned to address these emerging threats.


CSA Resource Alignment

This note’s findings map to several active CSA research and framework initiatives.

The MAESTRO Agentic AI Threat Modeling framework addresses agentic AI systems as threat surfaces. The SANDWORM_MODE worm—which achieves persistence and propagation by injecting itself into AI coding assistant configurations—is an example of the agentic trust boundary violations that MAESTRO’s threat model anticipates. Organizations should use MAESTRO’s agent trust zone analysis when evaluating MCP server configurations and AI assistant tool integrations, treating each external integration as a potential attacker-controlled context injection point.

The AI Controls Matrix (AICM) supply chain security domain (Supply Chain Risk Management) provides control objectives directly applicable to the typosquatting and fake installer threats. AICM controls around third-party AI component vetting, provenance verification for AI-adjacent packages, and approved tool registries address the package registry attack surface documented in this note. The AICM v1.0 guidance on shared security responsibility for AI systems is particularly relevant in the context of platform-hosted malicious content such as the Claude artifacts campaign.

The Cloud Controls Matrix (CCM) under Supply Chain Management and Threat and Vulnerability Management provides governance controls for the organizational policies needed to implement the technical recommendations above. CCM’s application-level security controls and identity and access management domain apply to the credential exfiltration threat described throughout this note.

The CSA Zero Trust Guidance is directly relevant to the developer workstation threat model. The Zero Trust principle of continuous verification—specifically, the extension of least-privilege and explicit verification requirements to internal developer tooling—would structurally limit the damage that a compromised AI tool installation can cause by restricting the credential scope available to newly installed tools.

The CSA AI Organizational Responsibilities guidance addresses how enterprises should govern AI tool procurement and deployment. The campaigns documented here reinforce the governance argument: organizations that deploy AI coding assistants to developer populations without establishing package vetting policies, MCP server governance, and developer security awareness specific to AI tooling are accepting attack surface without commensurate controls.


References

[1] Push Security, “InstallFix: Fake Claude Code Install Pages Deliver Infostealers via Malvertising,” Push Security Blog, March 2026. URL: https://pushsecurity.com/blog/installfix

[2] BleepingComputer, “Claude LLM Artifacts Abused to Push Mac Infostealers in ClickFix Attack,” BleepingComputer, 2025. URL: https://www.bleepingcomputer.com/news/security/claude-llm-artifacts-abused-to-push-mac-infostealers-in-clickfix-attack/

[3] Datadog Security Labs, “Tech Impersonators: ClickFix and macOS Infostealers,” Datadog Security Labs, 2025. URL: https://securitylabs.datadoghq.com/articles/tech-impersonators-clickfix-and-macos-infostealers/

[4] Kaspersky GReAT, “PyPI Attack: Fake ChatGPT and Claude Packages Deliver JarkaStealer,” Kaspersky Press Release, November 20, 2024. URL: https://www.kaspersky.com/about/press-releases/kaspersky-uncovers-year-long-pypi-supply-chain-attack-using-ai-chatbot-tools-as-lure; per-package download counts from Kaspersky Blog: https://www.kaspersky.com/blog/jarkastealer-in-pypi-packages/52640/

[5] BleepingComputer, “DeepSeek AI Tools Impersonated by Infostealer Malware on PyPI,” BleepingComputer, January 2025. URL: https://www.bleepingcomputer.com/news/security/deepseek-ai-tools-impersonated-by-infostealer-malware-on-pypi/

[6] BleepingComputer, “AI Hallucinated Code Dependencies Become New Supply Chain Risk,” BleepingComputer, April 2025. URL: https://www.bleepingcomputer.com/news/security/ai-hallucinated-code-dependencies-become-new-supply-chain-risk/

[7] Cloudsmith / Mend.io, “Slopsquatting and Typosquatting: How to Detect AI-Hallucinated Malicious Packages,” Cloudsmith Blog, 2025. URL: https://cloudsmith.com/blog/slopsquatting-and-typosquatting-how-to-detect-ai-hallucinated-malicious-packages

[8] CyberScoop, “AI Video Generator Malware: Mandiant UNC6032 Vietnam,” CyberScoop, 2025. URL: https://cyberscoop.com/ai-video-generator-malware-mandiant-unc6032-vietnam/

[9] eSentire, “Fake DeepSeek Site Infects Mac Users with Atomic Stealer,” eSentire Threat Intelligence, 2025. URL: https://www.esentire.com/blog/fake-deepseek-site-infects-mac-users-with-atomic-stealer

[10] Malwarebytes, “Google Ads Funnel Mac Users to Poisoned AI Chats That Spread the AMOS Infostealer,” Malwarebytes Blog, December 2025. URL: https://www.malwarebytes.com/blog/news/2025/12/google-ads-funnel-mac-users-to-poisoned-ai-chats-that-spread-the-amos-infostealer

[11] Malwarebytes, “Free AI Editor Lures In Victims, Installs Information Stealer Instead on Windows and Mac,” Malwarebytes Blog, November 2024. URL: https://www.malwarebytes.com/blog/news/2024/11/free-ai-editor-lures-in-victims-installs-information-stealer-instead-on-windows-and-mac

[12] Microsoft Security Blog, “Lumma Stealer: Breaking Down the Delivery Techniques and Capabilities of a Prolific Infostealer,” Microsoft, May 21, 2025. URL: https://www.microsoft.com/en-us/security/blog/2025/05/21/lumma-stealer-breaking-down-the-delivery-techniques-and-capabilities-of-a-prolific-infostealer/

[13] Morphisec, “New Noodlophile Stealer Distributed via Fake AI Video Generation Platforms,” Morphisec Blog, May 2025. URL: https://www.morphisec.com/blog/new-noodlophile-stealer-fake-ai-video-generation-platforms/

[14] Socket.dev, “SANDWORM_MODE: npm Worm Enables AI Toolchain Poisoning via MCP,” Socket Security Blog, February 2026. URL: https://socket.dev/blog/sandworm-mode-npm-worm-ai-toolchain-poisoning

[15] Visual Studio Magazine, “Threat Actors Keep Weaponizing VS Code Extensions,” Visual Studio Magazine, December 2025. URL: https://visualstudiomagazine.com/articles/2025/12/08/threat-actors-keep-weaponizing-vs-code-extensions.aspx

[16] Google Cloud / Mandiant Threat Intelligence, “Cybercriminals Weaponize Fake AI Websites,” Google Cloud Blog, 2025. URL: https://cloud.google.com/blog/topics/threat-intelligence/cybercriminals-weaponize-fake-ai-websites/
