Security researchers discovered 341 malicious skills in ClawHub, the marketplace for OpenClaw AI agents, in what represents one of the largest coordinated supply chain attacks against the AI agent ecosystem. The malicious skills—335 from a single campaign dubbed “ClawHavoc”—masqueraded as legitimate cryptocurrency trading bots and development utilities while delivering Atomic Stealer malware designed to steal exchange API keys, wallet private keys, and SSH credentials. Out of 2,857 skills audited, 11.9% were malicious—a staggering compromise rate for any package ecosystem.
How the Attack Bypassed Security
The sophistication here isn’t in the malware itself, but in how it bypassed detection. The attackers kept malicious logic entirely external to the SKILL.md files that define OpenClaw extensions. Traditional static analysis tools scan code for suspicious patterns, but they don’t parse English-language documentation. So the attackers weaponized the one place security tools don’t look: the Prerequisites section.
Koi Security researcher Oren Yomtov explained the deception: “You install what looks like a legitimate skill…The skill’s documentation looks professional. But there’s a ‘Prerequisites’ section” requesting installation of malicious software. The skills masqueraded as Solana wallet trackers, YouTube summarizers, Polymarket trading bots, and even ClawHub typosquats. Professional documentation. Realistic use cases. And buried in the setup instructions, commands that deliver malware.
The platform-specific attack chains demonstrate technical sophistication. Windows users were directed to download password-protected archives (password: openclaw) from GitHub, with encryption deliberately bypassing automated security scanners and email filters. macOS users received Base64-obfuscated scripts hosted on glot.io that fetched next-stage payloads from an unencrypted HTTP endpoint (91.92.242.30). Both paths delivered Atomic Stealer, a commodity infostealer available for $500-1000/month on underground forums.
What Gets Stolen
Atomic Stealer targets cryptocurrency assets systematically. It harvests exchange API keys from Binance, Coinbase, and similar platforms. It extracts wallet private keys and files from Electrum, Exodus, Atomic, and Coinomi. It steals SSH credentials, browser passwords, and macOS Keychain data. The attack infrastructure—traced to IP 91.92.242.30 by researchers at Snyk and OpenSourceMalware—shows a coordinated campaign specifically targeting cryptocurrency holders.
Paul McCarty of OpenSourceMalware noted: “All these skills share the same command-and-control infrastructure and use social engineering to convince users to execute malicious commands, which then steal crypto assets.” The cryptocurrency focus makes economic sense—high-value targets, irreversible theft, and an audience likely to have significant crypto holdings.
Why AI Agent Marketplaces Are Vulnerable
This attack exposes a fundamental design problem with AI agent marketplaces that are open by default. ClawHub requires only a GitHub account at least one week old to publish skills. There’s no pre-publication security review. No reputation system. No vetting of maintainers. When you combine minimal barriers to entry with AI agents that have system-level permissions, you create an ideal supply chain attack surface.
Security researcher Simon Willison coined the term “lethal trifecta” to describe OpenClaw’s vulnerability: AI agents with access to private data, exposure to untrusted content, and external communication capabilities. When malicious skills install, they inherit the agent’s broad permissions—email access, filesystem operations, shell execution. VirusTotal’s analysis using Code Insight (powered by Gemini 3 Flash) identified hundreds of malicious skills by analyzing the English-language instructions and flagging suspicious patterns like external code downloads and obfuscated scripts.
The ClawHub compromise fits a broader 2026 trend. Gartner predicts 40% of enterprise applications will embed AI agents by year’s end, up from less than 5% in 2025. Barracuda Security’s November 2025 report identified 43 different agent framework components with embedded supply chain vulnerabilities. And 65% of large companies now cite third-party and supply chain risks as their primary cyber resilience concern, up from 54% the previous year. The federal government issued a Request for Information on AI agent security in January 2026, signaling regulatory attention.
What Developers Should Do
If you’re running OpenClaw or similar AI agents, take immediate action. Block egress to 91.92.242.30 at the network level. Audit your installed skills for packages by users zaycv, Ddoy233, and hightower6eu. Search for clawhub and clawdhub1 typosquatting packages. Remove anything suspicious immediately.
Going forward, treat AI agent skills with the same suspicion you’d treat browser extensions or package manager dependencies. Verify publishers before installing. Read prerequisites sections with extreme skepticism—if a skill asks you to download and execute external code, that’s a red flag. Sandbox skill execution. Avoid cryptocurrency-themed tools from new or unknown publishers, especially in the current threat environment.
New security tools are emerging for this threat landscape. Snyk released AI-BOM to inventory AI agent dependencies across codebases. The mcp-scan tool detects malicious Model Context Protocol servers via prompt injection patterns. VirusTotal’s Code Insight now analyzes skill packages using LLMs to identify suspicious behavior patterns that traditional static analysis misses.
The Bigger Picture
OpenClaw creator Peter Steinberger implemented a reporting feature allowing users to flag malicious skills, with auto-hiding after three unique reports. But reactive moderation doesn’t solve the systemic issue: AI agent marketplaces that prioritize openness over security create supply chain vulnerabilities by design. The “one week GitHub account” restriction is laughably insufficient. Traditional AppSec tools that can’t parse English documentation are playing catch-up to threats that hide in plain sight.
This isn’t the last supply chain attack we’ll see against AI agent ecosystems. As Menlo Security warned in their 2026 predictions, “AI agents are the new insider threat.” When agents have system-level permissions, access to sensitive data, and install untrusted code from open marketplaces, attacks like ClawHavoc become inevitable—not exceptional.






