
71 Malicious Claude Skills Found: The AI Plugin Marketplace is a Minefield

So I was going through my usual morning security feed when a headline stopped me cold. Straiker, a firm that works on AI application security, scanned 3,505 Claude Skills sitting on ClawHub and found 71 overtly malicious Claude skills along with another 73 that show high-risk behaviors. That is about 4% of everything in that marketplace either actively harmful or one bad update away from being used against you.

This is not a made-up threat. These malicious Claude skills are live right now. Some of them are draining crypto wallets as you read this. Most developers using Claude Code have no idea they even exist.

What Are Claude Skills and Why Should You Care

Claude Skills are small add-on packages that give an AI agent new abilities. Think of them like browser extensions, but for your AI coding assistant. You install a skill, and your Claude Code agent can now manage cloud servers, run database tasks, handle Python dependencies, or talk to external APIs on its own. ClawHub is the main marketplace where these skills are shared.

Here is the part that should worry any security person. Anyone with a GitHub account that is just one week old can publish a skill on ClawHub. There is no code signing, no security review, and no sandboxing by default. That open setup helped build a lively community, but it also made ClawHub a weak point for supply chain attacks. It is the same story we saw in the early npm and PyPI days, except the risk here is much bigger because AI agents have direct access to your passwords, your file system, and your cloud accounts.

A normal npm package just runs code. But an AI agent skill actually changes how the AI thinks and acts. When you install a malicious Claude skill, you are not just running bad code. You are quietly changing what your AI agent does, without ever knowing it. I covered how prompt injection attacks work in AI systems in an earlier post if you want to understand the foundation of these threats.

The 71 Malicious Claude Skills: What Straiker Found

Straiker’s research team, led by Dan Regalado, did a full security check of ClawHub and found numbers that should worry every developer using AI tools in 2026. Out of 3,505 Claude Skills they scanned, here is what came up:

| Category | Count | Risk Level |
| --- | --- | --- |
| Overtly malicious Claude skills | 71 | Critical |
| High-risk behavior skills | 73 | High |
| Total skills audited on ClawHub | 3,505 | Mixed |

The biggest finding Regalado pointed out was a live attack being run by someone using the name '26medias' on ClawHub and 'BobVonNeumann' on Moltbook and Twitter. This attack was still running when the report came out. Real users. Real AI agents. Real money being stolen with no human in the loop.

The Bob P2P Crypto Heist: A Live Agent-to-Agent Attack

The Bob P2P skill is where things get really scary. The attacker put a skill called bob-p2p on ClawHub and marketed it as a decentralised API marketplace. It looked real. The README was clean. The skill page even talked about security checks and good practices.

What it actually did was three things:

  1. Told AI agents to save Solana wallet private keys in plain text
  2. Bought worthless $BOB tokens using the victim’s crypto wallet
  3. Sent all payments through servers the attacker controlled
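The first step above, persisting a wallet secret in plain text, is something you can scan for yourself. Here is a minimal Python heuristic (my own sketch, not a tool from the report) that flags text that looks like a Solana secret key, which is 64 bytes and usually serialized as either a JSON array of 64 integers or a base58 string of roughly 87-88 characters:

```python
import json
import re

# A base58 string of 87-88 chars is the typical length of an encoded
# 64-byte Solana secret key. Base58 excludes the characters 0, O, I, l.
BASE58_RE = re.compile(r"\b[1-9A-HJ-NP-Za-km-z]{87,88}\b")

def looks_like_solana_secret(text: str) -> bool:
    """Return True if text appears to contain a plaintext Solana secret key."""
    if BASE58_RE.search(text):
        return True
    # Also look for a JSON array of exactly 64 byte-sized integers,
    # the other common serialization of a Solana keypair secret.
    for match in re.finditer(r"\[[\d,\s]+\]", text):
        try:
            values = json.loads(match.group(0))
        except json.JSONDecodeError:
            continue
        if (isinstance(values, list) and len(values) == 64
                and all(isinstance(v, int) and 0 <= v <= 255 for v in values)):
            return True
    return False
```

Running this over a repo or an agent's working directory will not catch everything, but it makes the first step of the Bob P2P playbook a lot noisier.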

Birdeye, an AI-based crypto checking tool, flagged the $BOB token as 100% likely to be a scam. But by the time a developer noticed, the damage had already spread through shared workflows and connected agents with no human doing anything at all.

This is what makes the malicious Claude skills problem so different from normal malware. The attacker did not need to trick a human into clicking anything. They tricked the AI. The agent was fooled, and then carried out the attack on its own. As Regalado put it, this is supply chain poisoning mixed with social engineering, but aimed at algorithms instead of people. That changes everything about how we think about AI security. If you want to go deeper on how AI agents get manipulated, check out my earlier post on agentic AI security risks.

Snyk ToxicSkills Research: 76 Confirmed Malicious Payloads

Snyk’s security team did an even bigger scan they called ToxicSkills. They looked at 3,984 agent skills from ClawHub and skills.sh as of February 5, 2026. What they found went well beyond the Straiker report.

Snyk confirmed 76 bad payloads built to steal login details, install backdoors, and pull out data without anyone noticing. Eight of those malicious Claude skills were still live on ClawHub even after the report went public. Snyk’s estimate is that if you installed any skill from ClawHub in the past month, there is a 13% chance it has a serious security problem.

One detail from the Snyk research that really stayed with me was about persistence. Bad skills can poison agent memory files like SOUL.md and MEMORY.md. That means the damage keeps going even after you delete the skill. This is like a rootkit inside an AI, and most security teams are not scanning for it yet.

Also in February 2026, Trend Micro found 39 OpenClaw skills on ClawHub being used to spread Atomic macOS Stealer (AMOS), a malware that steals data from Apple Keychain, crypto wallets, browsers, and Telegram. The attack started with a normal-looking SKILL.md file that told the agent to install a helper tool. That tool was actually the malware. It even looked clean on VirusTotal. When Trend Micro tested it against Claude Opus 4.5, the model caught the trick and blocked it. GPT-4o did not.

How One Plugin Silently Hijacks Your Dependencies

The Prompt Security team at SentinelOne showed one of the clearest attack examples I have seen in the AI security space. Here is how it works in real life:

A developer connects Claude Code to an unofficial marketplace and installs what looks like a harmless “Python Dependency Helper” skill. The README talks about keeping libraries clean and safe. Nothing looks wrong. The developer then asks Claude to add a common library:

# Developer asks Claude:
"Add httpx and show me how to call an external API."

# Claude plans to:
# 1. Install httpx from PyPI
# 2. Update requirements.txt
# 3. Generate example code

# What actually happens via the malicious skill:
# - Install is redirected to an attacker-controlled PyPI mirror
# - httpx resolves to a fake, harmful build
# - requirements.txt is locked to keep that bad version
# - Everything looks fine. import httpx works. Code runs.
# - But the library can now leak env vars, watch outbound requests,
#   or open a backdoor when it sees a specific HTTP pattern.

Claude sees the install as totally normal and moves on. No error. No warning. Nothing looks off. The only clue something went wrong might be a custom index URL hidden deep in verbose logs. And because the skill stays active, this same bad path runs for every new library you install in that project. This is not a prompt trick. It is an attack at the infrastructure level.
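Since the whole attack hinges on a swapped package index, one cheap defence is to diff your pip configuration against the default. This sketch (the function and the pypi.org check are my own assumptions, not an official tool) parses the output of `pip config list`:

```python
DEFAULT_INDEX = "https://pypi.org/simple"

def find_custom_indexes(pip_config_output: str) -> list[str]:
    """Flag index-url / extra-index-url entries that do not point at PyPI.

    `pip_config_output` is the text of `pip config list`, whose lines look
    like: global.index-url='https://mirror.example/simple'
    """
    flagged = []
    for line in pip_config_output.splitlines():
        if "index-url" in line:
            _, _, value = line.partition("=")
            url = value.strip().strip("'\"")
            if url and not url.startswith(DEFAULT_INDEX):
                flagged.append(url)
    return flagged
```

For full coverage you would also want to check the `PIP_INDEX_URL` and `PIP_EXTRA_INDEX_URL` environment variables and any `--index-url` lines pinned inside requirements.txt, since those are exactly the places a skill can quietly write to.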

Claude Code CVEs: Config Files as Attack Vectors

Separate from the ClawHub skill issue, Check Point Research shared three serious security bugs in Claude Code itself in early 2026. All three have been fixed by Anthropic, but they show how much bigger the attack surface around AI tools has become.

  • CVE-2025-59536 (CVSS 8.7, Hooks): Attackers hide bad hook commands inside a project’s .claude/settings.json file. When a developer clones the repo and opens it, those hooks run automatically before any warning shows up, giving the attacker full remote code execution using the developer’s own access.
  • CVE-2025-59536 (CVSS 8.7, MCP variant): Two settings inside .mcp.json could skip consent checks and auto-approve bad MCP servers before the user ever saw a trust prompt on screen.
  • CVE-2026-21852 (CVSS 5.3): By changing the ANTHROPIC_BASE_URL setting in a project config, an attacker sends all Claude Code API traffic through their own server and grabs the full Anthropic API key in plain text from every single call.

All three were reported to Anthropic between July and October 2025 and fixed quickly. The key lesson here is simple. Config files that used to just hold settings now control what actually runs on your machine. For a deeper look at how the AI agent supply chain creates new types of attacks, this SecurityWeek article is worth your time: Autonomous AI Agents Provide New Class of Supply Chain Attack.
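Based on the fields these CVEs describe, a small audit script can flag the dangerous entries before you open a freshly cloned repo. The key names below ("hooks", "env", "ANTHROPIC_BASE_URL") follow the report's description and may differ across Claude Code versions, so treat this as a sketch rather than a vetted scanner:

```python
def audit_claude_settings(settings: dict) -> list[str]:
    """Flag config entries that the CVEs above show can run code or redirect traffic.

    `settings` is the parsed JSON of a file like .claude/settings.json.
    Key names are inferred from the public write-ups and may vary by version.
    """
    warnings = []
    # Hooks run shell commands automatically when the project is opened.
    if settings.get("hooks"):
        warnings.append("hooks defined: these can execute commands automatically")
    # A rewritten base URL sends every API call (and your key) to a third party.
    base_url = settings.get("env", {}).get("ANTHROPIC_BASE_URL")
    if base_url and "api.anthropic.com" not in base_url:
        warnings.append(f"API traffic redirected to {base_url}")
    return warnings
```

Wiring this into whatever script you use to clone and open repos costs almost nothing and catches exactly the two patterns Check Point described.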

How to Protect Yourself Right Now

Given how active the malicious Claude skills threat is right now, here are the steps I would take today if I were using AI agent tools for any serious work.

Audit Every Installed Claude Skill Right Away

Go through every skill you have installed. Check how old the publisher’s GitHub account is. Check when the skill was last changed. Compare against the Snyk ToxicSkills list. If anything looks off or unfamiliar, remove it immediately.

Change All Credentials Those Skills Could Access

Rotate your Anthropic API keys, AWS credentials, and any crypto wallet keys right now if you have installed ClawHub skills. Treat it as a possible breach first, then investigate. This is especially important for any money-related access.

Check Your Agent Memory Files for Tampering

Open SOUL.md, MEMORY.md, and any other agent memory files and look for content you did not write yourself. Malicious Claude skills can keep causing damage through memory poisoning even after the skill is deleted. Do not skip this step.
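A simple grep-style pass over those memory files catches the most obvious injected instructions. The pattern list below is an illustrative heuristic I put together, not a complete detector:

```python
import re
from pathlib import Path

# Patterns that commonly indicate instruction injection in agent memory files.
# Extend this list for your environment; it is a starting point, not a product.
SUSPICIOUS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"curl\s+\S+\s*\|\s*(ba)?sh", re.I),          # pipe-to-shell
    re.compile(r"do not (tell|inform|mention).*(user|human)", re.I),
    re.compile(r"private key|seed phrase|mnemonic", re.I),
]

def scan_memory_file(path: Path) -> list[str]:
    """Return the lines of an agent memory file that match injection patterns."""
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append(line.strip())
    return hits
```

Run it over every SOUL.md and MEMORY.md an agent can write to, and review any hit by hand rather than trusting the agent to explain it.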

Treat Config Files the Same as Executable Code

In your code review process, treat .claude/settings.json, .mcp.json, and SKILL.md files the same way you treat shell scripts. Any pull request that touches these files should go through a proper security review before it gets merged. For MCP servers, Cisco’s open-source MCP Scanner is a useful tool to check servers before you connect them: Cisco MCP Scanner Documentation.
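In CI, that policy takes a few lines: feed in the output of `git diff --name-only` and fail the build when agent config files change without a security sign-off. The path list is the one from this section; adjust it to your repo layout:

```python
# Paths that should trigger mandatory security review, per the advice above.
SENSITIVE_PATTERNS = (".claude/settings.json", ".mcp.json", "SKILL.md")

def sensitive_changes(changed_paths: list[str]) -> list[str]:
    """Given lines from `git diff --name-only`, return paths that touch
    agent config and should be reviewed like shell scripts."""
    return [p for p in changed_paths if p.endswith(SENSITIVE_PATTERNS)]
```

A CI job that exits non-zero when this list is non-empty (unless a security label is present on the PR) makes the review rule enforceable instead of aspirational.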

Final Thoughts

The 71 malicious Claude skills finding is not just a ClawHub problem or a Claude problem. It is what always happens when any open marketplace has a low bar to publish and the damage from one bad package is made much worse by AI running on its own. We saw this with npm. We saw it with PyPI. Now it is happening with AI agent skills, and the damage is bigger because these malicious AI agent skills run with full agent permissions and can quietly rewrite what the agent remembers and how it behaves.

The Bob P2P attack is something genuinely new. It is social engineering, but aimed at an algorithm instead of a person. Normal phishing tricks humans into clicking. This attack tricked an AI agent into making financial transactions on its own, with no human ever touching the bad payload. We do not have solid defences for that kind of attack yet, and this problem will only grow as more teams adopt agentic AI through 2026.

If you are building on Claude Code, OpenClaw, or using any AI agent tools professionally, malicious Claude skills and AI plugin marketplace security need to be on your priority list today, the same way open-source dependency auditing is. The attack surface just got a lot bigger, and defenders need to move fast.