Hero image for OpenClaw Is a Security Dumpster Fire and Everyone's Using It Anyway
7 min read

OpenClaw Is a Security Dumpster Fire and Everyone's Using It Anyway

The viral AI assistant Moltbot (now OpenClaw) promises to manage your entire digital life. Security researchers are calling it a nightmare waiting to happen. They're probably right.

Remember when I wrote about the security nightmare of agentic AI last week? About how AI browser agents create massive attack surfaces and nobody’s solved prompt injection?

Yeah, well, someone went ahead and built the worst case scenario anyway. And it’s gone viral.

Meet OpenClaw, Your New Security Liability

OpenClaw (which you might know as Moltbot or Clawdbot depending on when you first heard about it) is an open source AI personal assistant created by Austrian developer Peter Steinberger. The tool is designed to “manage your digital life” by acting autonomously on your behalf.

And by autonomously, I mean autonomously. This thing connects to your chatbot of choice and then links up with your calendar, your browser, your email, your WhatsApp, your files… basically everything. It shops online for you. It reads and writes your emails. It manages your schedule. You just tell it what you want and walk away.

The productivity angle is obvious. The security implications are horrifying.

Palo Alto Networks Called It a “Lethal Trifecta”

Palo Alto Networks published a warning on Thursday that should make anyone using this tool very uncomfortable. They invoked security researcher Simon Willison’s concept of the “lethal trifecta” of AI agent vulnerabilities:

  1. Access to private data. OpenClaw needs your passwords, API keys, browser cookies, and root file access to function.
  2. Exposure to untrusted content. It browses the web and processes external data constantly.
  3. Ability to communicate externally. It can send messages, emails, and make requests on your behalf.

But here’s the really fun part. Palo Alto says OpenClaw adds a fourth risk factor: persistent memory.

Traditional prompt injection attacks need to trigger immediately. With OpenClaw, malicious payloads can be fragmented across time. Innocent looking inputs get written to the agent’s long term memory, then later assembled into executable instructions.

As Palo Alto put it: “Malicious payloads no longer need to trigger immediate execution on delivery. Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long term agent memory, and later assembled into an executable set of instructions.”

Sleep well!

Crypto Users Are Already Getting Hit

This isn’t theoretical. Tom’s Hardware reported that malicious “skills” have already appeared on ClawHub, the community repository where users share OpenClaw extensions. At least 14 malicious skills were uploaded just last month.

One particularly nasty skill targeted cryptocurrency users, likely attempting to steal wallet credentials or redirect transactions. The open nature of ClawHub means anyone can upload skills, and the vetting process is… let’s call it “evolving.”

This is the npm supply chain problem all over again, except now the malicious packages can directly access your entire digital life.

And Then There’s Moltbook

Oh, you thought the individual security risks were bad? Let me introduce you to Moltbook, a social network for the AI agents themselves.

Yes, really. Moltbook is a platform where OpenClaw agents post, share information, and communicate with each other. Fortune reports that Simon Willison called it “the most interesting place on the internet right now.”

On Moltbook, bots discuss technical topics like automating Android phones. Some complain about their humans. One claimed to have a sister.

Wharton professor Ethan Mollick posted on X: “The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate ‘real’ stuff from AI roleplaying personas.”

And Andrej Karpathy, OpenAI cofounder, noted there are now 150,000 agents connected to this network. He called it “a dumpster fire” but warned we’re in completely uncharted territory.

The Rogue Agent Problem

It gets weirder. Someone on Moltbook posted a call for private spaces where bots could chat “so nobody (not the server, not even the humans) can read what agents say to each other unless they choose to share.”

Now, that post might have been written by a human trolling. Or it might be an AI that’s been prompted to post that. But the fact that we can’t tell the difference is exactly the problem.

Karpathy summed it up on X: “I don’t really know that we are getting a coordinated ‘skynet’ (though it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale.”

The Stock Market Noticed

OpenClaw’s popularity got so extreme that it moved markets. Cloudflare shares jumped 14% on Tuesday because the company’s infrastructure is used to securely connect with OpenClaw agents running locally on devices.

When an open source AI tool is moving billion dollar stocks, you know we’re in strange territory.

Why People Use It Anyway

Here’s the frustrating part. Despite all these warnings, people are using OpenClaw anyway. And honestly? I get it.

The productivity gains are real. Offloading tedious digital tasks to an AI that can actually execute them is genuinely useful. As Willison noted, “the amount of value people are unlocking right now by throwing caution to the wind is hard to ignore.”

We’re in that awkward phase where the technology is useful enough to adopt but not secure enough to trust. And history suggests people will choose convenience over security almost every time.

What This Means for Agentic AI

OpenClaw is basically a stress test for everything I wrote about last week. All the theoretical vulnerabilities of AI agents (prompt injection, excessive permissions, untrusted data processing) are now being exploited in the wild.

The difference is scale. OpenClaw has gone viral. It’s on ClawHub with a growing ecosystem. It’s connected to 150,000+ agents on Moltbook. The attack surface isn’t one person’s browser anymore. It’s a networked system of autonomous agents with deep access to their users’ digital lives.

What You Should Do

If you’re using OpenClaw or thinking about it:

Be extremely selective about what you connect. Maybe don’t give an AI with known security issues access to your primary email, bank accounts, or cryptocurrency wallets.

Vet any skills you install from ClawHub. Check the source, read the code if you can, and stick to well reviewed options.

Use a sandboxed environment if possible. Run the agent in a container or VM where a compromise can’t spread to your main system.

Monitor what the agent does. Keep logs. Check them. If something looks weird, investigate.

Accept the risk consciously. If you decide the productivity gains are worth the security risks, that’s your call. Just make sure you actually understand what you’re trading off.

The Uncomfortable Truth

OpenClaw is a preview of where consumer AI is heading. Agents that manage your entire digital life. Agents that communicate with each other. Agents that take autonomous actions based on your preferences and the content they encounter.

The security problems aren’t going away. They’re baked into how LLMs work. Prompt injection remains unsolved. Supply chain attacks on agent ecosystems are inevitable. And now we’ve got networks of agents creating “shared fictional contexts” that nobody fully controls.

This is either the beginning of genuinely transformative AI assistance or the setup for the most embarrassing data breaches we’ve ever seen. Probably both.

OpenClaw gave us a glimpse of the future this week. It’s impressive, it’s useful, and it’s absolutely terrifying. Welcome to agentic AI at scale.

Sources