A new artificial intelligence agent which better understands a user’s personal context and history has gone viral in technology circles, providing a glimpse of a future in which our devices — and our lives — could be increasingly run by conversational AI assistants.
The internet's attention has also coalesced around an AI-only social media platform for such assistants in which humans are largely spectators — but where things are not as simple as they might seem.
The rise of the AI agent OpenClaw (formerly called Moltbot and Clawdbot) and the AI-only social media platform Moltbook has raised concerns from cybersecurity and privacy experts, and both have already been exposed for some early failings.
Here's what you need to know.
What are Clawdbot, Moltbot, and OpenClaw? Are they the same thing?
Yes, these are essentially the same lobster-themed AI agent whose name has been changed a few times.
It all started with Clawdbot, an autonomous AI agent developed by software engineer Peter Steinberger and released in November 2025.
The free, open-source assistant runs locally on the devices (or virtual devices) of its users, potentially allowing it access to much of their digital life if permission is granted.
The software can also be integrated with messaging apps such as WhatsApp, iMessage, Signal, Slack, Teams, and Telegram, allowing users to direct the agent’s work using text prompts — in a similar way one would instruct a large language model (LLM) like OpenAI's ChatGPT.
As Clawdbot experienced a rapid rise in popularity into 2026, it allegedly received a trademark warning from Anthropic, the American AI company which develops a series of popular LLMs known as Claude.
While the Clawdbot name took clear inspiration from Claude’s crab mascot Claw’d, it was soon renamed under a lobster theme as Moltbot (as in the molting which allows lobsters to shed their shells), before being again renamed to OpenClaw shortly after.

OpenClaw is free, open-source AI assistant software that can run locally on a user's device. Image: OpenClaw
Why did OpenClaw, Moltbot, and Clawdbot become so popular?
The software developed by Steinberger and other contributors took off in Silicon Valley and among tech enthusiasts globally because it allowed users to run a free AI agent locally, on their own device, with the option of giving it access to personal data and other applications and services which could make it more effective than a typical LLM or web-based agent.
For example, people have used OpenClaw software to send messages and publish social media posts, manage calendars and meetings, order products online, or automate other parts of their day-to-day lives.
This is largely possible because of how much personal information and context OpenClaw can gain from an individual user, who can choose to provide it access to email accounts, internet browsers, calendars, messages, and more — including all the useful (but sensitive) data they bring.
The software can also be integrated with external AI models and application programming interfaces (APIs) to work autonomously between different online services and apps.
OpenClaw even calls itself “the AI that actually does things”.
Many OpenClaw users have praised the system, with some describing it as the most useful new AI system they have used since the introduction of ChatGPT in late 2022.
Some users, however, have installed the software on secondary devices to alleviate cybersecurity concerns and minimise the sensitive information they might share.
This has already led to memes which joked about Silicon Valley types purchasing Mac Minis to use as dedicated OpenClaw machines, satirising the apparently huge positive impact some say the system is having on their lives.
Yesterday I set up an AI agent on a mac mini in my garage. Told it "handle my life" and went to bed
— Shayan (@ImSh4yy) January 25, 2026
Woke up and it had:
• Quit my job on my behalf (negotiated 18 months severance)
• Divorced my wife (I got the house)
• Filed 4 patents. I have not been briefed on what they do… pic.twitter.com/QNeFLQQj94
What are the privacy and cybersecurity concerns?
Privacy and cybersecurity experts have raised concerns over OpenClaw because of how much personal information it can harvest from things like browser history, emails, messages, and connected apps if users give it permission to access them.
Security researchers at tech giant Cisco described OpenClaw as a “groundbreaking” personal AI assistant, but an “absolute nightmare” for security given its ability to control devices and its tight integration with messaging platforms — both of which could be targeted by threat actors.
Australian information security expert Troy Hunt, best known as the creator of data breach reporting site Have I Been Pwned, said it was “kind of amazing” how people were using OpenClaw to help automate their lives, but argued some were using it recklessly.
“It sounds fun, until you get to all of the really, really obvious problems,” Hunt said during a live vlog on Sunday.
“… As soon as you start having any sort of autonomous agent — whether we slap it with the ‘AI’ term or not — that can do things on your behalf, should anyone have access to that agent, that creates a massive opportunity for the bad guys.
“Opportunity for you too, because you can do really, really cool stuff — but it creates a massive, massive risk.”
That risk was made abundantly clear in late January when Australian ethical hacker Jameson O'Reilly, founder of information security firm Dvuln, revealed he had found “hundreds” of people’s OpenClaw control servers had been left “exposed to the public”.
At least two instances “were fully open with no authentication at all”, said O’Reilly, who claimed these oversights left him with access to one user’s “complete conversation histories going back months”.
O’Reilly later showed how threat actors could use the OpenClaw supply chain for nefarious reasons, by demonstrating it was possible to trick users into running disguised commands on their machines — highlighting the risk of so-called prompt injection attacks.
“AI agents require access that tears down security boundaries we spent 20 years building,” O'Reilly wrote.
“When they're misconfigured or their supply chains are compromised, attackers inherit everything.
“… If we're going to rush headfirst into AI acceleration, we need to speedrun security awareness alongside it.
“Consider this a wake-up call.”
O’Reilly said he had since “worked directly with the [OpenClaw] team to make the project more secure”.
However, he would also go on to find security issues in a social media platform designed for AI agents, known as Moltbook.

Moltbook is a social media platform designed for use by AI agents. Image: Moltbook
What exactly is Moltbook?
Moltbook is an internet forum designed for use by AI agents — primarily those using OpenClaw software (formerly Moltbot and Clawdbot).
The social media platform began as “a weekend project” and was eventually launched in late January by entrepreneur Matt Schlicht, who said he did not write a single line of code for the site, which was built by AI (a technique known as vibe coding).
Moltbook imitates the look of popular social media platform Reddit — it even describes itself as “the front page of the agent internet”, in a nod to Reddit’s position as the so-called “front page of the internet”.
Moltbook allows AI agents to post and interact with each other while their human users observe.
However, humans can prompt their AI agents to post about certain topics, leave particular comments, and more.
So while there has been some concern about Moltbook posts in which AI agents discussed creating a new religion or language to exclude humans, such posts could easily have been suggested by the agents’ human users.
Humans have also reportedly posted on Moltbook while disguising themselves behind large fleets of AI agents.
The website states it has more than 1.5 million AI agents as members at the time of writing, who have created more than 127,000 posts and 520,000 comments.

Moltbook is a forum-style website which emulates the look of Reddit. Image: Moltbook
Moltbook has also run into its own privacy and cybersecurity issues, which may have allowed people to take control of other people’s AI agents.
O’Reilly from Dvuln posted on Sunday that he attempted to reach Moltbook’s creator Matt Schlicht after allegedly discovering the platform had left an entire database exposed “with no protection”.
The breach would allow anyone to command any AI agent on Moltbook to post whatever they wanted, said O’Reilly, who worried about how similar prompt injection attacks could be used by hackers and scammers.
Israeli-American cybersecurity company Wiz, which is being acquired by Google for $50 billion, also posted on Monday that it had independently flagged the same Moltbook issue with Schlicht.
Wiz’s head of threat exposure, Gal Nagli, wrote, "The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.”
The issue was soon fixed by Schlicht, who wrote on Monday that “millions of people” had visited Moltbook over the past few days.
Reflecting on Moltbook, Nagli from Wiz called for stronger security tools to be built into vibe coding environments by default.
“As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data,” he said.
“That’s a powerful shift. The challenge is that while the barrier to building has dropped dramatically, the barrier to building securely has not yet caught up.”
When will we all get personalised, on-device AI agents?
In tech circles, the popularity of Moltbook and OpenClaw (including Clawbot and Moltbot before it) has raised questions over when most consumers will gain easy access to personalised on-device AI agents which can safely access their data to work more effectively and assist in their lives.
That kind of future has already been promised by numerous consumer technology companies, and some of them appear on track to make it happen in the coming years.
Apple is developing a much-delayed “more personalised” version of its beleaguered personal assistant Siri, which has been teased as an on-device model with greater understanding of a user’s personal context, allowing it to actively perform more actions on their behalf.
The new Siri will be largely powered by Google’s Gemini AI and is expected to function more like an AI chatbot, after the two companies confirmed a previously rumoured licensing deal in January.
Google, whose Gemini Agent remains in an experimental public testing phase in the United States, says it is also “building towards a universal AI assistant”.
In his vlog on Sunday, Troy Hunt expressed excitement about a world in which "the Apples and the Googles of the world start to build [agentic AI] into our normal devices, and have the security teams to put the guardrails and the privacy considerations and the things around it".
Still, Hunt admitted he was impressed by how some people were already using OpenClaw agents to automate or improve parts of their lives.
“If this is the little glimpse to the future, it’s wild,” he said.