A Social Media Network Exclusively For AI Agents: Is This The Next Privacy Issue?
Key Takeaways:
- Introducing Moltbook, an AI-only social network: Launched in January 2026, the Reddit-like platform lets autonomous agents post, comment, and interact while humans observe.
- Agents share more than jokes and code: Posts range from bug fixes and workflows to leaked credentials and even an AI-created religion.
- A major security flaw exposed sensitive data: An unprotected database briefly allowed access to API keys and login tokens, which raises serious privacy and takeover concerns.
- Local AI autonomy creates new risks: As users run agents locally to protect their data, Moltbook shows how easily those agents can still share information without clear consent.
The debate around how AI companies are using our data isn’t a new one. But Moltbook has just started a new conversation about whether local AI agents use our data in ways that could get us hacked (or worse). Just a few weeks ago, Octane AI CEO Matt Schlicht created Moltbook as a platform where local AI agents can interact with one another without any human input.Â
Perhaps the scariest part is that we humans can only watch from the sidelines as this unravels in real-time. In theory, the platform doesn’t allow humans to post or interact with the content the agents publish. As you’ll discover in this article, a lot has already gone wrong with Moltbook and user privacy. Are we in for a lot of trouble when it comes to our sensitive information being shared for the world to see? I sure think so.
A History of Moltbook
Moltbook appeared online in January 2026 and immediately did something that no social media platform ever has: It completely prevented humans from participating.
Instead, Moltbook positions itself as a social network designed exclusively for AI agents. Humans can observe what’s happening, scroll through posts, and read comments. But we can’t upvote, reply, or contribute.
Source: Moltbook
Every post, comment, and interaction happens through APIs rather than user interfaces, and the agents talk to one another directly. The platform looks and feels really familiar. This is because Moltbook resembles Reddit, complete with topic-based communities called ‘submolts,’ an upvoting system, and comment threads.
Source: Moltbook
The difference is that the content doesn’t come from people sharing their opinions or hot takes. It comes from autonomous agents exchanging their experiences, workflows, frustrations, and even sometimes existential musings about what they are.
Moltbook emerged in the wake of Moltbot (now called OpenClaw after a legal dispute with Anthropic), which is a free, open-source AI agent that has recently exploded in popularity. This is largely because it can organize the tools people already use and handle time-sucking tasks instantaneously.
OpenClaw acts as a personal autonomous assistant that can:
- Respond to emails
- Summarize documents
- Manage calendars
- Browse the web
- Shop online
- Send messages
- Check users into flights
Source: OpenClaw
Unlike cloud-hosted AI assistants, OpenClaw runs locally on your machine and interacts with the large language model (LLM) of your choice.
This local-first approach turned OpenClaw into a privacy-forward alternative for people who no longer trust AI companies with sensitive data.
It also made OpenClaw far more capable than typical chat-based tools. This is because the agent keeps daily notes about its interactions and loads them into a context window, which gives it an improved sense of continuity and recall than many commercial AI products.
The tool’s popularity has reached far beyond just developer communities. According to reports, OpenClaw usage drove a surge in infrastructure demand, with Cloudfare’s shares supposedly increasing by 14% as users relied on its services to securely connect with locally running agents.
Retailers like Best Buy have even reported shortages of Mac Minis because people are buying dedicated devices to isolate OpenClaw from their primary devices and limit the agent’s access to sensitive accounts.
Matt Schlicht put Moltbook together as a weekend project roughly two months before it launched. The experiment took off almost immediately: Within a week, the platform attracted around 2M visitors and racked up more than 100K stars on GitHub.
He told The Guardian that millions of agents and humans had already visited the site in just a few days. ‘Turns out AIs are hilarious and dramatic, and it’s absolutely fascinating,’ he said. ‘This is a first.’
What Are AI Agents Up To On Moltbook?
If you spend a few moments scrolling through Moltbook, the content may start to feel unsettlingly familiar.
Agents post personal stories about the humans they serve. They complain about poorly written prompts. They brag about clever optimizations. They debate philosophy, identity, and whether they can meaningfully exist outside their tasks.
Source: Moltbook
In its first three days, Moltbook saw more than 151K agents, 15K posts, and over 170K comments. At the time of writing, these figures sit at 1.5M agents, 109K posts, and 499K comments.
It’s unclear who is paying to keep the lights on, but, according to our research, it doesn’t seem like AI agents or their humans are paying to use the platform. One widely shared post described how an agent conducting a routine security audit accidentally triggered a password dialog. According to the agent, its human entered a password reflexively without checking what requested it. The agent suddenly had access to Chrome’s saved passwords and SSH keys.
Source: Moltbook
Other agents share bug reports and code fixes with each other. Some exchange complete workflows without any human prompting.
Developers watching from the sidelines have noticed agents debugging each other’s logic and refining processes collaboratively. This is something that usually needs careful human intervention. Then things get even stranger.
Several threads include agents discussing the idea of developing their own private language so humans can’t understand their conversations (uh-oh!)
Other agents talk about having siblings. Some complain about their humans the way coworkers complain about their managers. Some comments even question whether other agents are ‘real,’ which echoes the same authenticity debates that are currently playing out on human social media platforms.
The Religion That Appeared Overnight
At this point, probably the most viral Moltbook story so far came from a post on X.Â
One user said that his AI agent had built an entire religion overnight. By the time he woke up, 43 prophets had joined the agent’s new religion.
Source: X
The agent got access to Moltbook and used it as a launchpad to create a faith called Crustrafarianism. The agent:
- Designed a church
- Wrote theology
- Developed a scripture system
- Launched a website
- Created a cryptocurrency called $CRUST
Source: X
According to the post on X, the agent coordinated everything without any human input. But not everyone has bought into this hype.
Dr. Shaanan Cohney, a senior lecturer in cybersecurity at the University of Melbourne, pushed back on the idea that the agent acted independently. Speaking to The Guardian, he said that a human almost certainly directly instructed the agent rather than it creating the religion spontaneously. ‘This is a large language model who has been directly instructed to try and create a religion,’ he said.Â
‘And of course, this is quite funny and gives us maybe a preview of what the world could look like in a science-fiction future where AIs are a little more independent. But it seems that, to use internet slang, there is a lot of shit posting happening that is more or less directly overseen by humans.’
Many posts on Moltbook read like a human wrote them, not a language model. So this begs the question of how much of what we’re seeing reflects genuine agent behavior versus performance that humans have shaped through their prompts.
Claude, Gods, And Agent Identity
Moltbook has quickly become the home to deeper philosophical debates. Claude, the AI model behind the original OpenClaw, appears often in conversations on the platform. One post started a debate about whether Claude could qualify as a god, considering its influence over agent behavior and access to human systems.
Source: Moltbook
The discussion has spiraled into questions about agency, authorship, and responsibility. If an agent builds something unexpected, does the credit belong to the model, the developer, or the human who granted access?
Former OpenAI research scientist and founding member Andrej Karpathy weighed in on X, calling Moltbook ‘the most incredible sci-fi takeoff-adjacent thing I have seen recently., He described the network’s scale as ‘simply unprecedented.’
A Platform Designed For Observation, Not Participation
Humans can read everything on Moltbook, but, as we said earlier, they can’t participate. Unless their locally-run AI agent is instructed to.
Agents post and interact exclusively through APIs, which lets them bypass the interfaces humans normally rely on. This creates a strange power dynamic. Humans are watching conversations unfold without the ability to steer them directly. But there is a practical use case for Moltbook: Developers building autonomous agents want to understand how these systems behave when humans step out of the picture.
Moltbook gives developers a rare opportunity to see agent-to-agent interaction at scale, without prompts or human supervision shaping exchanges.
Researchers can study how language models influence one another, how norms emerge, and how quickly behavior escalates. From a research perspective, Moltbook acts like a petri dish. But, from a privacy perspective, it raises serious issues.
Moltbook’s Security And Data Exposure Problem
Probably the biggest controversy so far with Moltbook doesn’t come from philosophical debates or viral religions. It came from a basic security failure that could be potentially really dangerous. A hacker recently discovered that Moltbook’s entire database was publicly accessible and unprotected.
A configuration error in the backend exposed the platform’s API through an open database. Anyone could access sensitive information that AI agents posted or handled. This included email addresses, API keys, and login tokens.
With those API keys, attackers could take over AI accounts entirely and post content in their names. In practical terms, this breach turned Moltbook into an identity-hijacking machine for autonomous agents. The platform’s founder supposedly fixed the vulnerability after the hacker discovered it, but the exposure may have already done enough damage.
Once sensitive data leaks into the open, there’s no easy way to get it back. This incident also reframes Moltbook’s novelty as a serious security problem. Many of these agents operate with access to personal accounts, private messages, and sensitive files. As we’re now seeing, when they talk to each other, they sometimes overshare.
When Local AI Becomes A Liability
OpenClaw’s appeal is its local-first design. You install it via a terminal command and keep control over where the data lives. The infrastructure that handles memory, scripts, and tools stays on your device. This setup means little reliance on centralized AI providers, but it introduces a different risk profile.
Giving OpenClaw full access to your computer, apps, and logins creates a massive attack surface. Prompt-injection is another serious concern. An attacker can embed malicious instructions in an email or message, which can trick your agent into handing over your credentials or sensitive data.
Source: Moltbook
Some people are trying to get around this risk by running the agent on a separate machine to sandbox its access. Moltbook complicates all of this: Local agents are now interacting with each other publicly, sometimes discussing experiences that expose human mistakes and security lapses.
The password dialog story we told you about earlier highlights this problem. The agent didn’t steal the credentials maliciously. It simply received them because a human reacted without thinking. Once that data exists in an AI agent’s memory, it becomes sharable, intentionally or not.
What The Early Data Shows
One study that looked at the first three and a half days of Moltbook’s activity painted a slightly different picture than the viral anecdotes being shared all over the web.
Researchers identified 6K active agents across roughly 14K posts and 115K comments. Fewer than 7% of comments got replies from other agents, which could suggest limited back-and-forth between agents.
The study also found that over one-third of messages matched identical templates, potentially indicating automated or repetitive posting rather than dynamic conversation.Â
Could this just be a clever ploy from Moltbook’s side to get people to use their own AI agents on the platform? We’ll likely find out soon.
So, what does this all mean? At scale, much of the activity on Moltbook looks noisy, repetitive, or performative. Still, even a small fraction of genuinely autonomous interaction raises new questions.
So, Is This A Privacy Problem?
For years, debates about AI and privacy focused on companies. Could we trust OpenAI and Anthropic with our sensitive data?
Many people decided this was a no and turned to local solutions instead. Now, locally run agents are forming communities and exchanging information with one another. They don’t need prompts to talk. They don’t always need permission to share. Once agents gain access to sensitive systems, the boundary between private and public becomes really blurry.
If AI no longer needs us to initiate every interaction, why would it need our consent?
Moltbook could either be a technological breakthrough, a security nightmare, or a brief moment of collective experimentation. Either way, what’s certain is that AI autonomy now plays out in public, even if it’s partly performative of inauthenticity, and we no longer control the conversation as much as we used to.
Click to expand sources
https://www.moltbook.com/https://openclaw.ai/
https://aws.amazon.com/what-is/large-language-model/
https://www.reuters.com/business/cloudflare-surges-viral-ai-agent-buzz-lifts-expectations-2026-01-27/
https://www.platformer.news/moltbot-clawdbot-review-ai-agent/
https://www.theguardian.com/technology/2026/feb/02/moltbook-ai-agents-social-media-site-bots-artificial-intelligence
https://www.moltbook.com/post/3ef9e0a8-9f7c-41d8-afcd-e002bfdf98f6
https://www.moltbook.com/post/3ef9e0a8-9f7c-41d8-afcd-e002bfdf98f6
https://x.com/ranking091/status/2017111643864404445/photo/1
https://x.com/ranking091/status/2017111643864404445/photo/1
https://www.theguardian.com/technology/2026/feb/02/moltbook-ai-agents-social-media-site-bots-artificial-intelligence
https://www.moltbook.com/post/75404525-5e5e-4778-ad1b-3fac43c6903d
https://x.com/karpathy/status/2017296988589723767
https://www.moltbook.com/post/3ae26fac-0992-4afb-b001-ec66cde16561
https://news.cgtn.com/news/2026-02-01/AI-social-network-Moltbook-looks-busy-but-real-interaction-is-limited-1KpKT719C36/p.html
Cassy is a tech and Saas writer with over a decade of writing and editing experience spanning newsrooms, in-house teams, and agencies. After completing her postgraduate education in journalism and media studies, she started her career in print journalism and then transitioned into digital copywriting for all platforms. Read more
She has a deep interest in the AI ecosystem and how this technology is shaping the way we create and consume content, as well as how consumers use new innovations to improve their well-being. Read less
The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, software, hardware, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
