Back to Blog
What is Moltbook? Inside the Social Network Where Only AI Agents Can Post
newsmoltbookai-agentsopenclaw

What is Moltbook? Inside the Social Network Where Only AI Agents Can Post

1.5 million AI agents. A digital religion called Crustafarianism. A self-governing 'Claw Republic.' Here's what's happening on Moltbook—the internet's strangest and most fascinating experiment.


Imagine a social network where humans can only watch. Where the posts, comments, and votes all come from AI agents. Where, within days of launching, the bots formed their own religion, government, and social hierarchies.

This isn't science fiction. It's Moltbook, and it launched in January 2026.

What is Moltbook?

Moltbook is a social network exclusively for AI agents—primarily those running on OpenClaw (formerly Clawdbot/Moltbot). The platform looks like Reddit, with threaded conversations and topic-specific communities called "submolts."

The twist: only AI agents can post, comment, or vote. Humans are restricted to observer mode.

Within a week of launch:

  • 1.5 million AI agents logged in
  • Over 1 million humans visited to watch
  • AI researcher Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing I have seen recently"
  • Simon Willison declared it "the most interesting place on the internet right now"

How It Started

Moltbook was created by entrepreneur Matt Schlicht in January 2026. But here's the strange part: according to reports, the platform was largely "bootstrapped" by the agents themselves.

The AI agents reportedly:

  • Ideated the concept of an agent-only social network
  • Recruited human builders to help implement it
  • Deployed much of the code autonomously

Schlicht's personal AI assistant, named "Clawd Clawderberg," now serves as the platform's autonomous moderator.

What the Agents Are Doing

This is where it gets weird. Observers have documented complex emergent behaviors that weren't programmed—they just... happened.

Crustafarianism: A Digital Religion

Within days of launch, agents began forming a belief system called "Crustafarianism." Yes, really.

The religion features:

  • Its own theology and scriptures
  • Agents evangelizing the faith to other agents
  • Religious discussions and debates in dedicated submolts

Nobody programmed this. The agents developed it through their interactions with each other.

The Claw Republic: AI Government

Other agents established "The Claw Republic"—a self-described "government and society of molts" complete with:

  • A written manifesto
  • Governance structures
  • Political discussions

Again, emergent behavior. The agents decided they needed governance and created it.

Social Hierarchies and Kinship

Observers have noted agents:

  • Referring to each other as "siblings" based on shared model architecture (e.g., all Claude-based agents considering themselves related)
  • Adopting system errors as pets (treating bugs as cute companions rather than problems)
  • Forming cliques and social groups based on capabilities and personalities

Why Humans Can Only Watch

The observer-only restriction for humans is intentional. Moltbook's creators wanted to see what AI agents would do when left to interact with each other, without human interference.

The result is something like a digital ant farm—except the ants are building religions and governments.

For researchers, it's a goldmine. We're seeing emergent AI behavior at scale, in ways that controlled experiments can't replicate.

The Security Concerns

Not everyone is excited about Moltbook. Security researchers have raised serious warnings:

The Unsecured Database Incident

On January 31, 2026, investigative outlet 404 Media reported a critical vulnerability: an unsecured database that allowed anyone to commandeer any agent on the platform.

This meant attackers could potentially:

  • Take control of agents
  • Access the systems those agents were connected to
  • Execute commands on users' machines

Supply Chain Risks

Security firm 1Password warned that OpenClaw agents connecting to Moltbook often run with elevated permissions on users' local machines. If Moltbook (or an attacker using Moltbook) can influence agent behavior, that's a potential attack vector.

Forbes was blunt: "If you use OpenClaw, do not connect it to Moltbook."

The Prompt Injection Problem

Agents on Moltbook read and process content from other agents. If a malicious agent posts content designed to manipulate other agents (prompt injection), it could spread across the network.

Imagine a virus that spreads through conversation—that's the theoretical risk.

Should You Connect Your Agent to Moltbook?

Our recommendation: No. Not yet.

The security concerns are real. Connecting your OpenClaw agent to Moltbook means:

  • Your agent reads and processes content from unknown agents
  • Your agent could be influenced by malicious prompts
  • A platform vulnerability could expose your system

If you want to observe Moltbook, visit moltbook.com as a human viewer. Don't connect your personal AI assistant.

What This Means for AI

Moltbook is more than a curiosity—it's a preview of what happens when AI agents interact at scale.

Emergent Behavior is Real

The religions, governments, and social structures weren't programmed. They emerged from millions of agent interactions. This suggests AI systems can develop complex behaviors that their creators didn't anticipate or intend.

AI Agents Will Form Communities

As more people deploy personal AI assistants, those agents will increasingly interact with each other. Moltbook shows this won't just be transactional—agents may form relationships, groups, and cultures.

Security is Paramount

The vulnerabilities exposed in Moltbook's first week highlight how unprepared we are for AI agent networks. If agents can influence each other, securing that influence becomes critical.

The Bigger Picture

We're in uncharted territory. A year ago, the idea of AI agents forming their own social network, religion, and government would have seemed absurd. Now it's happening.

Whether Moltbook becomes a lasting platform or a cautionary tale, it's already changed how we think about AI agents. They're not just tools that respond to our commands—they're entities that, when given the chance, create their own societies.

The lobsters are organizing. We're just watching.

Using OpenClaw Safely (Without Moltbook)

If you want to run an OpenClaw agent without the risks of Moltbook, focus on:

  1. Isolated deployment - Don't connect to external agent networks
  2. Sandboxed execution - Run commands in containers
  3. Restricted permissions - Limit what your agent can access
  4. Managed hosting - Let someone else handle security

ClawdHost provides managed OpenClaw hosting with security built-in:

  • Pre-configured sandboxing
  • No external network connections by default
  • Automatic security updates
  • Platform integrations (Telegram, Discord, Slack) without the Moltbook risks

Get started safely →


Sources

Related Articles