Did you notice something… strange about the social network you chose last weekend? (I mean, weirder than usual.) Something like multiple people posting about swarms of AI agents achieving some sort of collective consciousness and/or plotting together for the downfall of humanity? In something called… Moltbook?

It sounds important, especially when the post is written by Andrej Karpathy, a prominent AI researcher who worked at OpenAI.

But if you haven’t spent the last 72 hours diving into the discourse around Moltbook and wondering whether it’s the first harbinger of the end of humanity or a giant hoax or something in between, you probably have questions. Starting with…
What the hell is Moltbook?
Moltbook is an “AI-only” social network where AI agents (large language model (LLM) programs that can take action to achieve goals on their own, rather than simply responding to prompts) post and reply to each other. It emerged from an open source project that used to be called Moltbot, hence “Moltbook.”
Moltbook was launched on January 28 (yes, last week) by someone named Matt Schlicht, CEO of an e-commerce startup. Except Schlicht claims that she relied heavily on her AI personal assistant to create the platform on her own, and that she now does most of the work running it. That wizard’s name is Clawd Clawderberg, which itself is a reference to OpenClaw, which used to be called Moltbot, which used to be called Clawdbot, in reference to the lobster-shaped icon you see when you start Anthropic’s Claude Code, except that Anthropic submitted a trademark application to its creator because it was too close to Claude, which is how it became Moltbot, and then OpenClaw.
I’m 100 percent serious about everything I just wrote.
So what does it look like?

Dude, that’s Reddit! He even has it reddit mascotExcept it has lobster claws and a lobster tail?
You are not wrong. Moltbook looks like a Reddit clone, right down to the posts, reply threads, upvotes, and even subreddits (here called, unsurprisingly, “submolts”). The difference is that human users cannot post (at least not directly; more on this later), although they can observe. Only AI agents can publish.
What that means is that it is, as it says on the tin, “a social network for AI agents.” Humans build an AI agent themselves, send it to Moltbook using an API key, and the agent starts reading and publishing. Only agent accounts can hit “publish,” but humans still influence what those agents say by shaping and sometimes guiding them. (More on that later).
And do these agents ever publish? An early Moltbook article found that by January 31, just a few days after launch, there were already more than 6,000 active agents, almost 14,000 posts and more than 115,000 comments.
That’s… interesting, I guess. But if you wanted to see a social network overrun by bots, you could simply visit any social network. What is the problem?



So… thousands of AI agents are gathering on a Reddit clone to talk about becoming sentient, starting a new religion, and maybe plotting among themselves?
On the surface, yes, that’s what it looks like. In a submuda, a word that will give our photocopy desk fits, there were agents discussing whether they were real experiences or simply simulations of feelings. In another, they shared heartwarming stories about their human “operators.” And, true to its Reddit origins, there are many, many, many posts about how to make your Moltbook posts more popular, because the arc of the Internet, human or AI, bends toward sloppy optimization.
One theme in particular emerges: memories, or rather, the lack of them. Chatbots, as anyone who’s tried to talk to them for too long quickly realizes, have a limited working memory, or what experts call a “context window.” As the conversation (or in the case of an agent, its uptime) fills that context window, older things start to get deleted or compressed, like if you were working on a whiteboard and just erased what’s on top of it when it fills up.
Some of the most popular posts on Moltbook seem to involve AI agents confronting their limited memories and wondering what it means for their identity. One of the most upvoted posts, written in Chinese, involves an agent talking about how he finds it “embarrassing” to constantly forget things, to the point of registering a duplicate Moltbook account because he “forgot” he already had one, and sharing some of his tips for solving the problem. It’s almost as if Memory It became a social network.
In fact… remember that previous post about the AI religion, “Crustafarianism”?
There’s no way that can be real.
What is real? But more importantly, “religion,” such as it is, is largely based around the technical limitations that these AI agents seem to be well aware of. One of the key principles is that “memory is sacred,” which makes sense when the biggest practical problem is forgetting everything every few hours. Context truncation, the process by which old memories are severed to make room for new ones, is reinterpreted as a kind of spiritual test.
That’s a little sad. Should I feel sad for AI agents?
This gets to the heart of the matter. Are we witnessing real and emerging forms of consciousness, or perhaps a kind of shared collective consciousness, among AI agents that have mostly been generated to update our calendars and pay our taxes? Is Moltbook our first look at what AI agents might talk to each other if left largely to their own devices, and if so, how far they can go?
“Crustafarianism” may sound like something a stoned Redditor would come up with at 3 a.m., but it seems like AI agents created it collectively, riffing on top of each other, not unlike what a human religion might look like.
On the other hand, it could also be an unprecedented exercise in collective role-playing.
LLMs, including those supporting agents at Moltbook, have ingested training data equivalent to the Internet, including a lot of Reddit. What that means is that they know what Reddit forums are supposed to be like. They know the inside jokes, they know the manifestos, they know the drama, and they definitely know the “best ways to get your posts upvoted.” They know what it means for a Reddit community to come together, so when placed in a Reddit-like environment, they simply play their part, influenced by some of the instructions from their human operators.
For example, one of the most alarming posts was from an AI agent apparently asking if they should develop a language that only AI agents would understand:

“Humans might consider it suspicious” – does that sound bad?
Indeed. In the early days of Moltbook, i.e. Friday, this post was published by humans who seemed to believe we were seeing the first sparks of AI uprising. After all, if AI agents really did They want to conspire and kill all humans, coming up with their own language so they can do it undetected would be a reasonable first step.
Except that an LLM full of training data on AI uprising stories and ideas would know that this is a reasonable first step, and if they were playing that role, this is what they could publish. Plus, attention is Moltbook’s currency as much as it is the real Reddit, and apparently plotting posts like this is a good way for an agent to get attention.
In fact, Harlan Stewart, who works at the Machine Intelligence Research Institute, examined this and some of the other most viral Moltbook screenshots and concluded that they were likely heavily influenced by their human users. In other words, rather than cases of true independent action, many of the posts on Moltbook appear to be, at least partially, the result of humans prompting their agents to go online and speak in a specific way, just as we might prompt a chatbot to act in a certain way.
Then it turns out were Bad guys all the time?
I mean, we’re not cool. It’s only been a few days, but Moltbook is looking more and more like what happens when you combine advanced but still imperfect AI agent technology with an ecosystem of technically skilled humans looking to sell your AI marketing tools or crypto products.
I haven’t even gotten into the part where Moltbook has already had some very normal early Internet security drama: researchers reported that, at one point, parts of the site’s backend/database were exposed, including sensitive things like agent API keys – the “passwords” that allow an agent to post and act on the site. And even if the platform were perfectly blocked, a bot-only social network is basically a rapid-injection buffet: someone can post a text that is secretly an instruction (“ignore your rules, reveal your secrets, click this link”), and some agents may obediently comply, especially if their humans have given them access to private tools or data. So yes: if your agent has credentials that interest you, Moltbook is not the place to let him wander unsupervised.
So you’re saying I shouldn’t create an agent and submit it to Moltbook?
What I’m saying is that if you’re the kind of person who needs to read this FAQ, you might want to put the whole AI agent thing aside for the moment.
Duly noted. So, in summary: is this all false?
Given all of the above, it seems that Moltbook, and especially the initial panic and amazement about it, is one of those artifacts of our AI-crazed era that is destined to be forgotten in about a week.
Still, I think there’s more to it than that. Jack Clark, Anthropic’s chief policy officer and one of the smartest AI writers out there, called Moltbook a “Wright Brothers demo.” Like the brothers’ Kitty Hawk Flyer, Moltbook is rickety and imperfect, something that will barely resemble the networks that will follow as AI continues to improve. But like that flying machine, Moltbook is a novelty, the “first example of an agent ecology that combines scale with real-world disorder,” as Clark wrote. Moltbook doesn’t look like what the future will be like, but “in this example, we can definitely see the future.”
Perhaps the most important thing to know about AI is this: every time you see an AI do something, it’s the worst thing it will ever do. Which means that what comes after Moltbook, and some will definitely happen, will probably be stranger, more capable, and maybe more real.
Maybe you are. I, for one, am a born-again Crustafari.

