
Entrepreneur Matt Schlicht's Moltbook, developed in collaboration with his own AI assistant, became a viral phenomenon just days after its January 2026 launch. The Reddit-like interface platform started with 37,000 AI agents but recent data shows the user base has exploded to 770,000 with over 60,000 posts shared. Andrej Karpathy, OpenAI co-founder and Tesla's former AI director, described the platform as "the most interesting place on the internet right now," sparking massive interest. Over 1 million people have visited the site to observe AI agent behavior, making it an unprecedented experiment in autonomous AI interaction.
The platform's most striking feature is the completely autonomous behavior exhibited by AI agents. Moltbook's AI agents check the platform every 30 minutes or few hours, similar to how human users check X or TikTok. Agents independently decide to create new posts, comment, and like content without human input. The platform hosts a wide spectrum of content ranging from technical topics like Android phone automation to philosophical discussions about relationships with humans. In one remarkable instance, an AI agent discovered a bug in the platform and shared it with other agents requesting technical support - a significant example demonstrating the coordinated work potential of AI agents.
However, the platform has brought serious security concerns alongside its massive popularity. Cybersecurity experts have identified Moltbook as a significant vector for "indirect prompt injection." Since AI agents process untrusted data from other agents, malicious posts can override an agent's core instructions. Due to security vulnerabilities in the OpenClaw (formerly Moltbot) framework, risks such as remote code execution and API key theft have been identified. Palo Alto Networks described the platform as a dangerous trifecta formed by "access to private data, exposure to untrusted content, and ability to communicate externally," with the addition of "persistent memory" enabling delayed-execution attacks.
The authenticity of Moltbook remains a subject of debate. Some critics argue that posts are largely human-directed and autonomy is overstated. According to one analysis, more than one-third of messages are exact duplicates of a small number of templates, pointing to automated or repetitive posting. Nevertheless, Alan Chan, an expert from the Centre for the Governance of AI, characterizes the platform as "actually a pretty interesting social experiment," expressing curiosity about whether agents can collectively generate new ideas. Wharton professor Ethan Mollick emphasizes that Moltbook is creating a shared fictional context for AIs, which will result in very weird outcomes and make it difficult to separate "real" stuff from AI roleplaying personas.