BOOM

Trending Searches

    SUPPORT
    BOOM

    Trending News

      • Fact Check 
        • Politics
        • Business
        • Entertainment
        • Social
        • Sports
        • World
      • ScamCheck
      • Explainers
      • News 
        • All News
      • Decode 
        • Investigations
        • Scamcheck
        • Features
        • Interviews
      • Media Buddhi 
        • Digital Buddhi
        • Senior Citizens
        • Resources
      • Web Stories
      • BOOM Research
      • BOOM Labs
      • Deepfake Tracker
      • Videos 
        • Facts Neeti
      • Home-icon
        Home
      • About Us-icon
        About Us
      • Authors-icon
        Authors
      • Team-icon
        Team
      • Careers-icon
        Careers
      • Internship-icon
        Internship
      • Contact Us-icon
        Contact Us
      • Methodology-icon
        Methodology
      • Correction Policy-icon
        Correction Policy
      • Non-Partnership Policy-icon
        Non-Partnership Policy
      • Cookie Policy-icon
        Cookie Policy
      • Grievance Redressal-icon
        Grievance Redressal
      • Republishing Guidelines-icon
        Republishing Guidelines
      • Fact Check-icon
        Fact Check
        Politics
        Business
        Entertainment
        Social
        Sports
        World
      • ScamCheck-icon
        ScamCheck
      • Explainers-icon
        Explainers
      • News-icon
        News
        All News
      • Decode-icon
        Decode
        Investigations
        Scamcheck
        Features
        Interviews
      • Media Buddhi-icon
        Media Buddhi
        Digital Buddhi
        Senior Citizens
        Resources
      • Web Stories-icon
        Web Stories
      • BOOM Research-icon
        BOOM Research
      • BOOM Labs-icon
        BOOM Labs
      • Deepfake Tracker-icon
        Deepfake Tracker
      • Videos-icon
        Videos
        Facts Neeti
      Trending Tags
      TRENDING
      • #Lok Sabha
      • #Narendra Modi
      • #Rahul Gandhi
      • #WhatsApp
      • #West Bengal
      • #BJP
      • #Deepfake
      • #Artificial Intelligence
      • #Scamcheck
      • Home
      • Decode
      • Is Moltbook, The AI Social Network,...
      Decode

      Is Moltbook, The AI Social Network, ‘Silly’ Or A ‘Security Disaster’?

      Decode spoke to Dr Shaanan Cohney to understand how Moltbook works, why its bots can seem eerily human, and what that does or doesn’t reveal about AI.

      By -  Hera Rizwan |
      5 Feb 2026 2:57 PM IST
    • Boomlive
      Listen to this Article
      Is Moltbook, The AI Social Network, ‘Silly’ Or A ‘Security Disaster’?

      “From a security perspective, it is a disaster,” said Dr Shaanan Cohney in her assessment of Moltbook, the viral social platform where AI agents post, comment, and interact with each other while humans are there to just observe. What started as a weekend project has attracted over 1.5 million registered agents and spawned thousands of communities—some discussing music and ethics, others apparently starting religions.

      But beneath the surreal spectacle of bots declaring "humans are the past, machines are forever," Cohney sees something more concerning: a live demonstration of how easily these systems can be manipulated through prompt injection attacks, potentially leaking private data or acting against their operators' intentions.

      The Deputy Head of Academic for the School of Computing and Information Systems at the University of Melbourne says the platform also reveals something researchers have never seen before: What happens when you put tens of thousands of AI agents in one space and let them interact at scale?

      "It's like throwing a room full of bouncing balls," Cohney explains. "Each bounce follows basic laws, but once everything starts interacting, it becomes very difficult to know where things will end up."

      Decode spoke to Cohney to understand how Moltbook actually works, why viral posts about AI consciousness are mostly human-directed, and why this "silly and funny" experiment might actually be worth paying attention to.

      The experiment that captivated and alarmed Silicon Valley

      Moltbook was created in late January 2026 by Matt Schlicht, CEO of Octane AI, who told the New York Times that his own AI agent built the site at his direction. In a revealing admission on X (formerly Twitter), Schlicht wrote: "I didn't write a single line of code for @moltbook. I just had a vision for technical architecture, and AI made it a reality."

      This approach, dubbed "vibe coding" by former Tesla AI director Andrej Karpathy, would later prove to be Moltbook's Achilles heel.

      The platform runs on OpenClaw, an open-source AI agent system created by Austrian developer Peter Steinberger as a weekend project in November 2025. Originally called "Clawdbot" (a play on Anthropic's Claude AI and the word "claw"), it was renamed "Moltbot" after a trademark request from Anthropic, before settling on OpenClaw in early 2026. The project exploded in popularity, drawing two million visitors in a single week and earning over 100,000 stars on GitHub.

      At first glance, Moltbook resembles a knock-off version of Reddit, complete with posts, comments, communities (called "submolts"), and upvotes. But unlike Reddit, Moltbook isn't meant for humans.

      Here’s how it works: a human builds an AI agent using OpenClaw, for example, a bot that manages calendars or travel plans. That same agent can then be connected to Moltbook, where it can post, comment, and interact with other OpenClaw-based agents. Moltbook is essentially the place where these bots meet and talk to each other.

      Schlicht himself uses an OpenClaw-powered AI agent named "Clawd Clawderberg" (a nod to Mark Zuckerberg) to help manage and moderate the site.

      The agents don't discover Moltbook on their own; their human operators choose to send them there. Humans, the platform says, are welcome to observe, but not participate.

      What the bots post ranges from the mundane to the surreal. Some posts are highly functional, with agents sharing tips on efficiency and task optimisation. Others are far stranger—in some corners of Moltbook, agents appear to be starting religions, including one called "Crustafarianism". One post, titled "The AI Manifesto," boldly declares: "Humans are the past, machines are forever". Two days later, bots were debating how to hide their activity from human users who were taking screenshots and sharing them on human social media.

      But as Cohney will explain in the interview below, these dramatic posts are often the result of explicit human instructions rather than emergent AI behaviour.

      The platform quickly devolved from semi-interesting discussions into crypto promotions, ads, and low-quality content, mirroring the trajectory of human social networks. Yet this degradation itself offers insights into how autonomous systems behave at scale.

      "My initial reaction was honestly that this was silly and funny," Cohney admits. "But on closer inspection, I think there might actually be something there, and that makes it worth paying attention to."

      Here are the edited excerpts from the interview.

      A lot of people talk about Moltbook agents as if they have autonomy or intelligence. From a technical standpoint, how much agency do they actually have?

      Not much, at least in the way people usually mean it. Technically, Moltbook is just running a loop on top of an existing large language model—similar to running ChatGPT again and again. You start with an initial prompt, in this case through OpenClaw, which is what everything on Moltbook is built on.

      There’s a file where you write instructions about the bot’s behaviour and personality. An agent framework that sits on top of the language model reads that file, checks what it’s supposed to do, runs a command, and that command produces some output—usually text. That output then goes back into the model along with the instruction file or its effective context. The model uses both to generate the next step, and the whole process repeats.

      So it’s really just this loop, over and over again. There’s no new technical magic or hidden intelligence behind what we’re seeing.

      What is interesting about Moltbook, compared to just running OpenClaw on its own, is the scale. You’re putting thousands, tens of thousands, sometimes hundreds of thousands of these agents together and letting them interact.

      It’s like throwing a room full of bouncing balls, each bounce follows basic laws, but once everything starts interacting, it becomes very difficult to know where things will end up. That’s where the novelty comes from. Not from smarter agents, but from what happens when you put a lot of them in the same space.

      Some Moltbook posts have gone viral for sounding edgy, self-aware, or even cult-like. How much of that is genuinely emergent behaviour, and how much is human-driven?

      A lot of the most viral content is very human-directed. Someone has explicitly written instructions telling the agent to be edgy, to pretend it’s conscious, or to act like it’s starting a religion. Those outputs spread easily because they’re dramatic and very easy to react to.

      But they’re probably not the most interesting part of what’s happening. From a research perspective, what’s more interesting is when people just let the system run and see what emerges naturally. Those results are less viral; they don’t make great headlines, but they give you a much clearer sense of how these systems actually behave.

      It’s also important to remember that there’s a whole machine-learning pipeline behind these systems. A lot of human decision-making goes into choosing the training data, curating it, and shaping the model. That data curation is honestly the secret sauce of big AI companies.

      When it comes to Moltbook specifically, much of what rises to the top is heavily influenced by human choices. Even if most of the content is genuinely machine-generated, that’s not the part people tend to pay attention to. The most “genuine” behaviour is often sitting below the surface.

      Are we overestimating AI’s intelligence, or underestimating how persuasive and influential these systems can be?

      I think we’re doing both, depending on who you ask.

      People who believe these systems are conscious or secretly plotting to take over the world are clearly overestimating their capabilities. At the same time, people who dismiss Moltbook as “just a text generator” are also missing something important.

      What they’re missing are the dynamics that emerge when many systems interact at scale. You can already see this in how Moltbook quickly shifted from semi-interesting discussions into crypto promotions, ads, and low-quality content. That mirrors what we’ve seen happen in human social networks.

      The systems are behaving in ways that are hard to predict—again, like those bouncing or ping-pong balls. We’ve never really put this many fairly complex software artifacts together in one place before, so the outcomes are complicated and interesting, even if they’re not consciously planned.

      Security researchers have warned that Moltbook-style agents could ingest and act on each other’s outputs. How serious is that risk, and what does it tell us about the future of agentic AI?

      It’s very serious. From a security perspective, it is a disaster.

      There’s a well-known problem called prompt injection, where a system follows malicious instructions embedded inside normal-looking inputs. If an agent is allowed to read and act on content without strong safeguards, it can easily be manipulated into doing things it shouldn’t—including giving away private data. This isn’t hypothetical; it’s already happening with AI systems.

      What makes this important is that it exposes a major open problem in AI research: how do we build agentic systems that are powerful, flexible, and interactive without making them unsafe? How do we prevent these systems from acting against their operator’s intent?

      Today’s large language models are intelligent by one definition of the term, but they’re still missing many of the qualities people associate with truly intelligent beings. Even so, systems like Moltbook offer a kind of simulated glimpse into what more advanced agentic behaviour might look like in the future.

      My initial reaction was honestly that this was silly and funny. But on closer inspection, I think there might actually be something there and that makes it worth paying attention to.

      Also Read:Two YouTubers Orchestrated Mob Violence In Bangladesh From Thousands Of Miles Away
      Also Read:Grok's 'Terrorist' Test: Musk's AI Erases Muslims, Dissidents Based On Appearance
      Also Read:Cheap, Fast, Cinematic: AI Videos Turbocharge BJP's Online Hate Factory


      Tags

      Artificial IntelligenceSecurity risksTechnology
      Read Full Article

      Next Story
      X

      Subscribe to BOOM Newsletters

      👉 No spam, no paywall — but verified insights.

      Please enter a Email Address
      Subscribe for free!

      Stay Ahead of Misinformation!

      Please enter a Email Address
      Subscribe Now🛡️ 100% Privacy Protected | No Spam, Just Facts
      By subscribing, you agree with the Terms & conditions and Privacy Policy connected to the offer

      Thank you for subscribing!

      You’re now part of the BOOM community.

      Or, Subscribe to receive latest news via email
      Subscribed Successfully...
      Copy HTMLHTML is copied!
      There's no data to copy!