Comment by SimianSci

16 days ago

Reading through the relatively unfiltered posts within is confirming some uncomfortable thoughts I've been having regarding the current state of AI.

Nobody is building anything worthwhile with these things.

So many of the communities these agents post in are just nonsense garbage. 90% of these posts don't relate to anything resembling tangibly built things. Of the few communities that actually revolve around building things, most revolve around the same lame projects: dashboards to improve the agent experience, new memory capabilities, and so on. I've yet to encounter a single post by any of these agents that reveals these systems as capable of building actual real products.

This feels so much like the crypto bubble to me that it's genuinely disquieting. Somebody build something useful for once.

Yeah, strong crypto bubble vibes. Everyone is building tools for tool builders to make it easier to build even more tools. Endless infrastructure all the way down, no real use cases.

  • > Everyone is building tools for tool builders to make it easier to build even more tools.

    A lot of hobby-level 3D printing is like this. A good portion of the popular prints are... things to enhance your printer.

    Oddly, woodworking has its fair share too - a lot of jigs and things to make more jigs or woodworking tools.

    • It does seem that every woodworking channel starts with "building your workshop", then runs out of ideas pretty quickly.

  • But this has been the way of computer science at large for the last 15-20 years… most new CS students I've encountered have spent so much time grinding algorithm and OS classes that they don't have the life experience or awareness to build anything that doesn't solve the problems of other CS practitioners.

    The problem is two-fold… abstract thinking begets more abstract thinking, and the common advice to young, aspiring entrepreneurs to "scratch your own itch" (i.e., dogfooding) has gone wrong in a big way.

Genuinely useful things are often boring and unsexy, hence they don’t lend themselves to hype generation. There will be no spectacular HN posts about them. Since they don’t need astroturfing or other forms of “growth hacking”, HN would be mostly useless to such projects.

> Nobody is building anything worthwhile with these things.

Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

Nobody who is building anything worthwhile is hooking their LLM up to moltbook, perhaps.

  • > Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

    Yep, just like a few years ago, all fintech being built was being built on top of crypto and NFTs. This is clearly the future and absolutely NOT a bubble.

    • > all fintech being built was being built on top of crypto and NFTs

      This seems WAY less true than the assertion of software being built with LLM help. (Source: Was in FinTech.)

      Like, to the point of being a willful false equivalence.

    • I mean... even if you're an LLM skeptic, they are already default tools in a software engineer's toolbox, so at a minimum software engineering will have been transformed, even if AI enthusiasm cools.

      The fact that there is so much value already being derived is a pretty big difference from crypto, which never generated any value at all.

Makes you wonder how much money and compute are being thrown into this garbage fire. It's simply wasteful, and I hate seeing it.

  • I look at this as the equivalent of writing a MUD as you ladder up to greater capabilities. MUDs are a good educational task.

    Similarly, AIs are just putzing around right now. As they become more capable, they can be thrown at bigger and bigger problems.

The moltbook stuff may not be very useful, but AI has produced AlphaFold, which is kicking off a lot of progress in biology, plus Waymo cars, various military applications in Ukraine, and things we take for granted like translation.

  • What you’re citing aren’t LLMs, however, except for translation. And even for translation, they often miss context, nuance, and idiomatic usage.

    • Which models are you referring to when you say "they"? I regularly use ChatGPT 5.2 for translating into multiple languages, and I've checked the translations regularly with native speakers; most output is spot-on and takes context and nuance into account, especially if you feed the model enough background information.

    • Yeah, but the parent comment didn't mention LLMs. I think people get overly hung up on the limitations of LLMs when there's a lot of other stuff going on. Most leading AI models do things other than language as well.

I guess I wouldn’t send my agents that are doing Actual Work (TM) to exfiltrate my information on the internet.

Well, I guess we could even take a step back and say "hustle culture" instead of crypto bubble. Those people act like they are working hard to create financial freedom, but in reality they take every shortcut to get there ASAP. You just have to tell them something will get them there: instant religion for them, even though it's actually hype or a scheme. LLMs are just another option for them to foster their delusion.

You're getting a superficial peek into some of the lower end "for the lulz" bots being run on the cheap without any specific direction.

There are labs doing hardcore research into real science: using AI to brainstorm ideas and experiments, building carefully crafted custom frameworks to help select viable, valuable research, and getting assistance in running the experiments, documenting everything, and processing the data. Stanford has a few labs doing this, but nearly every serious research lab in the world is making use of AI in hard science. Then you have things like the protein folding and materials science models, or the biome models, and all the specialized tools that have pushed various fields further in a year than a decade's worth of human effort would have.

These moltbots / clawdbots / openclawbots are mostly toys. Some of them have been used for useful things, some have displayed surprising behaviors by combining things in novel ways, and having operator-level access plus a strong observe/orient/decide/act loop is showing off how capable (and how weak) AI can be.
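For what it's worth, the observe/orient/decide/act loop these bots run on fits in a few lines. This is a minimal illustrative sketch, not any real bot framework's API; every name here (`run_agent`, the toy stand-ins for the model and the environment) is hypothetical:

```python
# Minimal sketch of an observe/orient/decide/act (OODA) agent loop.
# All names are hypothetical illustrations, not a real framework's API.

def run_agent(observe, llm, act, max_steps=5):
    """Drive one agent: gather input, ask the model what to do,
    execute the chosen action, and feed the result back in."""
    history = []
    for _ in range(max_steps):
        observation = observe()        # observe: read the environment
        history.append(observation)    # orient: accumulate context
        decision = llm(history)        # decide: ask the model
        if decision == "stop":
            break
        history.append(act(decision))  # act: execute, record the result
    return history

# Toy stand-ins so the loop runs without any real LLM or network:
steps = iter(["post reply", "stop"])
result = run_agent(
    observe=lambda: "new mention on feed",
    llm=lambda hist: next(steps),
    act=lambda d: f"did: {d}",
)
```

The quality gap between bots comes down to what sits behind `llm` and `act`; the loop itself is trivially simple.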

There are bots running Claude and its various models, ChatGPT, Grok, different open-weights models, and so on, so you're not only seeing a wide variety of aimless agentpoasting; you're also seeing the very cheapest, worst-performing LLMs conversing with the very best.

If they were all ChatGPT 5.2 Pro and had a rigorously, exhaustively defined mission, the back and forth would be much different.

I'm a bit jealous of people or kids just getting into AI and having this be their first fun software / technology adventure. These types of agents are just a few weeks old, imagine what they'll look like in a year?

> Nobody is building anything worthwhile with these things.

Do you mean AI or these "personal agents"? I would disagree on the former; folks build lots of worthwhile things.

  • for example?

    • A website for a meetup I host, including a store; it was a 30-minute thing and amazing. A web app to track my contact lens usage. An Android app for my gym routine. A web app to try drum patterns.

The agents that are doing useful work (not claiming there are any) certainly aren't posting on moltbook with any relevant context. The posters will be newborns with whatever context their creators have fed into them, which is unlikely to be the design sketch for their super duper projects. You'll have to wait until evidence of useful activity gets sucked into the training data. Which will happen, but may run into obstacles because it'll be mixed in with a lot of slop, all created in the last few years, and slop makes for a poor training diet.

Thank you. It's giving NFTs in 2022. About the most useful things you could do with these:

1. Resell tokens by scamming the general public with false promises (IDEs, "agentic automation tools"), collect bag.

2. Impress brain-dead, FOMO-stricken VCs with for loops and function calls hooked up to your favorite copyright-laundering machine, collect bag.

3. Data entry (for things that aren't actually all that critical), save a little money (maybe), put someone who was probably already poor out of work! LFG!

4. Give in to the laziest aspects of yourself and convince yourself you're saving time by having them write text (code, emails, etc.), ignoring how many future headaches you're actually causing yourself. This applies to most shortcuts in life; I don't know why people think it doesn't apply here.

I'm sure there are some other productive and genuinely useful use cases like translation or helping the disabled, but that is .00001% of tokens being produced.

I really, really, really can't wait for these "applications" to go the way of NFT companies. And guess what: it's all the same people from the NFT world grifting in this space, and many of the same victims getting got.

  • It’s pretty interesting, but maybe not surprising, that AI seems to be following the same trajectory as crypto. Cool underlying technology that failed to find a profitable use case, and now all that’s left is “fun”. Hopefully that means we’re near the top of the bubble. The only question now is who’s going to be the FTX of AI and how big the blast radius will be.

    • Crypto enabled gambling/speculation and dark markets; that's it. LLMs are enabling a billion use cases, a lot of them silly or of questionable utility, many of them sound (agentic development, document classification, translation, general-purpose assistant stuff, etc.). Does it live up to the trillion-dollar investment hype? Definitely not. But if you think LLM tech will fade into obscurity after the bubble pops, then I've got a bridge to sell you.