Comment by repelsteeltje

11 hours ago

One could argue that the discussion is once again about tech debt.

Both OpenClaw and MSDOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might have been ready next year. MSDOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is supposed to appeal to YOLO / FOMO sentiments.

And of course, neither will be able to evolve to fit its eventual real-world context. But for some time (much longer than intended), that's where each will stay.

It worked to launch the creator into a gig at OpenAI.

It's a similar YOLO attitude to OpenAI launching modern LLMs while Google was still worrying about all the legal and safety implications. The free market does not often reward conservative, responsible thinking. That's where government regulation comes in.

  • > It worked to launch the creator into a gig at OpenAI

    The author sold his previous software business, and I'm pretty sure he never needs to work again. I doubt "a gig at OpenAI" was high on his wish list when he started on Clawdbot.

  • I believe Google held back on doing a loss leader for LLMs because of shareholders. Look at how much Meta squandered on the metaverse. If Google had given away Gemini before OpenAI launched, their stock would have taken a hit.

  • Conservative thinking isn't responsible.

    That's how you end up like Germany still using cash and fax machines for 60+ years.

  • I wonder if public perception of LLMs would be better had Google been the one to introduce them, after said safety considerations.

    • "Safety considerations" don't matter. The main sticking point with LLMs is that they're a blatant theft of everyone's copyright, all while letting the bosses threaten your job. Blatant stealing to transfer wealth to the ultrawealthy.


    • >after said safety considerations

      Tons of people called for common sense regulation/guardrails years ago and were shouted down as "luddites obstructing progress." It's funny to see this discussion coming back around.

  • Taking fewer visible risks can increase your total risk. We are already under constant threat from deterioration: aging, depreciation and decay. Entropy is the default. Action is what pushes back against it.

    • You do not fight entropy, only move it around, and in so doing, increase it somewhere. It is still worth it to take action. We may eventually find an action that actually reduces entropy, but no such action exists yet.


  • > It worked to launch the creator into a gig at OpenAI.

    True, but it doesn't scale. No amount of YOLO will let anyone else repeat that feat.

    • Then why does the creator keep complaining that the maintainers he onboards keep getting poached by AI companies? It seems more like it's scaling too well.

OpenClaw was an inevitability. An obvious idea that predates LLMs. It took this long for models and pricing to catch up. As much as I dislike this term, if there's one clear example of "Product Model Fit", it's OpenClaw - well, except that arguably what made it truly possible was subscription pricing introduced with Claude Code; before, people were extremely conservative with tokens.

But the point is, OpenClaw is just the first that got lucky and went viral. If not for it, something equivalent would have. Much like LangChain in the early LLM days.

  • > if there's one clear example of "Product Model Fit", it's OpenClaw

    You think so? OpenClaw certainly owned the hype cycle for a while. There was a thread on HN last week where someone asked who was actually using it, and the comments were overwhelmingly "tried it, it was janky and I didn't have a good use case for it, so I turned it off," with a handful of people who seemed to have committed to it and had compelling use cases. Obviously anecdotal, but that has been the trend I've seen in conversations around it lately.

    Also, the fact that it became the most-starred repo on GitHub within a matter of months raises a few questions for me about what is actually driving that hype cycle. It seems hard to believe that's strictly organic.

  • Would you mind explaining what that idea actually is? I don't understand what people are trying to do with this thing, or why they would think that would be a good thing to do, and some of the stories about it sound basically insane, so I must not be grasping the core idea.

    • To me it seems like an LLM-based take on automation software like Zapier. The problem with Zapier is that services need to provide APIs, and Zapier needs to support each of those APIs before you can use them in an automation workflow.

      But because OpenClaw can just use a web browser like a normal user, you don't need any of those APIs, and there are no theoretical limits on the services that can be integrated and automated.

      Right now there are a lot of issues and bugs, and people have more trust in a deterministic solution like Zapier. But maybe the LLMs and OpenClaw will get there eventually, and if they do, I can see how that's a better solution than a deterministic system.

    • Plain English automation, including control of external systems. Even better that it exhibits some forms of decision-making autonomy for edge cases.

    • It's a handful of useful features that together feel qualitatively different, like you're talking to a real person.

    • It seems like the most fully reified attempt at allowing a person to delegate _all_ of their responsibilities to the Slop Machine.

      Which has of course always been the true allure of AI. Do nothing and pretend you did something, when pretending is something you can be bothered to do.

MSDOS and similar single-user OSes were not originally designed for networked computers with persistent storage. A different set of constraints.