Comment by 0xbadcafebee

13 hours ago

Nah. The most universal rule of human nature is humans be lazy. Makers do extra effort for no real gain. Vibe coders do less effort for more gain. Vibe coding is what everyone wanted computers to be from the beginning. Tell it what to do, it does it.

Actually, the future isn't vibe coding, it's vibe agenting. GPT 5.3 is so advanced, you don't need to write a program to do something. You tell the agent what you want, and it does it for you by "using" desktop apps like a person. If it can't do it manually, it'll write a program to do it. That's where we're headed.

At the same time, the quality of all this is still absolute dogshit, so the market for things that actually work properly is probably still there. Which CEO recently had OpenClaw delete all their mail?

  • It's really not bad quality. The code written by AI is pretty decent now and fixing it is easy too. There are people making poor decisions with the technology (like OpenClaw); that doesn't make the technology bad.

    With AI you can build tools fast. You can then version and release those tools, and improve them, fast. Then the AI can use that version of that tool. This gives the AI a fixed set of deterministic functionality that works the same way every time.

    The CEO who had all their mail deleted: that happened because the tools they have right now aren't very good. Whatever that mail tool was, it could easily be modified with a limiter that blocks attempts to mass-delete emails. Hell, your own email client will already prompt you to confirm if you really want to "delete all emails" - because humans are stupid, like AI, and make mistakes, like AI. You just have to build the guardrails in, rather than hoping and praying that the AI will "behave itself". If the AI is a monkey at a joystick, we still control all the machinery attached to the joystick.
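
    The limiter idea above is simple to sketch. This is a minimal, hypothetical example - `MailClient`, `delete_emails`, and the batch limit are all made-up stand-ins, not any real mail API - showing a deterministic guardrail enforced by the tool itself rather than by trusting the agent:

    ```python
    # Hypothetical guardrail around a mail tool an agent is allowed to call.
    # MailClient and its delete() method are illustrative stand-ins, not a real API.

    class BulkDeleteBlocked(Exception):
        """Raised when a delete request exceeds the allowed batch size."""

    MAX_DELETES_PER_CALL = 10  # policy set by the tool author, not by the agent

    def delete_emails(client, message_ids, confirmed=False):
        """Delete messages, refusing mass deletes unless a human confirmed."""
        if len(message_ids) > MAX_DELETES_PER_CALL and not confirmed:
            raise BulkDeleteBlocked(
                f"Refusing to delete {len(message_ids)} messages "
                f"(limit {MAX_DELETES_PER_CALL}); human confirmation required."
            )
        for mid in message_ids:
            client.delete(mid)
        return len(message_ids)
    ```

    The point is that the check runs the same way every time, no matter what the model "decides" - the guardrail lives in the machinery, not the monkey.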