
Comment by abalone

7 days ago

I think this comment misses that OpenAI hired the guy, not the project.

"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.

The open source project, which will supposedly remain open source and can be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity, it's precisely because the guy doing the vibe coding was doing something personally unique.

  "Like it or not, so-called vibe coding is the new norm for productive software development"

Alright

Not only that, his output is insane: he has more active projects than I bother to count and more than 70k commits last year. He's probably one of the best, if not the best, vibe-coding evangelists.

https://github.com/steipete

It also probably didn't hurt that he favors Codex over Claude.

  • he favors Codex?

    The original name of his AI assistant tool was 'clawdbot' until Anthropic C&D'ed him. All the examples and blog posts walking through new-user setup on a Mac mini or VPS assumed a Claude Code Max account.

    I know he uses many LLMs for his actual software dev: right tool for the job. But the origins of openclaw seem to me more rooted in Claude Code than Codex.

    Which does give the whole story an interesting angle when you consider the safety/alignment commitments that Anthropic makes (publicly) and OpenAI pretty much ignores (publicly). Which is ironic, since configuring Codex CLI into 'full yolo mode' feels more burdensome and scary than doing the same in Claude Code. But I'm pretty sure that speaks more to eng/product decisions than to CEO and biz-strategy choices.

  • It looks like most of Peter's projects are just simple API wrappers.

    Peter's been running agents overnight, 24/7, for almost a year, using free tokens he gets as influencer payments for promoting AI startups, plus multiple subscription accounts.

      Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ...  I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost me around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
    
      ...  Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothe it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in Russian or Korean. Sometimes the monster slips and sends raw thinking to bash.
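    A quick sanity check of the quoted cost math, using only the numbers from the quote itself (~$1k/month for the subscriptions, API estimated at 5-10x that; the quote warns these figures are imprecise):

    ```python
    # All figures come from the quote above; the 5x-10x multiplier is the
    # quote's own rough range, not a measured API bill.
    SUB_COST_PER_MONTH = 1_000        # ~$1k/month for 4 OpenAI subs + 1 Anthropic sub
    API_MULTIPLIER_RANGE = (5, 10)    # quoted estimate of API cost vs. subscriptions

    for multiplier in API_MULTIPLIER_RANGE:
        api_cost = SUB_COST_PER_MONTH * multiplier
        savings = api_cost - SUB_COST_PER_MONTH
        print(f"{multiplier}x -> API ~${api_cost:,}/month, subs save ~${savings:,}/month")
    ```

    Even at the low end of the quoted range, the subscriptions would be saving roughly $4k/month over API pricing, which is the "damn good deal" being claimed.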

So creating unsafe software is the new norm?

  • I’d bet good money that for at least 2/3 of all software ever made, the decision makers couldn’t care less about security beyond "let’s get that checkbox to show we care in case we get sued". Higher velocity >> tech debt and bugginess unless you work at NASA or you're writing software for a defibrillator, especially in the current climate of "nothing matters more than next quarter's results".

  • I have worked for over two decades creating government software, and I can say that this is not new.

    Security (and accessibility) are reluctant, minimum-effort checkboxes at best. However, my experience is focused on court-management software, so maybe these aspects are taken more seriously in other areas of government software.