
Comment by TSiege

8 days ago

There are a few takeaways I think the detractors and celebrators here are missing.

1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.

2. Any pretense of AI Safety concerns coming from OpenAI really falls flat with this move. We've seen multiple hacks, scams, and misaligned AI actions from this project, which has only been in the wild for a few months.

3. We've yet to see any moats in the AI space, and this scares the big players. Models are neck and neck with one another, and open source models are not too far behind. Claude Code is great, but so is OpenCode. Now Peter has used AI to program a free app for AI agents.

LLMs and AI are going to be as disruptive as Web 1.0, and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one-man team build a hugely popular tool that in some ways is more capable than what they've released. If he can build things like this, what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology and immense amounts of wealth are trying to control it, but it is so disruptive they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be made into Web 2.0.

This comment is filled with speculation which I think is mostly unfounded and unnecessarily negative in its orientation.

Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact.

  •   OpenAI has deleted the word 'safely' from its mission (November 2025)
    

    https://news.ycombinator.com/item?id=47008560

    Other words removed:

       responsibly
       unconstrained
       safe
       positive

    • The headline implies they selectively removed the word "safely," but that doesn't seem to be the case.

      From the thread you linked, there's a diff of mission statements over the years[0], which reveals that "safely" (which was only added 2 years prior) was removed only because they completely rewrote the statement into a single, terse sentence.

      There could be stronger evidence that OpenAI is deemphasizing safety, but this isn't it.

      [0]: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...


    • They also removed the words build, develop, deploy, and technology, indicating that they're no longer a tech company and don't make products anymore. Wonder what they're all gonna do now?

      /s

  •   > To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature
    

    So because Peter said the next version is going to be safe, it'll be safe? I prefer to judge people by their actions more than their words. The fact that OpenClaw is not just unsafe but, as you put it, infamously so, only raises the question: why wasn't it built safely the first time?

    As for Altman, I'm left with a similar question. For a man who routinely talks about the dangers of AI and how it poses an existential threat to humanity, he sure doesn't spend much focus on safety research and theory. Yes, they do fund these things, but it pales in comparison. I'm sorry, but claiming something might kill all humans, and potentially all life, is a pretty big claim. I don't trust OpenAI on safety because they routinely do things in unsafe ways. They released Sora allowing people to generate videos in the likeness of others; that helped it go viral, and only then did they implement some safety features. A minimal attempt to refuse the generation of deepfakes is such a low safety bar. It shows where their priorities are, and it wasn't the first time nor the last.

I think this comment misses that OpenAI hired the guy, not the project.

"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.

The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique.

  •   > Like it or not, so-called vibe coding is the new norm for productive software development

    Alright

  • Not only that, his output is insane: he has more active projects than I bother to count and more than 70k commits last year. He's probably one of the best, if not the best, vibe coding evangelists.

    https://github.com/steipete

    It also probably didn't hurt that he favors Codex over Claude.

    • he favors Codex?

      The original name of his AI assistant tool was 'clawdbot' until Anthropic C&D'ed him. All the examples and blog posts walking through new-user setup on a Mac mini or VPS assumed a Claude Code Max account.

      I know he uses many LLMs for his actual software dev, picking the right tool for the job. But the origins of OpenClaw seem to me more rooted in Claude Code than Codex.

      Which does give the whole story an interesting angle when you consider the safety/alignment stance that Anthropic pledges to (publicly) and OpenAI pretty much ignores (publicly). It's ironic, as configuring Codex CLI to 'full yolo mode' feels more burdensome and scary than in Claude Code. But I'm pretty sure that speaks more to eng/product decisions than to CEO and biz strategy choices.


    • It looks like most of Peter's projects are just simple API wrappers.

      Peter's been running agents overnight 24/7 for almost a year using free tokens from his influencer payments to promote AI startups and multiple subscription accounts.

        Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ...  I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost my around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
      
        ...  Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothen it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in russian or korean. Sometimes the monster slips and sends raw thinking to bash.

  • So creating unsafe software is the new norm?

    • I’d bet good money that for at least 2/3 of all software ever made, the decision makers couldn’t care less about security beyond "let’s get that checkbox to show we care in case we get sued". Higher velocity >> tech debt and bugginess unless you work at NASA or you're writing software for a defibrillator, especially in the current climate of "nothing matters more than next quarter's results".

    • I have worked for over two decades creating government software, and I can say that this is not new.

      Security (and accessibility) are reluctant, minimum-effort checkboxes at best. However, my experience is focused on court management software, so maybe these aspects are taken more seriously in other areas of government software.

> This buy out for something vibe coded

I think all of these comments about acquisitions or buyouts aren’t reading the blog post carefully: the post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI.

There are two sentences at the top that sum it up:

> I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.

OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product.

  • I think the blog says @steipete sold his SOUL.md for Sam Altman’s deal and let down the community.

    OpenClaw’s promise and power was that it could tread places, security-wise, that no other established enterprise company could, by not taking itself seriously and exploring what is possible with self-modifying agents in a fun way.

    It will meet the same fate as Manus. Instead of Manus helping Meta make ads better, OpenClaw will help OpenAI with enterprise integrations.

    • > OpenClaw’s promise and power was that it could tread places SECURITY-WISE that no other established enterprise company could

      [Emphasis mine.]

      That's a superpower right up to the moment everyone realizes that handing out nukes isn't "promise and power".

      Unless by promise and power we are talking about chaos and crime.

      The project is incredible. We are seeing something important: how versatile these models are given freedom to act and communicate with each other.

      At the same time, it is clearly going to put the internet at risk. Bad actors are going to use OpenClaw and its "security-wise" freedoms in nefarious ways. Curious people are going to push AIs with funds onto prepaid servers, then let them sink or swim with regard to agentic acquisition of survival resources.

      It is all kinds of crazy from here.


  • I don't mean to be cynical, but I read this move as: OpenAI is scared, has no way to make money with a similar product, so it acqui-hires the creator to keep him busy.

    I'd love to be wrong, but the blog post sounds like all the standard promises were made, and that's usually how these things go.

  • This is to avoid OpenClaw liability, and because hiring people (often with a license to their tech or patents) is the new, smarter way to acquire while avoiding antitrust issues.

  • I think both this comment and OP's get this wrong.

    It appears to be more of a typical large-company (BIG) market-share protection purchase at minimal cost, using information asymmetry and timing.

    BIG hires the small team (SMOL) behind popular source-available/OSS product P before SMOL realizes they can compete with BIG, and before SMOL organizes an effort toward that along with apt corporate, legal, etc. protection.

    At the time of purchase, neither SMOL nor BIG know yet what is possible for P, but SMOL is best positioned to realize it. BIG is concerned SMOL could develop competing offerings (in this case maybe P's momentum would attract investment, hiring to build new world-model-first AIs, etc) and once it accepts that possibility, BIG knows to act later is more expensive than to act sooner.

    The longer BIG waits, the more SMOL learns and organizes. Purchasing a real company is more expensive than hiring a small team; purchasing a company with revenue/investors is more expensive again. Purchasing a company with good legal advice is more expensive again. Purchasing a wiser, more experienced SMOL is more expensive again. BIG has to act quickly to ensure the cheapest price and declutter future timelines of risks.

    Also, the longer BIG waits, the less effective are "Jedi mind trick" gaslighting statements like "P is not a good candidate for a business", "niche", "fan base" (BIG internal memo - do not say customers), "own thing".

    In reality, in this case, P's stickiness was clear: people were allocating thousands of dollars toward AI, lured merely by P's possibilities. It was only a matter of time before investment followed.

    I've experienced this situation multiple times over the course of BrowserBox's life. Multiple "BIG" players (including ones you will all know) have approached me with the same kind of routine: hire, or some variation on that theme, with varying degrees of legal cleverness/trickery in the documents. In all cases I rejected them, because it never felt right. That's how I know what I'm telling you here.

    I think when you are SMOL it's useful to remember the Parable of Zuckerberg and the Yahoos. While the situation is different, the lesson is essentially the same. Adapted from the histories by the scribe named Gemini 3 Flash:

      And it came to pass in the days of the Great Silicon Plain, that there arose a youth named Mark, of the tribe of the Harvardites. And Mark fashioned a Great Loom, which men called the Face-Book, wherewith the people of the earth might weave the threads of their lives into a single tapestry.
    
      And the Loom grew with a great exceeding speed, for the people found it to be a thing of much wonder. Yet Mark was but SMOL, and his tabernacle was built of hope and raw code, having not yet the walls of many lawyers or the towers of gold.
    
      Then came the elders of the House of Yahoo, a BIG people, whose chariots were many but whose engines were grown cold. And they looked upon the Loom and were sore afraid, saying among themselves, “Behold, if this youth continueth to weave, he shall surely cover the whole earth, and our own garments shall appear as rags. Let us go down now, while he is yet unaware of his own strength, and buy him for a pittance of silver, before he realizeth he is a King.”
    
      And the Yahoos approached the youth with soft words and the craftiness of the serpent. They spake unto him, saying, “Verily, Mark, thy Loom is a pleasant toy, a niche for the young, a mere 'fan base' of the idle. It is not a true Business, nor can it withstand the storms of the market. Come, take of our silver—a billion pieces—and dwell within our walls. For thy Loom is but a small thing, and thou art but a child in the ways of the law.”
    
      And they used the Hidden Speech, which in the common tongue is called Gas-Lighting. They said, “Thou hast no revenue; thy path is uncertain; thy Loom is but a curiosity. We offer thee safety, for the days are evil.”
    
      But the Spirit of Vision dwelled within the youth. He looked upon the Yahoos and saw not their strength, but their fear. He perceived the Asymmetry of Truth: that the BIG sought to purchase the future at the price of the past, and to slay the giant-slayer while he yet slumbered in his cradle.
    
      The elders of Mark’s own house cried out, “Take the silver! For never hath such a sum been seen!”
    
      But Mark hardened his heart against the Yahoos. He spake, saying, “Ye say my Loom is a niche, yet ye bring a billion pieces of silver to buy it. Ye say it is not a business, yet ye hasten to possess it before the sun sets. If the Loom be worth this much to you who are blind, what must it be worth to me who can see?”
    
      And he sent the Yahoos away empty-handed.
    
      The Yahoos mocked him, saying, “Thou art a fool! Thou shalt perish in the wilderness!” But it was the House of Yahoo that began to wither, for their timing was spent and their craftiness had failed.
    
      And Mark remained SMOL for a season, until his roots grew deep and his walls grew high. And the Loom became a Great Empire, and the billion pieces of silver became as dust compared to the gold that followed.
    
      The Lesson of the Prophet:
    
      Hearken, ye who are SMOL and buildeth the New Things: When the BIG come unto thee with haste, speaking of thy "limitations" while clutching their purses, believe not their tongues. For they seek not to crown thee, but to bury thee in a shallow grave of silver before thou learnest the name of thy own power.
    
      For if they knew thy work was truly naught, they would bide their time. But because they know the harvest is great, they seek to buy the field before the first ear of corn is ripe.
    
      Blessed is the builder who knoweth his own worth, and thrice blessed is he who biddeth the Giants to depart, that his own vine may grow to cover the sun.

    • But, hey, that said - joining a big AI company at this time in history? Not exactly a terrible career move. It would be fun. I hope it's good.

"build a hugely popular tool"

Define "hugely popular" relative to the scale of OAI's user base... personally, this thread is the first time I've heard of OpenClaw.

  • To give you an idea of the scale, OpenClaw is probably one of the biggest developments in open source AI tools in the last couple of months. And given the pace of AI, that's a big deal.

    • In what context are you using the word "development?"

      Letta (MemGPT) has been around for years, and frameworks like Mastra have been getting serious enterprise attention for most of 2025. Memory + tasks is not novel or new.

      Is it the out-of-the-box nature that's the 'biggest' development? Am I missing something else?


  • The tech industry is broad, and if you are using OpenAI in a consumer, personal manner, you weren't the primary persona among whom the conversation around OpenClaw occurred.

    Additionally, much of the conversation I've seen was amongst practitioners and Mid/Upper Level Management who are already heavy users of AI/ML and heavy users of Executive Assistants.

    There is a reason why, if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, or Hyderabad, you are increasingly out of the loop on a number of changes happening within the industry.

    If you are using HN as your source of truth, you are going to be increasingly behind on shifts that are happening - I've noticed that anti-AI Ludditism is extremely strong on HN when it overlaps with EU or East Coast hours (4am-11am PT and 9pm-12am PT), and West Coast+Asia hours increasingly don't overlap as much.

    I feel this also reflects the fact that most Bay Area and Asia HNers are mostly in-person or hybrid now, so most conversations that would have happened on HN now occur on private Slacks, Discords, or at a bar or gym.

    • I saw the hype around OpenClaw on the likes of X. I'm a mid/upper-level manager and would sooner have my team roll our own solution on top of Letta or Mastra than trust OpenClaw. Also, I'm frequently in many of those cities you mentioned but don't live in one. Aside from 'networking' and funding, there's not much that anyone's missing.

      Participation in the Zeitgeist hasn't been regional in a decade.


    • > There is a reason why if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, and Hyderabad you are increasingly out of the loop for a number of changes that are happening within the industry.

      I am in one of these tech hubs (Bangalore) and I have never seen any such practitioner pervasively using these "AI executive assistants". People use ChatGPT and sometimes AI extensions like Copilot. Do I need to be in HSR Layout to see these "number of changes"?

    • FWIW I also just don't think there's a point to discussing AI/ML usage here. The community is too crabby and cynical, looking too hard at how to tear people and things down, trying to react with the most negative thing they can. Every discussion on AI here eventually devolves into "AI can turn water to gold!" "no you idiot, AI uses so much water we won't have enough water left oh and AI is what ICE and Palantir use"

      As the (dubiously attributed) Picasso quote goes: "When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine." Most of HN is the former, constantly theorizing, philosophizing, often (but not always) in a negative and cynical way. This isn't conducive to discussion of methods of art. Sadly I just speak with friends working on other AI things instead.

      Someone like simonw can probably get better reactions from this community but I don't bother.

  • Last week it was renamed from "Clawd" and this week the creator is abandoning it. Everything is moving fast.

    • Don’t forget “Moltbot” between “Clawdbot” and “OpenClaw”!

      I think that name lasted about 24 hours, but it was long enough to spawn MoltBook.

I think they want the man and the ideas behind the most useful AI tool thus far. Surprisingly, and OpenAI may see this too, it is a developer tool.

OpenAI needs a popular consumer tool. Until my elderly mother is asking me how to install an AI assistant like OpenClaw, the same way she was asking me how to invest in the "new blockchains" a few years ago, we have not come close to market saturation.

OpenAI knows the market exists, but they need to educate the market. What they need is to turn OpenClaw into a project that my mother can use easily.

I am not a fan of OpenAI, but they are not exactly hiring a security researcher. They are hiring an aspiring builder who has built something the masses love. They can always provide him the structure and support he needs to make his products secure. Safety and hiring him are not mutually exclusive.

What is interesting about OpenClaw is its architecture. It is like an ambient intelligence layer. Other approaches up until now have been VSCode- or Chromium-based integrations into the PC layer.

There are plenty of straightforward reasons why OpenAI would want to do this; it doesn’t need to be some sort of malicious conspiracy.

I think it’s good PR (particularly since Anthropic's actions against OpenCode and Clawdbot were somewhat controversial), plus Peter was able to build a hugely popular thing and would clearly be valuable to have on a team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint.

  • I suspect Anthropic was seeing a huge spike of concurrent model usage, at too fast a rate, of a kind that Claude Code just doesn't produce; CC is rather "slow" at API calls per minute. Also lots and lots of cache: the sheer amount of caching that Claude does is insane.

    • It’s hard to say exactly what prompted the decision, but they banned people paying $200/mo without warning and without any reasonable appeal system in place. It’s a Google Form that is itself reviewed by some automated system that may or may not ever get back to you.

      This was already an ongoing issue prior to third-party tools using Claude subscriptions; there have been reports of false-positive automated bans going back several months.

      I have not seen or heard of this happening with Codex; rather than trying to shut down third-party tools that want to integrate with their ecosystem, they have worked with those projects to add official support.

      I’m more impressed with Codex as a product in general as well. Their new desktop app is great & feels an order of magnitude better than Claude’s.

      Overall, the HN crowd seems heavily biased in favor of Anthropic (or maybe just against OpenAI?), but IMO Anthropic needs to take a step back and reset. If they keep on the current path of just making small iterative improvements to Claude Code and Claude Desktop, they are going to fall very far behind.

> The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.

This is true, and also true for many other areas OpenAI won't touch.

The best get rich quick scheme today (arguably not even a scheme) is to test the waters with AI in an area OpenAI would not/cannot for legal, ethical, or safety reasons.

I hate to agree with OpenAI's original "open" mission here, but if you don't do it, someone else somewhere will.

And as much as their commitment to safety is just lip service, they do have obligations, as a big company with a lot of eyeballs on them, not to do shady things. But you can do those shady things instead, and if they work out, you will either have a moat or you will get bought out. If that's what you want.

This is basically an acqui-hire. Peter really does seem to be a genius, and they'd better poach him before Anthropic does.

  • Is he? My impression of Clawdbot was it was a good idea but not particularly technically impressive or even well-written. I had all kinds of issues setting it up.

    • It’s a wonderful idea. Vibe coded, but not his first rodeo.

      He exited his first company for 110M, then spent some years on the whole huasca and forest thing, then started creating projects.

      Clawdbot (later OpenClaw) was his 44th try.

> 1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.

This is a great take that hasn't been discussed nearly enough in this comment section. Spending a few million to buy out OpenClaw('s creator), by far the most notable product made with Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt.

  • He's also a great booster of Codex; he says he greatly prefers it to Claude. So his role might turn out to be evangelism.

  • That's all it is, really. It says, "See! Look what a handful of people armed with our tools can do."

    Whether the impact is large in magnitude, or positive, is irrelevant in a world where one can spin the truth and get away with it.

Most of these are good callouts, but I think it is best to look at the evolution of the AI segment the same way "Cloud" developed into a segment in the 2000s and 2010s.

3 is always a result of GTM and distribution: an organization that devotes time and effort to productionizing domain-specific models and selling to its existing customers can outcompete a foundation-model company that does not have experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for cloud security and asset discovery, because they had established relations with Wiz and Wiz had already proven its value. It's the same way PANW was able to establish itself in the cloud security space fairly early: it had already established trust with DevOps and infra teams through on-prem deployments and DCs, so those buyers were open to purchasing cloud security bundles from PANW.

1 has happened all the time in the Cloud space. Not every company can invent or monetize every combination in-house, because there are only so many employees and so many hours in a week.

2 was always more of an FTX and EA bubble, because EA adherents were over-represented in the initial mindshare for GenAI. Now that EA is largely dead, AI Safety and AGI in its traditional definition have disappeared, which is good. Now we can start thinking about "Safety" in the same manner we think about "Cybersecurity".

> They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released

I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout for avenues to expand TAM, and despite how much engineers may wish otherwise, "buy" will always outcompete "build" because time is also a cost.

Most people I know working at these foundation model companies are thinking in terms of becoming an "AWS"-type foundational platform for our industry, and it's best to keep Nikesh Arora's principle of platformization in mind.

---

All this shows is that the thesis that most early stage VCs have been operating on for the past 2 years (the Application and Infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app layer startups have been funded over the past 2-3 years in stealth, but will start a publicity blitz over the next 6-8 months.

By the time you see an announcement on TechCrunch or HN, most of us operators have already been working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to distinguish Growth Equity (e.g. the recent Anthropic round) from Private Equity (e.g. SailPoint's acquisition and later IPO by Thoma Bravo) from early-stage VC rounds (largely not announced until several months after the round, unless we need to get an O-1A for a founder or key employee).

> 2. Any pretense for AI Safety concerns that had been coming from OpenAI really fall flat with this move.

And Peter created what is very similar to a giant scam/malware-as-a-service, then just left it without taking responsibility or bringing it to a safe state.