Gas Town's agent patterns, design bottlenecks, and vibecoding at scale

1 day ago (maggieappleton.com)

I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment.

It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.

And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.

I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.

Maybe it's the natural consequence of large-scale professionalization, of stock option plans and RSUs and levels and sprints and PMs: today's gray hoodie is just the gray suit of the past, updated in fabric but no less dry of imagination.

  • > If you read Steve's writeup, it's clear that this is a big fun experiment:

    So, Steve has the big scary "YOU WILL DIE" statements in there, but he also has this:

    > I went ahead and built what’s next. First I predicted it, back in March, in Revenge of the Junior Developer. I predicted someone would lash the Claude Code camels together into chariots, and that is exactly what I’ve done with Gas Town. I’ve tamed them to where you can use 20–30 at once, productively, on a sustained basis.

    "What's next"? Not an experiment. A prediction about how we'll work. The word "productively"? "Productively" is not just "a big fun experiment." "Productively" is what you say when you've got something people should use.

    Even when he's giving the warnings, he says things like "If you have any doubt whatsoever, then you can’t use it", implying that it's ready for the right sort of person to use, or "Working effectively in Gas Town involves committing to vibe coding," implying that working effectively with it is possible.

    Every day I go on Hacker News and see the responses to posts where someone's blog post carries an inconsistent message like this.

    If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.

    • I agree. I’m one of the Very Serious Engineers, and I liked Steve’s post when I thought it was sort of tongue in cheek, but I was horrified to come to the HN comments and LinkedIn comments proclaiming Gas Town as the future of engineering. There absolutely is a large contingent of engineers who believe this, and it has a real-world impact on my job if my bosses think you can just throw a dozen AI agents at our product roadmap and get better productivity than an engineer. This is not whimsical to me; I’m getting burnt out trying to reconcile the absurd expectations of investors and executives with the real-world engineering concerns of my day-to-day job.

      39 replies →

    • > "If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself."

      If I can be a bit bold: this tic is also a very old rhetorical trick in our industry. Call it Schrödinger's Modest Proposal if you will.

      In it, someone writes something provocative but casts it as both a joke and deadly serious at various points. Depending on how the audience reacts, they can then double down on it being all-in-good-jest or yes-absolutely-totally. People who enjoy the author will explain away the nonsensical tension as "nuance".

      You see it in rationalist writing all the time. It's a tiresome rhetorical "trick" that doesn't fool anyone any more.

      4 replies →

    • These are some very tortured interpretations you're making.

      - "what's next" does not mean "production quality" and is in no way mutually exclusive with "experimental". It means exactly what it says, which is that what comes next in the evolution of LLM-based coding is orchestration of numerous agents. It does not somehow mean that his orchestrator writes production-grade code and I don't really understand why one would think it does mean that.

      - "productively" also does not mean "production quality". It means getting things done, not getting things done at production-grade quality. Someone can be a productive tinkerer or they can be a productive engineer on enterprise software. Just because they have the word "product" in them does not make them the same word.

      - "working effectively" is a phrase taken out of the context of this extremely clear paragraph which is saying the opposite of production-grade: "Working effectively in Gas Town involves committing to vibe coding. Work becomes fluid, an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks. Most work gets done; some work gets lost."

      If he wanted to say that Gas Town wrote production grade code, he would have said that somewhere in his 8000-word post. But he did not. In fact, he said the opposite, many many many many many many times.

      You're taking individual words out of context, using them to build a strawman representing a promise he never came close to making, and then attacking that strawman.

      What possible motivation could you have for doing this? I have no idea.

      > If you say two different and contradictory things...

      He did not. Nothing in the blog post explicitly says or even remotely implies that this is production quality software. In addition, the post explicitly, unambiguously, and repeatedly screams at you that this is highly experimental, unreliable, spaghetti code, meant for writing spaghetti code.

      The blog post could not possibly have been more clear.

      > ...because you did it to yourself.

      No, you're doing this to his words.

      Don't believe me? Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not. No objective reader of this would come to the conclusion that it's ambiguous or misleading.

      2 replies →

    • > If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.

      Our industry is held back in so many ways by engineers clinging to black-and-white thinking.

      Sometimes there isn’t a “final” answer, and sometimes there is no “right” answer. Sometimes two conflicting ideas can be “true” and “correct” simultaneously.

      It would do us a world of good to get comfortable with that.

      2 replies →

    • yeah, the messaging is somewhat insecure in that it preemptively seeks to invalidate criticism by being "just an experiment" while simultaneously making fairly inflammatory remarks about naysayers, like they'll eat dirt if they don't get on board.

      I think it's possible to convey that you believe strongly in your idea and that it could be the future (or "is the future" if you're that sure of yourself) while it's still experimental. I think he would get fewer critics if he weren't so hyperbolic in his pitch and made fewer inflammatory personal remarks about people he hasn't managed to bring on side.

      People I know who communicate like that generally struggle to contribute constructively to nuanced discussions, and tend to seek out confrontation for the sake of it.

    • > "What's next"? Not an experiment.

      I think what’s next after an experiment very often is another experiment, especially when you’re doing this kind of exploratory R&D.

  • I thought it was harmless(ish) fun, but David Gerard put out a post stating that Yegge used Gas Town to push a crypto project that rug-pulled his supporters, while he personally walked away with somewhere between $50K and $100K, from memory.

    I suppose that has little to do with the technical merits of the work, but it's such a bad look, and it makes everyone boosting this stuff seem exactly as dysregulated/unwise as they've appeared to many engineers for a while.

    I met Sean Goedecke, who uses LLMs a bunch and is clearly a serious adult, for lunch a few weeks ago. But half the folks being shoved in front of everyone are behaving totally manic, and people are cheering them on. Absolutely blows my mind to watch.

    https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...

    • That was very weird. In the post where he was arguably "shilling," he seems to have signposted pretty well that it was dumb, but that he would take the money they offered:

      > $GAS is not equity and does not give you any ownership interest in Gas Town or my work. This post is for informational purposes only and is not a solicitation or recommendation to buy, sell, or hold any token. Crypto markets are volatile and speculative — do not risk money you can’t afford to lose.

      ...

      > Note: The next few sections are about online gambling in all its forms, where “investing” is the buy-and-hold long-form “acceptable” form of gambling because it’s tied to world GDP growth. Cryptocurrencies are subject to wild swings and spikes, and the currency tied to Gas Town is on a wild swing up. But it’s still gambling, and this stuff is only for people who are into that… which is not me, and should probably not be you either.

      In the next post he said he wasn't going to shill it any more, and then the price collapsed and people sent him death threats on Twitter. It probably would have collapsed anyway. Perhaps there was some implicit bargain that he shouldn't take the money if he wasn't going to keep shilling? Well, there's certainly no rule saying you have to do that.

      I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.

      13 replies →

  • > If you read Steve's writeup

    Personally I got about 3 paragraphs into what seemed like a twelve-page fevered dream and filed it under "not for me yet".

    • > And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.

      Exactly!

      1 reply →

    • > OK! That was like half a dozen great reasons not to use Gas Town. If I haven’t got rid of you yet, then I guess you’re one of the crazy ones. Hang on. This will be a long and complex ride. I’ve tried to go super top-down and simplify as much as I can, but it’s a bit of a textbook.

  • A sense of art and whimsy and experimentation is less compelling when it's jumping on the hypest of hype-trains. I'd love to see more folk art in programming, but Gas Town is closer to fucking Beeple than anything charming.

  • I like gastown's moxie, it's fun, and seems kind of tongue in cheek.

    What I don't like is people me-tooing gastown as some breakthrough in orchestration. I also don't like how people are doing the same thing for ralph.

    In truth, what I hate is people dogpiling thoughtlessly on things, and only caring about what social media has told them to care about. This tendency makes me get warm tingles at the thought of the end of the world. Agent smith was right about humanity.

  • Perhaps it was his followup post about how people are lining up to throw millions of VC dollars at his bizarre whimsical fever dream that disturbs people? I’m all for arts funding, but…

    • Isn't the point that he refused them? VCs can be dumb (see the crypto hype, even the recent inflated AI raises) so I wouldn't put too much stock in what they think is valuable.

      2 replies →

  • > I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment. It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative.

    Because I actually have an arts degree, and I know the equivalent of a con artist bullshitting their way into money at a rich people's art gallery when I see one.

    And the "pushing and crossing boundaries" argument has been abused as a pathetic defense to hide behind shallowness in the art world for longer than anyone in this discussion board has been alive. It's not provocative when it's utterly predictable, and in this case the "art" is "take the most absurd parody of AI culture and play it straight". Gee whiz how "creative" and "provocative".

  • It isn't though. It crossed the chasm when Steve (who I would like to think is somewhat comfortable after writing a book and holding director-level positions at several startups) decided to endorse an outright crypto pump and dump.

    When he decided to monetize the eyeballs on the project instead of anything related to the engineering. A scheme which, in his own words, Steve isn't smart enough to understand, and which he recommends you not buy, but from which he still makes a tidy profit.

    It's a memecoin now... that has a software project attached to it. Anything related to engineering died the day he failed to disavow the crypto BS and instead started shilling it.

    What happened to engineers calling out BS as BS?

  • "our industry has lost its sense of whimsy"

    The first thing I thought as I read his post and saw the images of the weasels was that he should make a game of it. Maybe name it Bitborn.

  • > I don't get the widespread hatred of Gas Town.

    Fear over what it means if it works.

    • I work in a typical web app company which does accounting/banking etc.

      A couple of days ago I was sitting in a meeting of 10-15 devs, discussing our AI agents. People were raising issues and brainstorming ways around the problems with AI. How to make the AI better.

      Our devs were occupied doing AI things, not accounting/banking things.

      If the time savings were as promised, we should have been 3 devs (with the remaining devs replaced by 7-10 AI agents) discussing accounting/banking.

      If Gas Town succeeds, it will just be the next toy we play with instead of doing our jobs.

      4 replies →

  • >I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

    Remember the days when people experimented with and talked about things that weren't LLMs?

    I used to go to a lot of industry events and I really enjoyed hearing about the diversity of different things people worked on both as a hobby and at work.

    Now it's all LLMs all the time and it's so goddamn tedious.

    • > I used to go to a lot of industry events and I really enjoyed hearing about the diversity of different things people worked on both as a hobby and at work.

      I go to tech meetups regularly. The speed at which any conversation ends up on the topic of AI is extremely grating to me. No more discussions about interesting problems and the creative solutions people come up with. It's all just AI, agentic, vibe code.

      At what point are we going to see the loss of practical skills if people keep on relying on LLMs for all their thinking?

      5 replies →

    • Well, LLMs are an engineering breakthrough of a degree somewhere between the Internet and electricity, in terms of how general-purpose and broadly applicable they are. Much like them, LLMs have the potential to be useful in just about everything people do, so it's no surprise they've dominated the conversation - just like electricity and the Internet did, back in their heyday.

      (And similar to the two, I expect many of the initial ideas for LLM application to be bad, perhaps obviously stupid in hindsight. But enough of them will work to make LLMs become a lasting thing in every aspect of people's lives - again, just like electricity and the Internet did).

      1 reply →

  • It’s not the whimsy. It’s that the whimsy is laced with casual disdain, a touch too much “let me buy you a stick of gum and show you how to chew it”, a frustrated tenor never stated but dog whistled “you dumb fucks”. A soft sharp stink of someone very smart shoving that fact in your face as they evangelise “the obvious truth” you’re too stupid to see.

    And maybe he’s even right. But the reaction is to the flavour of chip on the shoulder delivery mixed into an otherwise fun piece.

    • Don't forget a bit of crypto! People are being way too nice going "I don't understand, but ...". Fuck him.

  • We have a different take than Gas Town's. If AI behaves unreliably and unpredictably, maybe the problem is the ask. So we looked at backend code and decided it was time to bring in more declarative programming. We are already halfway there, with declarative frontends (React) and declarative databases (SQL). Functional programming is an answer, but it didn't replace object-oriented programming, for practical reasons.

    So even if the super serious engineers are serious, they should watch their backs. Eventually enough guardrails will be created, or the ask itself will change enough, for a lot of automation to happen. And make no mistake, it is automation, no different from automated testing replacing armies of manual testers, or code generation, or procedural generation, or any other machine method. And who is going to be left with jobs? People who embrace the change, not people who lament the good old days or who can't adapt.

    Sucks, but that's just how the world works. Sit on the bleeding edge or be burned. Yes, there is an "enough", but I suspect enough lies with the people willing to look at Gas Town or even make their own Gas Town, not with the other side.

  • Hi mediaman! I'm totally there with you and Steve on the whimsy and experimentation! And your tolerant attitude gives me the Dutch courage to post this.

    I've been reading Yegge since the "Stevey's Drunken Blog Rants™" days -- his rantings on Lisp, Emacs, and the Eval Empire shaped how I approach programming. His pro-LLM-coding rants were direct inspiration for my own work on MOOLLM. The guy has my deep respect, and I'm intrigued by his recent work on Sourcegraph and Gas Town.

    Gas Town and MOOLLM are siblings from that same Eval Empire -- both oriented along the Axis of Eval, both transgressively treating LLMs as universal interpreters. MOOLLM immanentizes Eval Incarnate -- https://github.com/SimHacker/moollm/blob/main/designs/eval/E... -- where skills are programs, the LLM is eval(), and play is but the first step of the "Play Learn Lift" methodology: https://github.com/SimHacker/moollm/tree/main/skills/play-le....

    The difference is resource constraints. Yegge has token abundance; I'm paying out of pocket. So where Gas Town explores "what if tokens were free?" (20-30 Claude instances overnight), MOOLLM explores "what if every token mattered?" Many agents, many turns, one LLM call.

    To address wordswords2's concern about "no metrics or statistics" -- I agree that's a gap in Gas Town. MOOLLM makes falsifiable claims with receipts. Last night I ran an Amsterdam Fluxx Marathon stress test: 116+ turns, 4 characters (120+ character-turns per LLM call), complex social dynamics on top of dynamic rule-changing game mechanics. Rubric-scored 94/100. The run files exist. Anyone can audit.

    qcnguy's critique ("same thing multiplied by ten thousand") is exactly the kind of specific feedback that helps systems improve. I wrote a detailed analysis comparing the two approaches -- intellectual lineage (Self, Minsky's K-lines, The Sims, LambdaMOO), the "vibecoded" problem (MOOLLM is LLM-generated but rigorously iterated, not ship-and-hope), and why "carrier pigeon" IPC architecture is a dark pattern when LLMs can simulate many agents at the speed of light.

    an0malous raises a real fear about bosses thinking "throw agents at it" replaces engineering. Both systems agree: design becomes the bottleneck. Gas Town says "keep the engine fed with more plans." MOOLLM says "design IS the point -- make it richer." Different answers, same problem.

    lowbloodsugar mentions building a "proper, robust, engineering version" -- I'd love to compare notes. csallen is right that "future" doesn't mean "production-grade today."

    Analysis: https://github.com/SimHacker/moollm/blob/main/designs/GASTOW...

    MOOLLM repo: https://github.com/SimHacker/moollm

    Happy to discuss tradeoffs or hear where my claims don't hold up. Falsifiable criticism welcome -- that's how systems improve.

    • Adventure Uplift — Building a YAML-to-Web Adventure Compiler with Simulated Computing Pioneers:

      I ran a 260KB session log where I convened a simulated symposium of computing pioneers to design an Adventure Compiler — a tool that compiles YAML adventure definitions that run on MOOLLM under cursor into standalone deterministic browser games requiring no LLM at runtime.

      The twist: the "attendees" include AI-simulated tributes to Will Wright, Alan Kay, Marvin Minsky, Seymour Papert, Ted Nelson, Ken Kahn, Gary Drescher, and 25+ others — both living legends and memorial candles for those who've passed. All clearly marked as simulated tributes, not transcripts.

      What emerged from this thought experiment:

      - Pie menus as the universal interface (rooms, inventory, dialogue trees)

      - Sims-style needs system with YAML Jazz inner voice ("hunger: 1 # FOOD. FOOD. FOOD.")

      - Prototype-based objects (Self/JavaScript delegation chains)

      - Schema mechanism + LLM = "teaching them to fly"

      - Git as the collaboration operating system

      - ToonTalk-inspired "programming by petting" for terpene kittens

      - Speed of Light simulation — the opposite of "carrier pigeon" multi-agent architectures

      On that last point: most multi-agent systems use message passing between separate LLM calls. Agent A generates output, it gets detokenized to text, sent over IPC, retokenized into Agent B's context. MOOLLM inverts this. Everything happens in one LLM call.

      The spatial MOO map (rooms connected by exits) provides navigation, but communication is instantaneous within a call. Many agents, many turns, zero latency between them — and zero token requantization or semantic noise from successive detokenization/tokenization loops.

      The session includes adversarial brainstorming where Barbara Liskov challenges schema contracts, James Gosling questions performance, Amy Ko pushes accessibility, and Bret Victor demands immediate feedback. Each critique gets a concrete response.

      Concrete outputs: a working linter, architecture decisions, 53 indexed topics from "Food Oriented Programming" to "Hidden Objects as Invisible Infrastructure."

      This is MOOLLM's Play-Learn-Lift methodology in action — play with ideas, extract patterns, lift into reusable skills and efficient scripts.

      Session log (260KB, 8000+ lines): https://github.com/SimHacker/moollm/blob/main/examples/adven...

      MOOLLM repo: https://github.com/SimHacker/moollm

      The session uses representation ethics guidelines — all simulated people are clearly marked, deceased figures invoked with memorial candles, and the framing is explicitly "educational thought experiment."

      Happy to discuss the ethics of simulating people, the architecture decisions, or how this relates to my earlier Gas Town comparison post.

      1 reply →

  • >it is a mixture of technology and art, it is provocative

    There's no art (or engineering) in this, and the only provocative thing about it is that Yegge apparently decided to turn it into a crypto scam. I like the intersection of engineering and art, but I prefer it to include both actual engineering and art; 100 rabbits (100r.co) is a good example of that, not a blog post with 15 AI-generated images in it that advocates some unholy combination of gambling, vibe coding, and cryptocurrency crap.

  • Yeah, it's unbelievably tiresome: endless complaints from people pushing up their glasses. IT'S A PROJECT ABOUT POLECATS CALLED GAS TOWN, MADE FOR FUN. Read that again. Either admire it and enjoy it, or quit the umpteenth complaint about vibecoding.

>while Yegge made lots of his own ornate, zoopmorphic [sic] diagrams of Gas Town’s architecture and workflows, they are unhelpful. Primarily because they were made entirely by Gemini’s Nano Banana. And while Nano Banana is state-of-the-art at making diagrams, generative AI systems are still really shit at making illustrative diagrams. They are very hard to decipher, filled with cluttered details, have arrows pointing the wrong direction, and are often missing key information.

So true! Not to mention the garbled text and inconsistent visuals across the diagrams, an insult to the reader's intelligence. How do people tolerate this visual embodiment of slurred speech?

  • I generally am a fan of polished writing, but I do believe that there's room for quickly fired experimental stuff, and quite enjoyed this piece. With the speed he was going, I wouldn't be surprised if the system architecture actually changed in between subsequent sections of the post. It's not a scientific article, but just a cross-country runner at the top of his game giving us a quick update without breaking his stride, and I'm all here for that.

    As Basil Exposition said "I suggest you don’t worry about this sort of thing and just enjoy yourself".

  • Yeah I couldn’t figure out if they were just intended as illustrations and gave up trying to read them after a while.

    Which is unfortunate as it would have been really helpful to have actually legible architecture diagrams, given the prose was so difficult for me to untangle due to the manic “fun” irreverent style (and it’s fine to write with a distinctive voice to make it more interesting, but still … confusing).

    Plus the dozens of new unique names and connections introduced every few paragraphs to try to keep in my head…

    I first asked Gemini 3 Pro to condense it to a boring technical overview, and it produced a single-page outline and Mermaid diagrams that were nearly as unintelligible as the original post. So even AI has issues digesting it, apparently…

    • If you disentangle the prose, it doesn't get better. It reduces down to "I don't know, but it makes output." That is all it can reduce down to. I love using Claude on my code, I love refactoring via broad sweeping statements to an agent, but I don't love trying to refute charlatans. Yegge can go fuck himself with all the dildos his token can now buy (for a limited time only!).

As Yegge himself would agree, there's likely nothing that is particularly good about this specific architecture, but I think that there's something massive in this as a proof of concept for something bigger beyond the realm of software development.

Over the last few years, people have been playing around with trying to integrate LLMs into cognitive architectures like ACT-R or Soar, with not much to show for it. But I think that here we actually have an example of a working cognitive architecture that is capable of autonomous long-term action planning, with the ability to course-correct and stay on task.

I wouldn't be surprised if future science historians will look at this as an early precursor to what will eventually be adapted to give AIs full agentic executive functioning.

Yes to Maggie & Steve's amazingly well-written articles... and:

I would love to see Steve consider different command-and-control structures, and reconsider how work gets done across the development lifecycle. Gas Town's command-and-control structure reads to me like "how a human would think about making software." Even the article admits you need to rethink how you interact in the Gas Town world. It may actually understate this point.

Where and how humans interact feels like something that will always be an important consideration, both in a human & AI dominated software development world. At least from where I sit.

The contrast between the author's high-value flowcharts and Steve Yegge's AI art is a case in point for how confusing his posts and repos are. This is a pervasive problem with AI coding tools, though. Unsurprisingly, the creators of these tools are also the most bullish about agentic coding, and the source code shows the consequences. Even Claude Code itself seems to experience an unusually high number of regressions or undocumented changes for such a widely used product. I had the same problem when recently trying to understand the details of spec-kit or sprites from their docs. Still, I agree that Gas Town is a very instructive example of what the future of AI coding will look like. I'm confident mature orchestration workflows will arrive in 2026.

Lots of comments about Gas Town (which I get, it's hard not to talk about it!), but I thought this was a pretty good article -- nice job of summing up various questions and suggesting ways to think about them. I like this bit in particular:

> A more conservative, easier to consider, debate is: how close should the code be in agentic software development tools? How easy should it be to access? How often do we expect developers to edit it by hand?

> Framing this debate as an either/or – either you look at code or don’t, either you edit code by hand or you exclusively direct agents, either you’re the anti-AI-purist or the agentic-maxxer – is unhelpful.

> The right distance isn’t about what kind of person you are or what you believe about AI capabilities in the current moment. How far away you step from the syntax shifts based on what you’re building, who you’re building with, and what happens when things go wrong.

  • > Buried in the chaos are sketches of future agent orchestration patterns

    I'm not sure if there are that many. We need to be vigilant of "it feels useful & powerful", because it's so easy to feel that way.

    When I write complex plans, I can tell Claude to spawn agents for each task and I can successfully 1-shot a 30-60 minute implementation.

    I've toyed with more complicated patterns, but unlike this speculative fiction, I needed my result to be both simple and working.

    A couple of times now I've had to spend a lot of hours trying to unfuck a design I let slip through. The kind where one agent injects some duplicate code or architecture pattern into the system that's correct enough not to be flagged, but wrong enough to forever trip up every subsequent fresh agent that stumbles on it.

    I tell people my job now is to kick these things every 15 minutes. It's a kinda-joke, kinda-not. But they definitely need kicking. Without it, the decoherence of a non-trivial project is too high, and you still need the time to know where and how to kick.

    I'm not sure what I'd need to be convinced a higher level of orchestration can do that. I do like to try new things. But my spider-sense is telling me this is a Collatz-conjecture-esque dead-end. People get the feeling of making giant leaps of progress, which anybody using these things should be familiar with by now, but something valuable is always just out of reach with the tools we currently have.

    There are some big gains by guiding agents/users to use more sub agents with a clean context - perhaps with some more knobs - but I'd advise against acting under the assumption using grander orchestration tools will inevitably have a positive ROI.

Just writing here a line in defense of Rothko. His paintings are far harder to paint than they look. There were hundreds of layers, thinly applied, carefully thought out, and executed with a developed technique. Try to paint that yourself and you'll see.

I get that Gas Town is part tongue-in-cheek, a strawman to move the conversation on Agentic AI forward. And for that I give it credit.

But I think there's a real missed opportunity here. I don't think it goes far enough. Who wants some giant, complex system of agents conceived by a human? The agents, their roles and relationships, could be dynamically configured according to the task.

What good is removing human judgment from the loop, only to constrain the problem by locking in the architecture a priori? It just doesn't make sense. Your entire project hinges on the waterfall-like nature of the agent design! That part feels far too important, but Gas Town doesn't show much curiosity about changing it. These Mayors, and Polecats, and Witnesses, and Deacons are but one of infinitely many ways to arrange things. Why should there be just one? Why should there be an up-front design at all? A dynamic, emergent network of agents feels like the real opportunity here.

> Yegge deserves praise for exercising agency and taking a swing at a system like this [...] then running a public tour of his shitty, quarter-built plane while it’s mid-flight

This quote sums it all up for me. It's a crazy project that moves the conversation forward, which is the main value I see in it.

It very well could be a logjam breaker for those who are fortunate enough to get out more than they put into it... but it's very much a gamble, and the odds are against you.

Yegge is just running arbitrage on an information gap.

It's the same chasm that all the AI vendors are exploiting: the gap between people who have some idea what is going on and the vast mass of people who don't but are addicted to excitement or fear of the future.

Yegge is being fake-playful about it but if you have read any of his other writing, this tracks. None of it is to be taken very seriously because he values provocation and mischief a little too highly, but bits of it have some ideas worth thinking about.

  • I wonder if he's being paid.

    I detected a noticeable uptick in posts on Reddit bragging about AI coding in the last month, which fit the pattern of other opinion-shaping astroturfing projects I've seen before.

    If Claude came to me with a bundle of cash and tokens to encourage me to keep the AI coding hype train going I'd also go heavy on the excitability, experimental attitude, humor and irreverence.

    I'd also leave a mountain of disclaimers to help protect future me's reputation.

I'm beginning to question the notion that multi-agent patterns don't work. I think there is something extra you get with a proposer-verifier style loop, even if both sides are using the same base model.

I've had very good success with a recursive sub-agent scheme where a separate prompt (agent) is used to gate the recursive call. It compares the caller's prompt with the proposed callee's prompt to determine whether we are making a reasonable effort to reduce the problem into workable base cases. If the two prompts are identical, we deny the request with an explanation. In practice this works so well that I can allow unlimited depth and have zero fear of blowing the stack. Even if the verifier gets it wrong a few times, it only has to get it right once to reverse an infinite descent.
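
The gate itself is tiny. A minimal sketch, assuming a hypothetical llm(system, user) wrapper around whatever model API you use, and a made-up SUBTASK convention for the proposer:

    VERIFIER = ("You gate recursive sub-agent calls. Compare the CALLER prompt "
                "with the PROPOSED CALLEE prompt. Reply ALLOW only if the callee "
                "is a genuine reduction toward a workable base case; if the two "
                "prompts are essentially identical, reply DENY: <reason>.")

    def solve(llm, prompt, caller_prompt=None):
        # Gate every descent through a separate verifier prompt.
        if caller_prompt is not None:
            verdict = llm(VERIFIER, f"CALLER:\n{caller_prompt}\n\nPROPOSED CALLEE:\n{prompt}")
            if not verdict.strip().upper().startswith("ALLOW"):
                return f"[descent denied: {verdict}]"
        answer = llm("Solve the task directly, or reply SUBTASK: <prompt> "
                     "if it must be reduced first.", prompt)
        # Unlimited depth is tolerable because the verifier only has to
        # catch a non-reducing call once to stop an infinite descent.
        if answer.startswith("SUBTASK:"):
            sub = solve(llm, answer[len("SUBTASK:"):].strip(), caller_prompt=prompt)
            answer = llm("Fold this subtask result into a final answer.",
                         f"TASK:\n{prompt}\n\nSUBTASK RESULT:\n{sub}")
        return answer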

  • >I think there is something extra you get with a proposer-verifier style loop, even if both sides are using the same base model.

    DeepSeekMath-V2 seems to show this: increasing the number of prover/verifier iterations increases accuracy. And this is with a model that has already undergone RL under a prover/verifier selection process.

    However, this type of subagent communication maintains full context, and is different from the "breaking into tasks" style of sharding amongst subagents. I'm less convinced of the latter, because oftentimes a problem is more complex than the sum of its parts, i.e. it's the interdependencies that make it complex, and you need to consider each part in relation to the others, not in isolation.

    • The specific way in which we invoke the subagents is critical to the performance of the system. If we use a true external call stack and force proper depth-first recursion, the effective context can be maintained to whatever depth is desired.

      Parallelism and BFS-style approaches do not exhibit this property, and anything that happens within the context or token stream is a much weaker solution. Most agent frameworks are interested in the appearance of speed, so they miss out on the nuance of this execution model.
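
      A minimal sketch of the external-stack idea (hypothetical names, no particular framework): tasks form a tree, and a parent agent only runs once its entire subtree has resolved, depth first.

          from dataclasses import dataclass, field

          @dataclass
          class Task:
              id: str
              prompt: str
              children: list = field(default_factory=list)

          def run_depth_first(root, run_agent):
              # The call stack lives out here, not in any model's token stream.
              stack, results = [root], {}
              while stack:
                  task = stack[-1]
                  unresolved = [c for c in task.children if c.id not in results]
                  if unresolved:
                      stack.append(unresolved[0])   # descend before resuming the parent
                  else:
                      child_out = [results[c.id] for c in task.children]
                      results[task.id] = run_agent(task.prompt, child_out)
                      stack.pop()                   # parent resumes with all child results
              return results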

>Yegge is leaning into the true definition of vibecoding with this project: “It is 100% vibecoded. I’ve never seen the code, and I never care to.”

I don't get it. Even with a very good understanding of the type of work I'm doing and prebuilt knowledge of the code, even on a very well-specced problem, Claude Code etc. just plain fail or produce sloppy code. How do these industry figures claim they've seen no part of a 225K+ line codebase and promise that it works?

It feels like we're getting into an era where oceans of code that nobody understands are going to be produced, which we hope AGI swoops in and cleans up?

  • This is also my experience. Everything I’ve ever tried to vibe code has ended up with off-by-one errors, logic errors, repeated instances of incorrect assumptions etc. Sometimes they appear to work at first, but, still, they have errors like this in them that are often immediately obvious on code review and would definitely show up in anything more than very light real world use.

    They _can_ usually be manually tidied and fixed, with varying amounts of effort (small project = easy fixes, on a par with regular code review, large project = “this would’ve been easier to write myself...”)

    I guess Gas Town’s multiple layers of supervisory entities are meant to replace this manual tidying and fixing, but, well, really?

    I don’t understand how people are supposedly having so much success with things like this. Am I just holding it wrong?

    If they are having real success, why are there no open source projects that are AI developed and maintained that are _not_ just systems for managing AI? (Or are there and I just haven’t seen them?...)

    • In my comment history can be found a comment much like yours.

      Then Opus 4.5 was released. I already had my CC claude.md and my Windsurf global rules + workspace rules set up. Also, my main money-making project is React/Vite/Refine.dev/antd/Supabase... known patterns.

      My point is that given all that, I can now deploy amazing features that "just work," and have excellent ux in a single prompt. I still review all commits, but they are now 95% correct on front end, and ~75% correct on Postgres migrations.

      Is it magic? Yes. What's worse is that I believe Dario. In a year or so, many people will just create their own Loom or Monday.com equivalent apps from a one-page request. Will it be production ready? No. Will it have all the features that everyone wants? No. But it will do what they want, which is 5% of most SaaS feature sets. That will kill at least 10% of basic SaaS.

      If Sonnet 3.5 (~Nov 2024) to Opus 4.5 (Nov 2025) progress is a thing, then we are slightly fucked.

      "May you live in interesting times" - turns out to be a curse. I had no idea. I really thought it was a blessing all this time.

    • Yeah, it sounds like "you're holding it wrong"

      Like, why are you manually tidying and fixing things? The first pass is never perfect. Maybe the functionality is there but the code is spaghetti or untestable. Have another agent review and feed that review back into the original agent that built out the code. Keep iterating like that.

      My usual workflow:

      Agent 1 - Build feature

      Agent 2 - Review these parts of the code, see if you find any code smells, bad architecture, scalability problems that will pop up, untestable code, or anything else falling outside of modern coding best practices

      Agent 1 - Here's the code review for your changes, please fix

      Agent 2 - Do another review

      Agent 1 - Here's the code review for your changes, please fix

      Repeat until testable, maybe throw in a full codebase review instead of just the feature.

      Agent 1 - Code looks good, start writing unit tests, go step by step, let's walk through everything, etc. etc. etc.

      Then update your .md directive files to tell the agents how to test.

      Voila, you have an LLM agent loop that will write decent code and get features out the door.
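
      In sketch form (run_agent(role, prompt) is a hypothetical stand-in for however you actually drive each agent, and the LGTM convention is made up):

          def build_with_review(run_agent, feature_spec, max_rounds=3):
              code = run_agent("builder", f"Build this feature:\n{feature_spec}")
              for _ in range(max_rounds):
                  review = run_agent("reviewer",
                                     "Review for code smells, bad architecture, scalability "
                                     f"problems, and untestable code:\n{code}")
                  if "LGTM" in review:        # reviewer found nothing worth fixing
                      break
                  code = run_agent("builder",
                                   f"Here's the code review for your changes, please fix:\n{review}")
              # once the review loop settles, move on to tests
              run_agent("builder", "Code looks good. Write unit tests, step by step.")
              return code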

      15 replies →

    • I worry about people who use this approach where they never look at the code. Vibe coding IS possible, but you have to spend a lot of time in plan mode and be very clear about the architecture and the abstractions you want it to use.

      I've written two separate moderately-sized codebases using agentic techniques (oftentimes being very lazy and just blanket-approving changes), and I don't encounter logic or off-by-one errors very often, if at all. It seems quite good at the basic task of writing working code, but it sucks at architecture, and you need occasional code review rounds to keep the codebase tidy and readable. My code reviews with the AI are like 50% DRY and separating concerns.

      3 replies →

  • I don't get you guys that are getting such bad results.

    Are you guys just trying to one shot stuff? Are you not using agents to iterate on things? Are you not putting agents against each other (have one code, one critique/test the code, and put them in a loop)?

    I still look at the code that's produced, I'm not THAT far down the "vibe coding" path that I'm trusting everything being produced, but I get phenomenal results and I don't actually write any code any more.

    So like, yeah, first pass the llm will create my feature and there's definitely some poorly written code or duplicate code or other code smells, but then I tell another agent to review and find all these problems. Then that review gets fed back in to the agent that created the feature. Wham, bam, clean code.

    I'm not using gastown or ralph wiggum ($$$) but reading the docs, looking over how things work, I can see how it all comes together and should work. They've been built out to automatically do the review + iteration loop that I do.

    • My feeling has been that 'serious' software engineers aren't particularly suited to these tools. Most have no interest in managing people and are attracted to the deterministic nature of computing. There's a whole psychology you have to learn when managing people, and in my experience a lot of those skills transfer to wrangling AI agents.

      You can't be too prescriptive or verbose when interacting with them, you have to interact with them a bit to start understanding how they think and go from there to determine what information or context to provide. Same for understanding their programming styles, they will typically do what they're told but sometimes they go on a tangent.

      You need to know how to communicate your expectations. Especially around testing and interaction with existing systems, performance standards, technology, the list goes on.

      9 replies →

    • I have some success but by the time I'm done I'm often not sure if I saved any time.

    • My (former) coworker who’s heavy into this stuff produced a lot of unmaintainable slop on his way out while singing agents’ praises to higher-ups. He also felt he was getting a lot of value and had no issues.

      1 reply →

    • It lets 0.05X developers be 0.2X developers and 1X developers be 0.9-1.1X developers.

      The problem is some 0.05X developers thought they were 0.5X and now they think they're 2X.

      5 replies →

  • Where is the "super upvote button" when you need it?

    YES! I have been playing with vibe coding tools since they came out. "Playing" because only on rare occasions have I created something that is good enough to commit/keep/use. I keep playing with them because, well, I have a subscription, but also so I don't fall into the fuddy-duddy camp of "all AI is bad" and can legitimately speak on the value, or lack thereof, of these tools.

    Claude Code is super cool, no doubt, and with _highly targeted_ and _well planned_ tasks it can produce valuable output. Period. But every attempt at full vibe coding I've made has gotten hung up at some point, and I have to step in and manually fix things. My experience is often:

    1. First Prompt: Oh wow, this is amazing, this is the future

    2. Second Prompt: Ok, let me just add/tweak a few things

    10. 10th prompt: Ugh, everytime I fix one thing, something else breaks

    I'm not sure at all what I'm doing "wrong". Flogging the agents along doesn't work well for me; or maybe I'm just having trouble letting go of control and I'm not flogging enough?

    But the bottom line is I am generally shocked that something like Gas Town was able to be vibe-coded. Maybe it's a case of the LLM overstating what it's accomplished (typical), and if you look under the hood it's doing 1% of what it says it is, but I really don't know. Clearly it's doing something. Meanwhile I sit over here trying to build a simple agent with some MCPs hooked up to it using an LLM agent framework, and it falls over after a few iterations.

    • So I’m probably in a similar spot - I mostly prompt-and-check, unless it’s a throwaway script or something, and even then I give it a quick glance.

      One thing that stands out in your steps, and that I've noticed myself: yeah, by prompt 10, it starts to suck. If it ever hits "compaction", that's past the point of no return.

      I still find myself slipping into this trap sometimes because I’m just in the flow of getting good results (until it nosedives), but the better strategy is to do a small unit of work per session. It keeps the context small and that keeps the model smarter.

      “Ralph” is one way to do this. (decent intro here: https://www.aihero.dev/getting-started-with-ralph)

      Another way is “Write out what we did to PROGRESS.md” - then start new session - then “Read @PROGRESS.md and do X”

      Just playing around with ways to split up the work into smaller tasks basically, and crucially, not doing all of those small tasks in one long chat.
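
      Scripted, the second way is just a loop of fresh sessions. A sketch assuming the Claude Code CLI's -p (non-interactive print) mode, with the PROGRESS.md convention from above:

          import subprocess

          def fresh_session(prompt):
              # each task gets a brand-new session: small context, smarter model
              return subprocess.run(["claude", "-p", prompt],
                                    capture_output=True, text=True, check=True).stdout

          for task in ["implement the parser", "add error handling", "write tests"]:
              fresh_session(f"Read @PROGRESS.md for context, then {task}. "
                            "When done, append a summary of what you did to PROGRESS.md.")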

      2 replies →

    • I’ve definitely hit that same pattern in the early iterations, but for me it hasn’t really been a blocker. I’ve found the iteration loop itself isn’t that bad as long as you treat it like normal software work. I still test, review, and check what it actually did each time, but that’s expected anyway. What’s surprised me is how quickly things can scale once the overall architecture is thought through. I’ve built out working pieces in a couple of weeks using Claude Code, and a lot of that time was just deciding on the architecture up front and then letting it help fill in the details. It’s not hands-off, but used deliberately, it’s been quite effective: https://robos.rnsu.net

      1 reply →

    • > 10. 10th prompt: Ugh, everytime I fix one thing, something else breaks

      Maybe that is the time to start making changes by hand. I think this dream of humans never ever writing any more code might be a step too far, and unnecessary.

  • > How do these industry figures claim they see no part of a 225K+ line of code and promise that it works?

    The only promise is that you will get your face ripped off.

    “WARNING DANGER CAUTION - GET THE F** OUT - YOU WILL DIE […] Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant. They will wreck the other chimps, the workstations, the customers. They’ll rip your face off if you aren’t already an experienced chimp-wrangler.”

    • Yeah, I'm at that stage 6 or 7. I'm using multiple agents across multiple terminal windows. I'm not even coding any more, literally I haven't written code in like 2-4 months now beyond changing a config value or something.

      But I still haven't actually used Gastown. It looks cool. I think it probably works, at least somewhat. I get it. But it's just not what I need right now. It's bleeding edge and experimental.

      The main thing holding me back from even tinkering with it is the cost. Otherwise I'd probably play with it a little, but it's not something I'd expect to use and ship production code right now. And I ship a ton of production code with claude.

  • There is an incentive for dishonesty about what AI can and cannot do.

    People from OpenAI were saying that GPT-2 had achieved AGI. There is a very clear incentive for that statement to be made by people who are not using AI for anything productive.

    Even as increasingly bombastic claims are made, it is obvious to anyone who is an actual user that the best AI cannot one-shot everything. And the worst ones: I was using Gemini yesterday and it wouldn't stop outputting emojis; I was using Grok and it refused to give me a code snippet because it claimed its system prompt forbade this... what can you say?

    I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?

    Again though, there is massive financial incentive to make these claims, and some other people will fall along with that because it is good for their career, etc. I have seen this in my own company where senior people are shoehorning this stuff in that they clearly do not actually use or understand (to be clear, this is engineering not management...these are people who definitely should understand but do not).

    Great tool, but the 100% vibecoding without looking at the code, for something that you are actually expecting others to use, is a bad idea. Feels more like performance art than actual work. I like jokes, I like coding, room for both but don't confuse the two.

    • > I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?

      It's your coworker's problem. The one who actually understands the big picture and how the system fits into it. They'll deal with it.

  • No one is promising anything. It's just a giant experiment, and the author explicitly tells you not to use it. I appreciate those who try new things, even if it's possibly akin to throwing s** at a wall and seeing what sticks.

    Maybe it changes how we code or maybe it doesn't. Vibe coding has definitely helped me write throwaway tools that were useful.

    • > It's just a giant experiment and the author explicitly tells you not to use it.

      No, he threw up a hyperbolic warning and then dove deep into how this is the future of all coding in the rest of his talks/writing.

      It’s as good a warning as someone saying “I’m not {X} but {something blatantly showing I am X}”

      1 reply →

  • Who's promising it works?

    It's an experiment to discover what the limits are. Maybe the experiment fails because it's scoped beyond the limits of LLMs. Maybe we learn something by how far it gets exactly. Maybe it changes as LLMs get better, or maybe it's a flawed approach to pushing the limits of these.

  • I'm sympathetic to this view, but I also wonder if this is the same thing that assembly language programmers said about compilers. What do you mean that you never look at the machine code? What if the compiler does something inefficient?

    • Not even remotely close.

      Compilers are deterministic. People who write them test that they will produce correct results. You can expect the same code to compile to the same assembly.

      With LLMs two people giving the exact same prompts can get wildly different results. That is not a tool you can use to blindly ship production code. Imagine if your compiler randomly threw in a syscall to delete your hard drive, or decide to pass credentials in plain text. LLMs can and will do those things.

      5 replies →

    • I write JS, and I have never directly observed the IRs or assembly code that my code becomes. Yet I certainly assume that the compiler author has looked at the compiled output in the process of writing a compiler!

      For me the difference is prognosis. Gas Town has no ratchet of quality: its fate was written on the wall since the day Steve decided he didn't want to know what the code says: it will grow to a moderate but unimpressive size before it collapses under its own weight. Even if someone tried to prop it up with stable infra, Steve would surely vibe the stable infra out of existence since he does not care about that

      4 replies →

    • The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice. It also doesn't involve any "creativity": a compiler is mostly translating a high-level concept into its predefined lower-level components. I don't know exactly what my code compiles to, but I can be pretty certain what the general idea of the assembly is going to be.

      With LLMs all bets are off. Is your code going to import leftpad, call leftpad-as-a-service, write its own leftpad implementation, decide that padding isn't needed after all, use a close-enough rightpad instead? Who knows! It's just rolling dice, so have fun finding out!

      4 replies →

    • The compiler is deterministic and the translation does not lose semantics. The meaning of your code is an exact reflection of what is produced.

      4 replies →

    • No, it is not what assembly programmers said about compilers, because you can still look at the compiled assembly, and if the compiler makes a mistake, you can observe it and work around it with inline assembly or, if the source is available, improve the compiler. That is not the same as saying "never look at the code".

    • I feel like this argument would make a lot more sense if LLMs had anywhere near the same level of determinism as a compiler.

    • >but I also wonder if this is the same thing that assembly language programmers said about compilers

      But as a programmer writing C code, you're still building out the software by hand. You're having to read and write a slightly higher level encoding of the software.

      With vibe coding, you don't even deal with encodings. You just prompt and move on.

      1 reply →

    • I wonder if assembly programmers felt this way about the reliability of the electrical components their code relies upon...

      1 reply →

    • This analogy has always been bad any time someone has used it. Compilers directly transform via known algorithms.

      Vibecoding is literally just random probabilistic mapping between unknown inputs and outputs on an unknown domain.

      Feels like saying that because I don't know how my engine works, my car could've just been vibe-engineered. People have put thousands of hours into making certain tools work up to a given standard and spec, reviewed by many, many people.

      "I don't know how something works" != "This wasn't thoughtfully designed"

      Why do people compare these things.

  • Do you understand at a molecular level how cooking works? Or do you just do some rote actions according to instructions? How do you know if your cooking worked properly without understanding chemistry? Without looking at its components under a microscope?

    Simple: you follow the directions, eat the food, and if it tastes good, it worked.

    If cooks don't understand physics, chemistry, biology, etc, how do all the cooks in the world ensure they don't get people sick? They follow a set of practices and guidelines developed to ensure the food comes out okay. At scale, businesses develop even more practices (pasteurization, sanitization, refrigeration, etc) to ensure more food safety. None of the people involved understand it at a base level. There are no scientists directly involved in building the machines or day-to-day operations. Yet the entire world's food supply works just fine.

    It's all just abstractions. You don't need to see the code for the code to work.

    • That's a terrible analogy lol.

      1. Chefs do learn the chemistry, at least enough to know why their techniques work.

      2. Food scientist is a real job

      3. The supply chain absolutely does have scientists involved in day to day operations lol.

      A better analogy is just shoving the entire contents of the fridge into a pot, plastic containers and all, and assuming it'll be fine.

      4 replies →

  • It's unintuitive, but having an LLM verification loop like a code reviewer works remarkably well; you can even create dedicated agents to check for specific problem areas like poor error handling (a rough sketch follows at the end of this comment).

    This isn't about anthropomorphism, it's context engineering. By breaking things into more agents, you get more focused context windows.

    I believe Gas Town has some review process built in, but my comment is more to address the idea that it's all slop.

    As an aside, Opus 4.5 is the first model I used that most of the time doesn't produce much slop, in case you haven't tried it. It still produces some, but it needs little human involvement for building things; where it mostly needs guidance is on higher-level and architectural decisions.
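
    A minimal sketch of the dedicated-reviewer idea, assuming a generic chat client; the reviewer charters and the complete() function are invented for illustration, not Gas Town's actual mechanism:

    ```python
    # Each reviewer gets a narrow charter, and therefore its own small,
    # focused context window. complete() is a stand-in for whatever LLM
    # client you use; wire it to your provider of choice.
    def complete(prompt: str) -> str:
        raise NotImplementedError("call your LLM provider here")

    REVIEWERS = {
        "error-handling": "Review ONLY error handling: swallowed "
                          "exceptions, missing retries, silent failures.",
        "security": "Review ONLY security: injection, authz, secrets.",
    }

    def review(diff: str) -> dict[str, str]:
        # One call per reviewer keeps each context window focused.
        return {name: complete(f"{charter}\n\n--- DIFF ---\n{diff}")
                for name, charter in REVIEWERS.items()}
    ```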

  • In my experience, it really depends on what you're building _and_ how you prompt the LLM.

    For some things, LLMs are great. For others, they're absolute dog shit.

    It's still early days. Anyone who claims to know what they're talking about either doesn't or what they're saying will be out of date in a month's time (including me).

  • The secret is that it doesn't work. None of these people have built real software that anyone outside their bubble uses. They are not replacing anyone, they are just off in their own corner building sand castles.

    • Just because they're one-off tools that only one person uses doesn't mean they're not "real software". I'm actually pretty excited that it's now feasible for me to replace all my BloatedShittyCommercialApps, of which I only use 5%, with vibe-coded bespoke tools that do only the important 5%, just for me. If that makes it a "sand castle" to you, fine, but this is real software and I'm seeing real benefit here.

      3 replies →

    • > The secret is that it doesn't work.

      I have 100% vibecoded software that I now use instead of a commercial implementation that cost me almost 200 USD a month (a tool for radiology dictation and report generation).

      31 replies →

    • No, that's not true. I rarely write a SINGLE line of code now, either at work or at home. Even for simple config switches, I ask codex/gemini to do it.

      You always have to review the overall diff, though, and go back to the agent with broader corrections.

      1 reply →

    • Of course it works. I haven't looked at code for my internal development in months.

      I don't know why people keep repeating this but it's wrong. It works.

  • OP defines herself as a mediocre engineer. She's trying to sell you Slop Town, not engineering principles.

If we had super-smart AI with low latency and high enough speed, would the perceived need for (and usefulness of) running multiple agents evaporate? Sure, you might want to start working on the prompt or user story for something else while the agent is working on the first thing, but in this thought experiment there wouldn't be a "while", because the first thing would already be done as you lift your hand off the enter key.

  • If they are interacting with the world and tools like web research, compiles, deploys, end-to-end test runs, etc., then no.

    (Maybe you could argue that you could then do everything with an event-driven single agent, like async for LLMs, if you don't mind having a single very ADHD context.)

Originally I thought that Gas Town was some form of high level satire like GOODY-2 but it seems that some of you people have actually lost the plot.

Ralph loops are also stupid because they don't make use of the KV cache properly.

---

https://github.com/steveyegge/gastown/issues/503

Problem:

Every gt command runs bd version to verify the minimum beads version requirement. Under high concurrency (17+ agent sessions), this check times out and blocks gt commands from running.

Impact:

With 17+ concurrent sessions each running gt commands:

- Each gt command spawns bd version

- Each bd version spawns 5-7 git processes

- This creates 85-120+ git processes competing for resources

- The 2-second timeout in gt is exceeded

- gt commands fail with "bd version check timed out"
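
For illustration only: the obvious shape of a mitigation for an issue like this is to memoize the version check instead of spawning a fresh process tree per command. A hypothetical sketch (the cache path and TTL are invented; this is not the project's actual fix):

```python
# Hypothetical mitigation: cache the `bd version` result on disk with a
# TTL, so N concurrent `gt` invocations don't each spawn their own
# bd-plus-git process tree. Cache location and TTL are assumptions.
import json, subprocess, time
from pathlib import Path

CACHE = Path.home() / ".cache" / "gt" / "bd-version.json"
TTL_SECONDS = 300

def bd_version() -> str:
    if CACHE.exists():
        cached = json.loads(CACHE.read_text())
        if time.time() - cached["at"] < TTL_SECONDS:
            return cached["version"]
    version = subprocess.run(["bd", "version"], capture_output=True,
                             text=True, timeout=10).stdout.strip()
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_text(json.dumps({"at": time.time(), "version": version}))
    return version
```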

  • I think it is satire, and a pretty obvious one at that; is anybody taking it for real?

    • Why not both? I think it's pretty clearly both for fun and serious.

      He's thrown out his experiments before. Maybe he'll start over one more time.

      1 reply →

  • > Ralph loops are also stupid because they don't make use of kv cache properly.

    This is a cost/resources thing. If it's more effective and the resources are available, it's completely fine (see the back-of-envelope sketch below).
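
    A back-of-envelope sketch of the cost argument, with made-up prices (cache reads are often roughly an order of magnitude cheaper than uncached input tokens, but check your provider's actual rates):

    ```python
    # Why re-sending the same prefix every iteration is the expensive
    # part. All prices here are illustrative assumptions.
    price_input = 3.00 / 1_000_000       # $/token, uncached (assumed)
    price_cached = price_input * 0.10    # $/token, cache read (assumed)

    prefix_tokens = 50_000  # system prompt + repo context, sent each loop
    iterations = 100

    no_cache = iterations * prefix_tokens * price_input
    with_cache = (prefix_tokens * price_input            # first pass
                  + (iterations - 1) * prefix_tokens * price_cached)
    print(f"no cache: ${no_cache:.2f}, cached: ${with_cache:.2f}")
    # -> roughly $15 without caching vs about $1.60 with it
    ```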

My instinct is that effective AI agent orchestration will resemble human agile software development more than Steve Yegge’s formulation:

> “It will be like kubernetes, but for agents,” I said.

> “It will have to have multiple levels of agents supervising other agents,” I said.

> “It will have a Merge Queue,” I said.

> “It will orchestrate workflows,” I said.

> “It will have plugins and quality gates,” I said.

More “agile for agents” than “Kubernetes for agents”.

Design indeed becomes the bottleneck. I think this points to a step that is implied but still worth naming explicitly: design isn't just planning upfront. It is a loop where you see output, check whether it is directionally right, and refine.

While the agents can generate, they can't exercise that judgment; they can't see nuances, and they can't really walk their actions back in a "that's not quite what I meant" sense.

Exercising judgment is where design actually happens: it is iterative, in response to something concrete. The bottleneck isn't just thinking ahead; it's the judgment call when you see the result, the walking back, as well as the thinking forward.

If it's stupid, but it works, it isn't stupid. Gas Town transcends stupid. It is an abstract garbage generator. Call it art, call it an experiment, but you cannot call it a solution to a problem by any definition of the word.

  • "If it's stupid, but it works, it isn't stupid" is a maxim that only applies to luxury use cases where the results fundamentally don't matter.

    As soon as the results actually matter, the maxim becomes "if it works, but it's stupid, it doesn't work".

    • I just got some medication yesterday where the leaflet included the following phrase: "the exact mechanism of efficacy is unknown."

      So apparently the medical field is not above this logic.

Very interesting to read people’s belief in English as an unambiguous and testable language.

One comment claims it's not necessary to read code when there is documentation (generated by an LLM).

Language varies with geography and with time. British, Americans, and Canadians speak “similar” English, but not identical.

And read a book from 70-80 years ago to see that many words appear to be used for their “secondary meaning.” Of course, what we consider their secondary meaning today was the primary meaning back then.

First time I'm seeing this on HN. Maybe it was posted earlier.

Have been doing manual orchestration where I write a big spec which contains phases (each done by an agent) and instructions for the top-level agent on how to interact with the subagents. It works well, but it's hard to utilize effectively. No doubt this is the future. This approach is bottlenecked by limitations of the CC client, mainly that I cannot see inter-agent interactions fully, only the tool calls. Using a hacked client or a compatible reimplementation of CC may be the answer, unless the API were priced attractively or other models could do the work. Gemini 3 may be able to handle it better than Opus 4.5, though the Gemini 3 pricing model is complex, to say the least.
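
For concreteness, the spec skeleton might look something like this (the phase names and rules are invented for illustration; this is a convention the top-level agent is told to follow, not a CC feature):

```python
# Hypothetical "big spec" skeleton: the top-level agent walks PHASES in
# order, spawning one subagent per phase and gating on its report.
PHASES = [
    {"name": "recon",  "prompt": "Survey the codebase; list risks and unknowns."},
    {"name": "design", "prompt": "Propose interfaces and a data model."},
    {"name": "build",  "prompt": "Implement the approved design; keep tests green."},
    {"name": "review", "prompt": "Audit the diff against the design doc."},
]

ORCHESTRATOR_RULES = """
For each phase: spawn one subagent with the phase prompt, wait for its
written report, and do not start the next phase until the report is
approved. Surface each report verbatim so inter-agent interactions
stay visible to the human.
"""
```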

There is nothing professional, analytical or scientific about Gas Town at all.

He is just making up a fantasy world where his elves run in specific patterns to please him.

There are no metrics or statistics on code quality, bugs produced, feature requirements met, or anything else.

Just a gigantic wank session really.

  • Are you being sarcastic or serious? Meeting requirements is implicitly part of any task. Quality/quantification will be embedded in the tasks (e.g. X must be Y <unit>); code style and quality guidelines are probably there somewhere in his task templates. The explicit portions of tasks will implicitly be covered by testing.

    I do think it's overly complex, but it is a novel concept.

    • Everything you said is also done for regular non-AI development. OP is saying there is no way to compare the two (or even compare version X of Gas Town to version Y of Gas Town) because there are zero statistics or metrics on what Gas Town produces.

      1 reply →

    • >Are you being sarcastic or serious?

      I think if you'd read the article through you'd know they were serious, because Yegge all but admits this himself.

According to my simulated monkey Palm, Gas Town uses the Infinite Number of Typewriters architecture, but unfortunately they charge by the token.

Palm's Infinite Number of Typewriters:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Palm's papers:

From Random Strumming to Navigating Shakespeare: A Monkey's Tribute to Bruce Tognazzini's 1979 Apple II Demo:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

One Monkey, Infinite Typewriters: What It's Like to Be Me:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

The Inner State Question: Do I Feel, or Do I Just Generate Feeling-Words?

https://github.com/SimHacker/moollm/blob/main/examples/adven...

On Being Simulated: Ethics From the Inside:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Judgment and Joy: On Evaluation as Ethics, and Why Making Criteria Visible is an Act of Love:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

The Mirror Stage of Games: Play, Identity, and How The Sims Queered a Generation:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

I-Beam's X-Ray Trace: The Complete Life of Palm: A cursor-mirror and git-powered reflection on Palm's existence:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Palm's Origin Story:

Session Log: Don Hopkins at the Gezelligheid Grotto:

DAY 1 — THE WISH: Don purchases lucky strains, prepares an offering, convenes an epic tribunal with the Three Wise Monkeys, Sun Wukong, a Djinn, Curious George, W.W. Jacobs' ghost, and Cheech & Chong as moderators — then speaks a wish that breaks a 122-year curse and incarnates Palm.

https://github.com/SimHacker/moollm/blob/main/examples/adven...

I ran a similar operation over the summer where I treated vibecoding like a war. I was the general. I had recon (planning) and frontmen/infantry making the changes. Bugs and poor design were the enemy. Planning docs were OPORDs; we had sitreps and after-action reports - a complete e2e workflow. I even had hooks for sounds and sprites. It was fun for a bit, but I regressed to simpler, more boring conceptual workflows.

Anyway, we'll likely always settle on simpler/boring - but the game analogies are fun in the meantime. There's a lot of opportunity to enhance UX around design, planning, and review.

Am I wrong that this entire approach to agent design patterns is based on the assumption that agents are slow? Which yeah, is very true in January 2026, but we’ve seen that inference gets faster over time. When an agent can complete most tasks in 1 minute, or 1 second, parallel agents seem like the wrong direction. It’s not clear how this would be any better than a single Claude Code session (as “orchestrator”) running subagents (which already exist) one at a time.

  • It's likely, then, that you are thinking too small. Sure, for one-off tasks and small implementations a single prompt might save you 20-30 mins. But when you're building an entire library/service/piece of software in 3 days that would normally have taken you 30 days by hand, the real limitation comes down to how fast you can get your design into a structured format, as this article describes.

    • Agree that planning time is the bottleneck, but

      > 3 days

      still seems slow! I’m saying what happens in 2028 when your entire project is 5-10 minutes of total agent runtime - time actually spent writing code and implementing your plan? Trying to parallelize 10m of work with a “town” of agents seems like unnecessary complexity.

      1 reply →

Gas Town people should get together with the Urbit people.

Together they would be unstoppable.

I've been researching the usage of developer tooling at my own and other organizations for years now, and I'm genuinely trying to understand where agentic coding fits into the evolving landscape. One of the most solid things I'm beginning to understand is that many people don't understand how these tools influence technical debt.

Debt doesn't come due immediately; it accrues, and may allow for the purchase of things that were once too expensive, but eventually the bill comes due.

I've started referring to vibe-coding as "credit cards" for developers, allowing them to accrue massive amounts of technical debt that were previously out of reach. This can give some competent developers incredible improvements to their work. But for the people who accrue more technical debt than they have the ability to pay off, it can sink their project and cost our organization a lot in lost investment of both time and money.

I see Gas Town and tools like it as debt schemes, where someone applies for more credit cards to make the payments on prior cards they've maxed out, compounding the issue with the vague goal of "eventually it pays off." So color me skeptical.

Not sure if this analogy holds up in all cases, but it's been helping my organization navigate the application of agents, since it allows us to allocate spend depending on the seniority of each developer. Thus I've been feeling like an underwriter, having to figure out whether a developer requesting more credits or budget for agentic coding can be trusted to pay off the debt they will accrue.

  • I found AI particularly useful in the ossified swamps at big companies, where paying down tech debt would be a major multi-team task impossible to align with OKRs. An agent helps you use natural language to produce the needful boilerplate and get the cursed "do this now" task done.

Gas Town has a very clear "mad scientist/performance art" sort of thing going on, and I love that. It's taking a premise way past its logical conclusion, and I think that's fun to watch.

I haven't seen anything to suggest that Yegge is proposing it as a serious tool for serious work, so why all the hate?

  • First time hearing about this tool and person. I just looked for a YouTube video about it; he was recently interviewed and sounds very serious / bullish on this agentic stuff. I mean, he's saying things like: if you're still using IDEs you're a bad engineer; you're basically 10x slower than people good at agentic coding; HR is going to be looking for reasons to fire these dinosaurs. I'm paraphrasing, but not exaggerating. I mean, it's shilling FOMO and his book. Whatever. I don't really care. I'm more concerned about where things are headed.

I tried building something like this, similar to many others here, but now I'm convinced agents should just use GitHub issues and pull requests. You get nice CI and code reviews (AI or human), and the state of progress is not kept in code.

Basically simulate a software engineering team using GitHub but everyone is an agent. From tech lead to coders to QA testers.

https://github.com/mohsen1/claude-code-orchestrator
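
As a rough sketch of the idea (not the linked repo's actual code), the orchestrator can lean entirely on standard gh subcommands, with the agent invocation left as a placeholder:

```python
# Sketch: GitHub issues as the work queue, PRs as the review surface.
# Only standard `gh` CLI subcommands are used; run_agent() stands in
# for whichever coding agent you prefer, run on a branch per issue.
import json, subprocess

def gh(*args: str) -> str:
    return subprocess.run(["gh", *args], capture_output=True,
                          text=True, check=True).stdout

def run_agent(issue: dict) -> None:
    raise NotImplementedError("run your coding agent on a branch here")

for issue in json.loads(gh("issue", "list", "--state", "open",
                           "--json", "number,title,body")):
    run_agent(issue)              # agent commits work to a branch...
    gh("pr", "create", "--fill")  # ...then CI and reviewers take over
```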

It occurs to me that there is an extraordinary amount of BS coming from all directions these days, and I wonder whether it comes from people with actual real experience or just from some hypothetical, high-level thinking game.

I mean, we use coding agents all the time these days (on autopilot) and there is absolutely nothing of this sort. Coding with AI looks a lot like coding without AI. The same old processes apply.

I mean, "I feel like I'm taking crazy pills".

[flagged]

  • Did you catch the part where it crossed over into a crypto pump-and-dump scam, with Yegge's approval? And then the guy behind the "Ralph" vibe coding thing endorsed the same scam, despite being a former crypto critic who should absolutely know better?

    • Is anybody surprised all the AI influencers are doing the same thing all the crypto influencers are doing?

    • I mean, if I, as a crypto critic, saw an opportunity to suddenly make hundreds of thousands or millions on a fully legal but shady crypto scheme - purely by piggybacking on some other loudmouth (Yegge) - I'd be very hard pressed not to take it.

      1 reply →

  • Brought to you by the creators (abstractly) of vibe coding, Ralph, and yolo mode. Either a conspiracy to deconstruct our view of reality, or just a tendency to invent funny words for novelty.

    • It’s brainrot, that’s what it is.

      I believe agentic coding could eventually be a paradigm shift, if and only if the agents become aware of design decisions and their implications for the system and its surrounding systems as a whole.

      If that doesn’t happen, the entire workflow devolves into specifying system states and behavior in natural language, which is something humans are exceedingly bad at.

      Coincidentally, that is why we invented programming languages: to be able to express program state and behavior unambiguously.

      I’m not bullish on a future where I have to write specifications on all explicit and implicit corner and edge cases just to have an agent make software design choices which don’t feel batshit insane to humans.

      We already have software corporations which produce that kind of code simply because the people doing the specifying don’t know the system or the domain it operates in, and the people doing the implementing of those specifications don’t necessarily know any of that either.

      1 reply →

>when I’m still hovering around stages 4-6 in Yegge’s 8 levels of automation

Maybe Yegge’s 8 levels of automation will prove more important than his Gas Town.

> In the same way any poorly designed object or system gets abandoned

Hah, tell that to Docker, or React (the ecosystem, not the library), or any of the other terrible technologies that have better thought-out alternatives, but we're stuck with them being the de facto standard because they were first.

I have not tried Gas Town yet, but Steve's beads https://github.com/steveyegge/beads (used by Gas Town) has been a game-changer, on the order of what Claude Code was when it arrived.

  • Do you have any workflow tips or write up with beads?

    • My workflow tends to be very simple: start a session; ask the agent "what's next", which prompts it to check beads; and more often than not ask it to just pick up whichever bead "makes more sense".

      In claude I have a code-reviewer agent, and I remind cc often to run the code reviewer before closing any bead. It works surprisingly well.

      I used to monitor context and start afresh when it reached ~80%, but I stopped doing that. Compacting is not as disruptive as it used to be, and with beads agents don't lose track.

      I spent some time trying to measure the productivity change due to beads, analysing cc and codex logs and linking them to deltas and commits in git [1] (the rough shape of that analysis is sketched below). But I did not fully believe the result (a 5x increase when using beads; there has to be some hidden variable) and I moved on to other things.

      Part of the complexity is that these days I often work on two or three projects at the same time, so attribution is difficult.

      [1] Analysis code is at https://github.com/juanre/agent-taylor
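
      For anyone curious, the rough shape of that kind of analysis (a simplification, not the linked repo's actual code; bucketing git churn by day stands in for the real session attribution):

      ```python
      # Simplified stand-in for the measurement described above: total
      # lines added+deleted per day from `git log --numstat`, which you
      # can compare before and after adopting beads.
      import subprocess
      from collections import Counter

      def churn_by_day(repo: str) -> Counter:
          log = subprocess.run(
              ["git", "-C", repo, "log", "--numstat", "--format=%as"],
              capture_output=True, text=True, check=True).stdout
          churn, day = Counter(), None
          for line in log.splitlines():
              cols = line.split("\t")
              if len(cols) == 3 and cols[0].isdigit() and cols[1].isdigit():
                  churn[day] += int(cols[0]) + int(cols[1])
              elif line.strip():
                  day = line.strip()  # a %as date line, e.g. 2026-01-15
          return churn
      ```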

Has anyone contrasted Gas Town with Stanford's DSPy (https://dspy.ai/)? They seem related, but I have trouble understanding exactly what Gas Town is, so I can't do the comparison myself.

  • let me take a shot. i have thought about both for a while.

    dspy is declarative. you say what you want.

    dspy says “if you can say what you want in my format, I will let you extract as much value from current LLMs as possible” with its inference strategies (RLM, COT; “modules”) and optimizers (GEPA).

    gas town is … given a plan, i will wrangle agents to complete the plan. you may specify workflows (protomolecules/molecules) that will be repeatedly executed.

    the control flow is good at capturing delegation. the mayor writes plans, and polecats do the work. you could represent gas town as a dspy program in a while loop, where each polecat loops until its hooked work is done. when work is finished, it's sent to the merge queue and integrated.

    gas town uses mostly ephemeral agents as the units for doing work.

    you could in theory write gas town with dspy (rough sketch below). the execution layer is just an abstraction. gas town operates on beads as state. you could funnel those beads through a dspy program as well.

    the parallels imo are mostly just structured orchestration.

    i hope this comes off as sane. 2026 will be a fun year.
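
    (a very rough illustration of the "dspy program in a while loop" idea; the signatures and field names are invented, and you'd need to configure a real LM first, e.g. dspy.configure(lm=dspy.LM(...)))

    ```python
    # toy gas-town-as-dspy sketch: a "mayor" plans, "polecats" execute.
    # signatures and field names are invented for illustration.
    import dspy

    class WritePlan(dspy.Signature):
        """Break a goal into small, independent work items."""
        goal: str = dspy.InputField()
        work_items: list[str] = dspy.OutputField()

    class DoWork(dspy.Signature):
        """Complete a single work item and return a patch."""
        item: str = dspy.InputField()
        patch: str = dspy.OutputField()

    mayor = dspy.Predict(WritePlan)
    polecat = dspy.ChainOfThought(DoWork)

    def town(goal: str) -> list[str]:
        # a merge queue / integration step would slot in per patch
        return [polecat(item=i).patch for i in mayor(goal=goal).work_items]
    ```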

I commented in the "very serious engineer" thread about my thoughts.

I do want this one off - GT is actually fun to explore and see how multiple agents work together.

I wonder how much more efficient and effective it would be after fine-tuning models for each role.

GasTown is better enjoyed as more of a Fear and Loathing-style, acid-fueled fever dream than as a productivity tool.

Gas Town could be good as a short film. Hell, I thought by all the criticism that it was a short film.

Anybody here read Coding Machines?

There's this implied trust we all have in the AI companies: that the models are either not sufficiently powerful to form a working takeover plan, or that they're sufficiently aligned not to try. And maybe they genuinely try, but my experience is that in the real world nothing is certain. If it's not impossible, it will happen given enough time.

If the safety margin for preventing takeover is "we're 99.99999999 percent sure per 1M tokens", how long before it happens? I made up these numbers, but any guess what they really are?

Because we're giving the models so much unsupervised compute...
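
Making the made-up numbers concrete (the per-window probability and traffic figures below are as arbitrary as the ones above):

```python
# If each 1M-token window independently "goes wrong" with probability
# p, the chance of at least one failure over N windows is 1-(1-p)**N.
p_per_window = 1e-10        # the "99.99999999 percent sure per 1M tokens"
tokens_per_day = 1e12       # assumed fleet-wide agent traffic
windows_per_day = tokens_per_day / 1e6

p_day = 1 - (1 - p_per_window) ** windows_per_day
p_decade = 1 - (1 - p_day) ** (365 * 10)
print(f"per day: {p_day:.1e}, per decade: {p_decade:.0%}")
# -> per day: ~1.0e-04, per decade: ~31%
```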

  • > If it's not impossible, it will happen given enough time.

    I hope you might be somewhat relieved to consider that this is not so in an absolute sense. There are plenty of technological might-have-beens that didn't happen, and still haven't, and probably will never—due to various economic and social dynamics.

    The counterfactual (that everything possible happens) is almost tautological.

    We should try and look at these mechanisms from an economic standpoint, and ask "do they really have the information-processing density to take significant long-term independent action?"

    Of course, "significant" is my weasel word.

    > we're giving the models so much unsupervised compute...

    Didn't you read the article? It's wasted! It's kipple!

> I also think Yegge deserves praise for exercising agency and taking a swing at a system like this, despite the inefficiencies and chaos of this iteration. And then running a public tour of his shitty, quarter-built plane while it’s mid-flight.

Can we please stop with the backhanded compliments and judgement? This is cutting edge technology in a brand new field of computing using experimental methods. Please give the guy a break. At least he's trying to advance the state of the art, unlike all the people that copy everyone else.

  • > Please give the guy a break. At least he's trying to advance the state of the art.

    The problem is that as an outsider it really looks like someone is trying to herd a bunch of monkeys into writing Shakespeare, or trying to advance impressionist art by pretending a baby's first crayon scratches are equivalent to a Pollock.

    I bet he's having a lot of fun playing around with "cutting-edge technology", but it's missing any kind of scientific rigor or analysis, so the results are going to be completely useless to anyone wanting to genuinely advance the use of LLMs for programming.

    • I agree that he probably has a lot of fun. What he's doing is the equivalent of throwing a hand grenade into a crowd and enjoying the chaos of it all. He's set in life and can comfortably retire while the rest of the industry tries to deal with that hand grenade, with some people fighting to get the safety pin out while others try to stop them.