Tip: I very often use AI for inspiration. In this case, I ended up keeping a lot (not all) of the UI code it made, but I will very often prompt an agent, throw away everything it did, and redo it myself (manually!). I find the "zero to one" stage of creation very difficult and time consuming and AI is excellent at being my muse.
This right here is the single biggest win for coding agents. I see and directionally agree with all the concerns people have about maintainability and sprawl in AI-mediated projects. I don't care, though, because the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. It's getting to that golden moment that constitutes 80% of what's costly about programming for me.
This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.
PS
Put a weight on that bacon!
I was talking about this the other day with someone - broadly I agree with this, they're absolutely fantastic for getting a prototype so you can play with the interactions and just have something to poke at while testing an idea. There are two problems I've found with that, though - the first is that it's already a nightmare to convince management that something that looks and acts like the thing they want isn't actually ready for production, and the vibe-coded code is even less ready for production than my previous prototyping efforts.
The second is that a hand-done prototype still teaches you something about the tech stack and the implementation - yes, the primary purpose is to get it running quickly so you can feel how it works, but there's usually some learning you get on the technical side, and often I've found my prototypes inform the underlying technical direction. With vibe coded prototypes, you don't get this - not only is the code basically unusable, but you really are back to starting from scratch if you decide to move forward - you've tested the idea, but you haven't really tested the tech or design.
I still think they're useful - I'm a big proponent of "prototype early," and we've been able to throw together some surprisingly large systems almost instantly with the LLMs - but I think you've gotta shift your understanding of the process. Non-LLM prototypes tend to be around step 4 or 5 of a hypothetical 10-step production process; LLM prototypes are closer to step 2. That's fine, but you need to set expectations around how much is left to do past the prototype, because it's more than it was before.
> it's already a nightmare to convince management
Then just don’t show it to management, no?
> the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. [...] This is the part where I simply don't understand the objections people have to coding agents.
That's what's valuable to you. For me the zero to one part is the most rewarding and fun part, because that's when the possibilities are near endless, and you get to create something truly original and new. I feel I'd lose a lot of that if I let an AI model prime me into one direction.
The white page problem hits me every time.
It's not FUN building all the scaffolding and setting up build scripts and all the main functions and directory structures.
Nor do I want to use some kind of initialiser or skeleton project, they always overdo things in my opinion, adding too much and too little at the same time.
With AI I can have it whip up an MVP-level happy-paths-only skeleton project in minutes and then I can start iterating with the fun bits of the project.
OP is considering output productivity, but your comment is about personal satisfaction with the process.
Surely there are some things which you can’t be arsed to take from zero to one?
This isn’t selling your soul; it is possible to let AI scaffold some tedious garbage while also dreaming up cool stuff the old fashioned way.
> This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.
It sounds like the blank page problem is a big issue for you, so tools that remove it are a big productivity boost.
Not everyone has the same problems, though. Software development is a very personal endeavor.
Just to be clear, I am not saying that people in category A or category B are better/worse programmers. Just that everyone’s workflow is different so everyone’s experience with tools is also different.
The key is to be empathetic and trust people when they say a tool does or doesn’t work for them. Both sides of the LLM argument tend to assume everyone is like them.
Just this week I watched an interview with Mitchell about his dev setup, and when asked about using neovim instead of an IDE he said something along the lines of "I don't want something that writes code for me". I'm not pointing this out as a criticism, but rather to note that an accomplished developer like him sees value in LLMs that he didn't see in previous IntelliSense-like tooling.
Not sure exactly what you're referring to, but I'm guessing it may be this interview I did 2 years ago: https://youtu.be/rysgxl35EGc?t=214 (timestamp linked to LLM-relevant section) I'm genuinely curious because I don't quite remember saying the quote you're saying I did. I'm not denying it, but I'd love to know more of the context. :)
But, if it is the interview from 2 years ago, it revolved more around autocomplete and language servers. Agentic tooling was still nascent so a lot of what we were seeing back then was basically tab models and chat models.
As the popular quote goes, "When the facts change, I change my mind. What do you do, sir?"
The facts and circumstances have changed considerably in recent years, and I have too!
Cognitive Dissonance. Still there, even in the best of us.
I explain it to my peers as "exploiting Cunningham's Law[0] with thyself"
I'll stare blankly at a blank screen/file for hours seeking inspiration, but the moment I have something to criticise I am immediately productive and can focus.
[0]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law
> The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer.
About that quote: iirc it's also a technique in intelligence work for getting information out of people. You say something stupid and wrong, and they will instantly correct you on the spot and explain why, etc.
So it works in real life too
People get into this field for very different reasons.
- People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.
- People who like general engineering. AI is positive for reducing the amount of (mundane) code to write, but still requires significant high-level architectural guidance. It’s a tool.
- People who like product. AI can be useful for prototyping but won't be able to make a good product on its own. It's a tool.
- People who just want to build an MVP. AI is honestly amazing at making something that at least works. It might be bad code but you are testing product fit. Koolaid mode.
That’s why everyone has a totally different viewpoint.
> - People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.
Those who value craftsmanship would value LLMs, since they can pick up new languages or frameworks much faster. They can then master the newly acquired skills on their own if preferred, or they can use an LLM to help along the way.
> People who like product. AI can be useful for prototyping but won't be able to make a good product on its own. It's a tool.
Any serious product comprises multiple modules, layers, and interfaces. LLMs can help greatly with building some of those building blocks. Definitely a useful tool for product building.
Real subtle. Why not just write "there are good programmers and bad programmers and AI is good for bad programmers and only bad programmers"? Think about what you just said about Mitchell Hashimoto here.
> This is the part where I simply don't understand the objections people have to coding agents
Because I have a coworker who is pushing slop at unsustainable levels, and proclaiming to management how much more productive he is. It’s now even more of a risk to my career to speak up about how awful his PRs are to review (and I’m not the only one on the team who wishes to speak up).
The internet is rife with people who claim to be living in the future where they are now a 10x dev. Making these claims costs almost nothing, but it is negatively affecting mine and many others' day-to-day.
I’m not necessarily blaming these internet voices (I don’t blame a bear for killing a hiker), but the damage they’re doing is still real.
I don't think you read the sentence you're responding to carefully enough. The antecedent of "this" isn't "coding agents" generally: it's "the value of an agent getting you past the blank page stage to a point where the substantive core of your feature functions well enough to start iterating on". If you want to respond to the argument I made there, you have to respond to the actual argument, not a broader one that's easier (and much less interesting) to take swipes at.
Not sure what to tell you; if there's a problem, you have to speak up.
What if your coworker was pushing tons of crap code and AI didn't exist? How would you deal with the situation then? Do that.
Maybe it's possible to use AI to help review the PRs, and claim it's the AI making the PR reviews hyperproductive?
I'm the opposite, I find getting started easy and rewarding, I don't generally get blocked there. Where I get blocked, after almost thirty years of development, is writing the code.
I really like building things, but they're all basically putting the same code together in slightly different ways, so the part I find rewarding isn't the coding, it's seeing everything come together in the end. That's why I really like LLMs: they let me do all the fun parts without any of the boring parts I've done a thousand times before.
It's funny because the part you find challenging is exactly the thing LLM skeptics tend to say accounts for almost none of the work (writing the code). I personally find that once my project is up on its legs and I'm in flow state, writing code is easy and pleasant, but one thing clear from this thread is everyone experiences programming a little differently.
> the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races
AI is an absolute boon for "getting off the ground" by offloading a lot of the boilerplate and scaffolding that one tends to lose enthusiasm for after having to do it for the 99th time.
> AI is excellent at being my muse.
I'm guessing we have a different definition for muse. Though I admit I'm speaking more about writing (than coding) here, for myself a muse is the veritable fount of creation - the source of ideas.
Feel free to crank the "temperature" on your LLM until the literal and figurative oceans boil off into space, at the end of the day you're still getting the ultimate statistical distillation.
https://imgur.com/a/absqqXI
My trouble is that the remaining 20% of work takes 80% of my time. AI assistance or not. The edge
100% agree, and LLMs do have many blind spots and high confidence, which makes them hard to really trust without checking.
Agree – my personal rule is that I throw away any branches where I use LLM-generated code, and I still find it very helpful because of the speed of prototyping various ideas.
100% agreed.
> "Put a weight on that bacon!" ?
Mitchell included a photograph of his breakfast preparation, and the bacon was curling up on the frying pan.
I concur here and would like to add that I worry less about sprawl when I know I can ask the agent to rework things in future. Yes, it will at times implement the same thing twice in two different files. Later, I’ll ask it to abstract that away to a library. This is frankly how a lot of human coding effort goes too.
It's invaluable if you don't know how to work with it
This is an artefact of a language ecosystem that does not prioritize getting started. If you picked php/laravel, a few commands put you ahead of the days of wiring-up work that golang or node requires to get to a starting point.
I guess it depends on your business. I rarely start new professional projects, but I maintain them for 5+ years - a few pieces of production software I started are now in the double digits. Ghostty definitely aims to be in that camp of software.
I really respect Mitchell's response to the OpenAI accident, even if it is seen in a positive light for Ghostty. I can't think of any software vendor that actively tries to eliminate nags/annoyances (thinking specifically of MS Auto Update), so this is welcome.
Also this article shows responsible use of AI when programming; I don't think it fits the original definition of vibe coding that caused hysterics.
yeah the usage of "vibe coding" here doesn't fit at all. That term is so overused.
Worth noting that Mitchell didn't actually use the term "vibe coding" anywhere in this article.
He called it "vibing" in the headline, which matches my suspicion that the term "vibe" is evolving to mean anything that uses generative AI; see also Microsoft "vibe working": https://www.microsoft.com/en-us/microsoft-365/blog/2025/09/2...
It has simply evolved from its initial meaning, as language does.
> Also this article shows responsible use of AI when programming; I don't think it fits the original definition of vibe coding that caused hysterics.
Yep. It's vibe engineering, which simonw coined here: https://simonwillison.net/2025/Oct/7/vibe-engineering/
As an aside, the Ghostty project recently made it mandatory to disclose the use of AI coding tools:
https://github.com/ghostty-org/ghostty/pull/8289
This post demonstrates one area where AI agents are a huge win: UI frameworks. I have a very similar workflow on an app I'm currently developing in Rust and GTK.
It's not that I don't know how to implement something, it's that the agent can do so much of the tedious searching and trial and error that accompanies this UI framework code.
Notice that Mitchell maintains understanding of all the code through the session. It’s because he already understands what he needs to do. This is a far cry from the definition of “vibe coding” I think a lot of people are riding on. There’s no shortcut to becoming an expert.
Loving Ghostty!
OT but I can't imagine leaving my laptop open next to a pan of sizzling bacon
Ghostty is awesome and I almost dropped iTerm for it until I hit cmd-f and nothing happened.
https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...
I wonder what people would discuss in all these ghostty posts if cmd-f had been present from the start. It’s getting a little boring hearing about it in every post!
There’s interesting things to discuss here about LLM tooling and approaches to coding. But of course we’d rather complain about cmd-f ;)
Missing scrollbars. That and ctrl-f are my two annoyances; apart from that I love Ghostty and use it daily at work.
People don’t care about advanced use cases when fundamentals are missing
Since we're here, I'm just waiting for them to implement drag-and-drop on KDE. Right now it only works on GNOME, and although Ghostty is great I'm not going to switch to GNOME just for a terminal emulator.
Funnily enough, I spent the last weekend implementing search in Ghostty using Claude. There's already a kinda working implementation of the actual searching, so most of the job was just wiring it up to the UI. After two sessions of maybe 10 hours in total, I had basic searching with highlighting and go to next/previous match working in the Linux frontend. The search implementation is explicitly a work in progress though, so not something that's ready for general use.
That said, it certainly made me appreciate the complexity of such a "basic" feature when I started thinking about how to make this work when tailing a stream of text.
It’s planned for v1.3 in March 2026.
https://ghostty.org/docs/install/release-notes/1-2-0#roadmap
Yeah I quickly (and unfortunately!) switched back to Warp, as Ghostty was a little too barebones for my use case.
Word to the wise: Ghostty’s default scrollback buffer is only ~10MB, but it can easily be changed with a config option.
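If I remember the option name right (double-check against the docs), that's the `scrollback-limit` key, sized in bytes, so something like `scrollback-limit = 100000000` in your Ghostty config would give you roughly 100MB.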
Missing search and weird ssh control character issues are my blockers. It's great otherwise!
Reposting my comment from [0]:
[0] https://news.ycombinator.com/item?id=45359239
This is the blocker for me as well.
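Have you tried the suggestions in https://ghostty.org/docs/help/terminfo#ssh? I don't know what issue you may be experiencing but this solved my issue with using htop in an ssh session. If memory serves, the docs' main suggestion is copying Ghostty's terminfo entry over to the server, along the lines of `infocmp -x | ssh YOUR-SERVER -- tic -x -`.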
And the very bit that you admit not being good at - the zero to one stage - will remain forever out of reach because you get the AI to do the heavy lifting. You have to practise that bit yourself, unless of course you never want to be able to do it yourself...
Tangential question. Why does every app still need its own auto-update framework? Shouldn’t we all rely on app-stores and package managers for this?
Because the macOS ecosystem is miserable for this.
No we shouldn’t rely on app stores. That would mean giving up a lot of control and putting your app at the whim of the store.
It is nice to have on Ubuntu, for example, so you can install the latest version by downloading it manually and keep getting updates afterwards.
Ubuntu has both Snap and custom APT repositories.
Such a useful walkthrough.
It looks like Mitchell is using an agentic framework called Amp (I'd never heard of it) - has anybody else here used it or tried it? Curious how it stacks up against Claude Code.
I haven't yet spent any time with it myself, but the impression I have been getting is that it is the most credible of the vendor-independent terminal coding agents right now.
Claude Code, Codex CLI and Gemini CLI are all (loosely) locked to their own models.
This is good to know. I’ll probably play around with it sometime in the future.
BTW, appreciate your many great write-ups - they’ve been invaluable for keeping up to date in this space.
As far as I know it only uses Sonnet 4.5
Reason to stick with Claude Code or Codex CLI is that they use the respective platforms' subscriptions, like Claude Max or ChatGPT Pro. Unless I'm mistaken, with the other CLI apps like Amp you get charged by token usage, which can add up to a lot more than the $200/mo that Claude Max 20x or ChatGPT Pro cost.
I’m using it. It’s expensive but it’s awesome
How is it compared to Claude Code?
> "You can see in chats 11 to 14 that we're entering the slop zone. The code the agent created has a critical bug, and it's absolutely failing to fix it. And I have no idea how to fix it, either.
> I'll often make these few hail mary attempts to fix a bug. If the agent can figure it out, I can study it and learn myself. If it doesn't, it costs me very little. If the agent figures it out and I don't understand it, I back it out. I'm not shipping code I don't understand. While it's failing, I'm also tabbed out searching the issue and trying to figure it out myself."
Awesome characterization ("slop zone"), pragmatic strategy (let it try; research in parallel) and essential principle ("I'm not shipping code I don't understand.")
IMHO this post is gold, for real-world project details and commentary from an expert doing their thing.
> I have a toddler at home so my "computer time" is very scheduled and very limited
I'm slowly wondering if coding AI agents are turning into the best case scenario for workers. Maybe I'm being hopeful haha.
But having something that doesn't make you faster, but just lets you be less concentrated, less focused, and mentally lazier while producing the same amount of work in the same time as a super dedicated and focused session would have? That's the best case scenario here.
In the first coding session:
https://ampcode.com/threads/T-9fc3eb88-5aa2-45e4-8f6d-03697f...
The user says "Consult the oracle." at the end of the prompt and the AI begins its answer with:
"I'm going to ask the oracle for advice on planning custom UI for Sparkle update notifications in the titlebar."
What does this refer to?
EDIT: Another comment in this thread says "It's currently Sonnet 4.5 by default but uses GPT-5 for the "oracle" second opinion", so I guess that is what it means.
LLMs have brought back my excitement with coding.
At work, they help me kickstart a task - taking the first step is very often the hardest part. They also help me grok new codebases and write the boring parts.
But side projects are where the real fun starts - I can materialize random ideas extremely quickly. No more hours spent on writing boilerplate or fighting the tooling. I can delegate the parts I'm not good at to the agent. Or one-prompt a feature; if I don't like the result or it doesn't work, I roll it back.
> Let's set the stage for what led to this feature (pun intended, as you'll see shortly). During a high-profile OpenAI keynote, a demo was rudely interrupted by a Ghostty update prompt
quite apart from the article itself, I'm reminded of how much we put up with from our operating systems. presentations and screen sharing have been a fact of life for a couple of decades now; why is it so hard to tell the operating system to strictly not allow anything other than that one window access to the screen?
there are operating systems where this is not hard at all.
which ones, and what is the mechanism to do it? from what I've seen of both linux and windows there is no good central way to toggle this efficiently, and evidently macs also leave it up to various applications to be well behaved.
> I'm not shipping code I don't understand.
So not vibe coded, but AI assisted.
> Tip: I very often use AI for inspiration. In this case, I ended up keeping a lot (not all) of the UI code it made, but I will very often prompt an agent, throw away everything it did, and redo it myself (manually!). I find the "zero to one" stage of creation very difficult and time consuming and AI is excellent at being my muse.
This is pretty much how I use AI. I don't have it in my editor; I always use it in a browser window. I bounce ideas off it and use it like a better search engine, and even if I don't use the exact code it produces, I do feel there's some value.
>> AI is very good at fill-in-the-blank or draw-the-rest-of-the-owl.
This is definitely the sweet spot imo. Any time I've been able to come up with a solid, hand-coded pattern and have AI repeat it in several similar areas has been among the most rewarding experiences I've had with it.
By the time you've done that, why not repeat it yourself?
I once needed to define a large table of trigrams in C (for the purpose of distinguishing "English" from "not English"). Setting it up looked like this (rough sketch after the steps):
1. Download a book from Project Gutenberg.
2. Write something that went through the book character-by-character, and for every character, remembered the trigram ending there.
3. Make the form of "remembering" be "append a line of code to this file that says `array[char1][char2][char3] = 1`".
4. Now I have a really long C file that compiles into the dictionary I want.
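A minimal sketch of that generator, assuming an input file named book.txt and a 27-slot letter bucketing (both my assumptions, not necessarily the original setup):

    /* Reads the book character-by-character and emits one C assignment
       per trigram ending at each character (steps 2-3 above). */
    #include <ctype.h>
    #include <stdio.h>

    int main(void) {
        FILE *in = fopen("book.txt", "r");
        if (!in) { perror("book.txt"); return 1; }

        puts("/* generated file: compiles into the lookup table */");
        puts("unsigned char trigram[27][27][27];");
        puts("void init_trigrams(void) {");

        int a = 0, b = 0, ch;
        while ((ch = fgetc(in)) != EOF) {
            /* bucket letters into 1..26, everything else into 0 */
            int c = isalpha(ch) ? tolower(ch) - 'a' + 1 : 0;
            /* duplicate assignment lines are redundant but harmless */
            printf("    trigram[%d][%d][%d] = 1;\n", a, b, c);
            a = b;
            b = c;
        }
        puts("}");
        fclose(in);
        return 0;
    }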
It sounds to me kind of like you want to replace step 3 with an LLM. If that's right... what value is the LLM adding?
Concretely, I was creating a CLI application. I had implemented a few commands end-to-end and established some solid patterns. I used Codex (i.e. the PR-creating flavor) to provide instructions, got it to review the existing patterns before continuing, and asked it to rigorously follow them. I had to do ~10 more things and it worked really well. It was easy for me to review and understand because I already knew the pattern, and it seemed easy for it to get right.
It worked so well that I am always trying to look for opportunities like this, but honestly, it isn't that common. Many times you aren't repeating a pattern - you are creating a new one. In those situations, chatting with AI to get ideas and come up with an approach seems more effective to me.
Haven’t used Ghostty, but why is HN putting it on the front page every other week? What’s the main attraction to it?
This isn’t a “ghostty” post.
This is a post by a hugely respected developer (due to his past & current accomplishments), writing about his experiences in how to best leverage an AI coding assistant.
Product merits aside… its developer is a celebrity figure, and people here are captivated by the story of a billionaire who writes open source without a business model, like we regular folks do when we don’t have a hustle to focus on.
Today, for the first time ever, I had to kill a terminal with all the tabs because it became unresponsive.
Welp, that explains it. I haven't changed terminal in a while anyway...
I think as long as a human audit passes, it's good. I also generated some pretty great code before, but I went in to review every single line to make sure.
This is exactly what I hoped for when someone talks about their LLM-enabled coding experience:
- language
- product
- level of experience / seniority
> You can see in chats 11 to 14 that we're entering the slop zone. The code the agent created has a critical bug, and it's absolutely failing to fix it. And I have no idea how to fix it, either.
This definitely relaxes my AI-hype anxiety.
People are really bad at evaluating whether AI speeds them up or slows them down. The main question is: do you enjoy this kind of process of working with AI? I personally don't, so I don't use it. It's hard for me to believe any claims about productivity gains.
This is the crux of the discussion. For me the output is a greater reward than the input. The faster I can reach the output the better.
And to clarify, I don't mean output as "this feature works, awesome", but "this feature works, it's maintainable and the code looks as beautiful as I can make it"
I like it when it works but I literally had to take a break yesterday due to the rage I was feeling from Claude repeatedly declaring "I found it!" or "Perfect - found the issue!" before totally breaking the code.
Despite people yelling, vibe coding isn’t going away.
Developers whining that they can do it better and that it will ruin software forget that business always goes with the “good enough” option.
Human receptionists were far better than IVRs and “press 3 for” automation, but it was cheaper and good enough. Now very few companies have human phone operators.
SDEs need to come to terms with the fact that the value associated with human coding has been degraded. Salaries don’t just go up forever. Human SDEs aren’t going away, and 10x engineers will still be very valuable, but overall the perceived value of SDEs and their comp as a job family will likely start to slide rapidly from this point forward.
> Human receptionists were far better than IVRs and “press 3 for” automation, but it was cheaper and good enough.
Good enough? Depends on who you ask. It was and is clearly terrible for the users (customers). People hate to navigate huge, invisible menus by slowly pressing buttons or speaking single words very clearly, but the business doesn't really care about that and deems it "worth it" for the money saved.
This is a pattern one can find all over the place: make something worse but cheaper, as long as someone else, not the business, bears the cost.
> Despite people yelling
The "people yelling" are overwhelmingly pro-AI. I find the other side to be relatively quiet.
Before you accuse people of being paid shills it's useful to consider the reputations of the people you are accusing.
(Plus Mitchell certainly doesn't need the money: https://en.wikipedia.org/wiki/HashiCorp )