The case the article tries to make doesn’t stack up for me.
What you get when it becomes easier to generate code/applications is a whole lot more code and a whole lot more noise to deal with. Sure, some of it is going to be well crafted – but a lot of it will not be.
It’s like the mobile app stores. Once these new platforms became available, everyone had a go at building an app. A small portion of them are great examples of craftsmanship – but there is an ocean of badly designed, badly implemented, trivial, and copycat apps out there as well. And once you have this type of abundance, it creates a whole new class of problems for the users but potentially also developers.
The other thing is, it really doesn’t align with the priorities of most companies. I’m extremely skeptical that any of them will suddenly go: “Right, enough of cutting corners and tech debt, we can really sort that out with AI.”
No, instead they will simply direct extra capacity towards new features, new products, and trying to get more market share. Complexity will spiral, all the cut corners and tech debt will still be there, and the end result will be that things will be even further down the hole.
Unless I’m totally misreading the article they are saying what you’re saying and then taking it and using it as an argument for why we should care about quality. They aren’t saying quality will necessarily happen. They are saying that because there will be a whole lot more noise it will be important to focus on quality and those who don’t will drown in complexity.
I don't know if it's been documented or studied, but the availability argument seems like a fallacy. Lowering the barrier just opens the floodgates and you get 90% low-effort attempts and not much more. The old world, where the barrier was higher, guaranteed that only interesting things would happen.
It seems there's some kind of corollary between what you're saying and when (in the US) we went from three major television networks to many cable networks or, later, when streaming video platforms began to proliferate and take hold -- YouTube, Netflix, etc.: the barriers to entry dropped for creators, and the market fragmented. There is still quality creative content out there, some of it as good as or better than ever. But finding it, and finding people to share the experience of watching it with you, is harder.
Same could be said of traditional desktop software development and the advent of web apps I suppose.
I guess I'm not that worried, other than being worried about personally finding myself in a technological or cultural eddy.
I wrote this many years ago, when I moved from Symbian (with very few apps available) to Android, which had a lot of apps but required spending several hours to find a half-decent one.
The "Emerging" diagram on the Agentic Engineering page [1] linked from this post is the first time I've seen the exact thing I think of when people start frothing about fleets of agents driving fleets of agents while just assuming that multiplying all those <= 1.0s together will somehow automatically create 1.0.

[1] https://zed.dev/agentic-engineering
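To make that point concrete: if each agent in a chain succeeds with some probability at most 1.0, chaining them multiplies those probabilities, so end-to-end reliability can only fall. A toy sketch (the 0.95 figure is purely an illustrative assumption):

```python
# End-to-end success of a pipeline where every one of n independent
# steps must succeed with probability p: it is p ** n, never higher.
def chain_success(p: float, n: int) -> float:
    return p ** n

# Even a 95%-reliable agent compounds badly across a 10-step chain:
print(round(chain_success(0.95, 10), 3))  # roughly 0.599
```

The multiplication only reaches 1.0 if every single link is already perfect, which is the assumption being questioned.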
The article is basically: we sell tools that help you vibe, but we had to do craft, hard and long.
That ship has sailed, because everyone wants to sell what you're selling. Craftsmanship is a blah blah ... maybe, but it's being sacrificed for profit, and the profit happens because everyone wants to vibe. ;)
I think the key question is what new or unique problems we need to solve. Unique problems demand unique designs and implementations that can't be handled by vibe coding; they require software craftsmanship. That said, AI-assisted coding should still improve our productivity, thereby reducing the average number of engineers per project. Hopefully the Jevons paradox will come into play, but that's not really a technical problem but a business one.
Can you run out of new and unique problems without running out of economic innovation? If there are no new business processes, no new features or products that require new complexity, how is there room for new entrants? You either get a race to the bottom, commodity prices for commodity goods; or market and regulatory capture and artificially high prices and barriers to entry.
Where is the room for "vibe coder" - or really any of the business people that would hypothetically be using agents to "just write their own code"?
Teams and tools will certainly look different. But people crave novelty which is a big part of why I've never been on an engineering team that was staffed enough to come even close to the output the business owners and product teams desired. So I don't think we'll actually hit the end of the road there.
--
In a true-super-human-AGI world things look very different, but then there's one question that matters more than any others: where is the money coming from to build and maintain the machines? The AGI agent that can replace an engineer can actually replace an entire company - why go to a vibe-coding entrepreneur and let them take a cut when you can just explain what you need to an agent yourself? The agent will be smarter than that entrepreneur, definitionally.
> Can you run out of new and unique problems without running out of economic innovation?
Probably not, but the numbers may vary. Case in point: we have a booming chip industry now, but the demand for EE graduates is still far less than that for CS graduates, even under the current tough market conditions.
I like the article. In fact just yesterday I quipped to someone about how the quality of AI output will be determined by the competence of its "operators".
I have always had a strong drive to produce awe-inspiring software. At every job I have had, I have strived for usability, aesthetics, performance, and reliability. I have maintained million-LoC codebases and never caused a major issue, while people around me kept creating more problems than they fixed. However, I do not recall one single word of encouragement, let alone praise. I do recall being lectured a few times for being fixated on perfectionism.
It took me 15 years to realize that consumers themselves do not care about quality. Until they do, the minority of engineers who strive for it are gonna waste their efforts.
Yes, software is complex. Yes, you cannot compare software engineering to mechanical or electrical engineering, or to architecture. But do we deserve the absolute shit show that every app, website, and device has become?
> It took me 15 years to realize that consumers themselves do not care about quality. Until they do, the minority of engineers who strive for it are gonna waste their efforts.
N=1 but I would personally pay $15 for a higher quality garden hose valve when the competition is <$10, but I typically can't because I have no way of knowing which one of the products marketed at me is the higher quality one.
The worst case scenario for me is paying $15 for a bad quality product.
It's not that consumers don't want quality, it's that quality is hard to market.
I don’t think it’s true that “consumers don’t care about quality” but rather that their concern for quality doesn’t really manifest itself in those terms. Consumers care about critical tools being available when they need them and businesses often have a hard time situating feature requests in the broader desire for utility and stability (in part because these are things only noticed when things are really bad).
Part of my growth as a developer was connected with realizing that a lot of the issues with quality resulted from miscommunication between “the business” and engineers advocating for quality.
agree that we (users, humans, customers) all are desperately reaching for something steady, well designed, rugged.
something that people thought about for longer than whatever the deadline they had to work on it. something that radiates the care and human compassion they put into their work.
we all want and need this quality. it is harder to pull off these days for a lot of dumb reasons. i wish we all cared enough to not make it so mysterious, and that we could all rally around it, celebrate it, hold it to high regard.
Consumers don't care about your code, only what it does for them. If your crappy software provides its intended service quickly, accurately, and reliably enough, your customers consider it a win. Any further improvements on those axes are just gravy -- expensive gravy.
No. Consumers don't care about what they get. Maybe a small minority of tech-savvy ones do, but others don't, or at least they don't know how to demand better software, because software comes with no promises and no guarantees.
Crafting code and designing applications is the fun part of what I do. I do it whether or not someone pays me to. Why would I want to hand that over to an AI app? If I took up painting as a hobby, why would I want a robot to paint for me?
You might want a robot to prepare paints for you and put them into convenient containers, to prepare and trim brush hairs put them in a ferrule onto a stick, prepare canvas or other substrates, cut frame pieces at precise and repeatable angles, or wash/clean up.
That would leave you to the creative part of painting, while removing some of the more mechanical and less creative parts.
Particularly anything where the target audience is also highly technical: you can't just throw consumeristic shovelware at folks. They are going to notice!
It's challenging & fun to build things where your audience is gonna form their own technical impressions of how well crafted your thing is! It's fun for engineering to matter!
We've come a long way from the days of software that does one thing well. Back in the days of desktop apps, craftsmanship was high: VLC, various classic Mac OS apps, etc.
Once the web took over, people took quality for granted, since you can easily ship a fix (well, those never happen). Then came "move fast and break things." Software has been shoddy since. Yes, you can make excellent software on web technologies, so the web itself isn't the problem.
One of the more exciting aspects of LLM-aided development for me is the potential for high quality software from much smaller teams.
Historically engineering teams have had to balance their backlog and technical debt, limiting what new features/functionality was even possible (in a reasonable timeframe).
If you squint at the existing landscape (Claude Code, o3, codex, etc.) you can start to envision a new quality bar for software.
Not only will software used by millions get better, but the world of software for 10s or 1000s of users can now actually be _good_, with much less effort.
Sure we’ll still have the railroad tycoons[0] of the old world, but the new world is so so vast!
> LLMs are trained on poor code quality and as a result, output poor code quality.
This is an already outdated take. Modern LLMs use synthetic data, and coding models specifically use generate -> verify loops. Recent tools like context7 also help guide LLMs toward modern libraries, even ones outside the training cutoff.
> In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
This is reminiscent of "AI will never do x, because it doesn't do x now" of the gpt-3.5 era. Oh, look, it's so cute that it can output something that looks like python, but it will never get programming. And yet here we are.
There's nothing special about security. Everything that works for coding / devops / agentic loops will work for security as well. If anything, the absolute bottom line will rise with LLM-assisted stacks. We'll get "smarter" Wapitis / Metasploits, and agentic autonomous scanners and verifiers. Instead of SIEMs missing 80% of attacks [0] while also inundating monitoring consoles with unwanted alerts, you'll get verified reports where a Codex/Claude/Jules will actually test and provide a PoC for each report it makes.
I think we've seen this "oh, but it can't do this so it's useless" plenty of times in the past 2 years. And each and every time we got newer, better versions. Security is nothing special.
I am so much out of sync with this idea that a text editor must be blazingly fast. The latency of processing my input was never an issue for me in text editors unless it was an obvious misbehaviour due to a bug or something. And 120Hz text rendering is a thing that I couldn't care less about.
I've seen people, and even been one myself, for whom screen latency could limit raw text-processing speed. We were well under 25 years old at the time, though, using very low-level languages, and after 30 I never really felt that rendering was too slow for me.
With larger projects and more modern code this is simply not an issue, and hasn't been for decades for me at least.
There's a huge chasm between VSCode-with-all-kitchen-sinks and 120 Hz. "Never freezes for more than 300ms" is a very valid point on that spectrum, and nowhere near the need for GPU acceleration.
Instead of asking how we can ship more code or how we can ship better code, why not ask "how can AI give me a better life"? Machines are supposed to make our lives easier. If I can output the same quality at a faster rate of speed, why can't I have that time back to my own life now? This is the direction my view on agentic coding is evolving. I don't want to be under the pressure of doubling my productivity for an employer. I want to capture that gain for myself.
> If I can output the same quality at a faster rate of speed, why can't I have that time back to my own life now?
We have done a terrible job at allocating the net benefits of globalization. We will (continue to) do a terrible job at allocating the net benefits of productivity improvements.
The "why" is hard to answer. But one thing is clear: the United States has a dominant economic ideology, and the name of that ideology is not hardworker-ism.
> I don't want to be under the pressure of doubling my productivity for an employer. I want to capture that gain for myself.
Unfortunately this will never happen. I don't think it has ever happened in the history of capital.
When a machine can double your productivity, capital buys the machine for you and fires one of your coworkers
Or the machine makes it so less skilled people can do the same work you do. But since they're less skilled, they command less pay. So capital pays them less, and devalues your work
Already seeing this with AI. My employer is demanding all engineers start using LLM tools, citing that it is "an easy 20% boost". Not sure where they got the number but whatever.
But is everyone going to get a 20% raise? No. Not a chance. Capital will always try to capture 100% of any productivity gains for themselves
Honestly I feel AI has helped me be a better craftsman.
I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info. This would sometimes just not be worth adding that little feature or nice to have.
Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently. It gives me more time to spend figuring out the best thing for the user. More various ideas for how to tackle a problem.
I don't let it code willy nilly. I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.
I still write my own code. What I have, are a couple of LLM subscriptions (Perplexity and ChatGPT), that I regularly consult. I now ask them even the “silliest” questions.
The last couple of days, I tried having ChatGPT basically write an app for me, but it didn’t really turn out so well.
I look forward to being able to train an agent on my personal technique and style, as well as Quality bar, and have it write a lot of my code.
The question is: can an LLM actually power a true "agent" or can it just create a pretty decent simulation of one? When your tools are a bigger context window and a better prompt, are there some nails that are out of your capacity to hit?
We have made LLMs that need far less "prompt engineering" to give you something pretty-decent than they did 2 years ago. It makes them WAY more useful as tools.
But then you hit the wall like you mention, or like another poster on this thread saw: "Of course, it's not perfect. For example, it gave me some authentication code that just didn’t work." This happens to me basically daily. And then I give it the error and ask it to modify. And then that doesn't work. And I give it this error. And it suggests the previous failed attempt again.
It's often still 90% of the way there, though, so the tool is pretty valuable.
But is "training on your personal quality bar" achievable? Is there enough high-quality training data in the world that it can recognize as high-quality vs. low? Are the fundamentals of the prediction machine the right ones to be able to understand at generation time that "this is not the right approach for this problem," given the huge variety and complexity across so many different programming languages and libraries?
TBD. But I'm a skeptic about that because I've seen "output from a given prompt" improve a ton in 2 years, but I haven't seen that same level of improvement for "output after getting a really really good prompt and some refinement instructions". I have to babysit it less, so I actually use it day to day way more, but it hits the wall in the same sort of very similar, unsurprising ways. (It's harder to describe than that - it's like a "know it when you see it" thing. "Ah, yes, there's a subtlety that it doesn't know how to get past because there are so many wrinkles in a particular OAUTH2 implementation, but it was so rare a case in the docs and examples that it's just looping on things that aren't working.")
(The personification of these things really fucks up the discussion. For instance, when someone tells me "no, it was probably just too lazy to figure out the right way" or "it got tired of the conversation." The chosen user-interface of the people making these tools really messes with people's perceptions of them. E.g. if LLM-suggested code that is presented as an in-line autocomplete by Copilot is wrong, people tend to be more like "ah, Copilot's not always that great, it got it wrong" but if someone asks a chatbot instead then they're much more likely to personify the outcome.)
I really don’t think training an agent on “your style” is the future. We’re more adaptable than the agents.
I think programming is a job people don't need to do anymore, and anyone who calls themselves a software engineer is now a manager of agents. Jira is the interface. Define the requirements.
Writing your own code will still be a thing. We’ll even call those people hackers. But it will be a hobby.
> I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info.
Research is how people learn. Or to learn requires research. Either way one wants to phrase it, the result is the same.
> Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently.
LLMs are statistical text (token) generators and highly sensitive to the prompt given. More importantly, in this context the effort once expended by a person doing research becomes at best an exercise in prompt refinement (if the person understands the problem context) or at worst an outsourcing of understanding (if they do not).
> I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.
Again, LLM algorithms strictly output statistically generated text derived from the prompt given.
LLMs do not "explain," as that implies understanding.
They do not "understand how they would go about tackling a problem," as that is a form of anthropomorphization.
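To illustrate the "statistical token generator" point above, here is a toy next-token sampler: softmax over logits, then a weighted random draw. Real models repeat exactly this step over tens of thousands of tokens; the numbers here are made up for illustration.

```python
import math
import random

def sample_next_token(logits: list[float], temperature: float = 1.0) -> int:
    """Softmax over logits, then sample an index: the core step an LM repeats."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

At very low temperature the distribution collapses onto the top logit; at higher temperatures small shifts in the logits (i.e. in the prompt) visibly change which token comes out, which is the prompt sensitivity being described.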
We can go on all day about how an LLM doesn't explain and doesn't actually think. In the end, though, I've found myself able to do things better and faster, especially in a codebase I have no experience with, working alongside developers who aren't able to help me in the moment given our timezone differences.
I'm a .NET developer working on backend for the last 7 years. I used to work with WinForms, WebForms, and ASP.NET MVC 5. Lately, I've been wanting to get back into frontend development using Blazor.
I have GitHub Copilot for $10, and I've been "vibe coding" with it. I asked the AI to give me an example of a combo box, and it gave me something to start with. I used a Bootstrap table and asked the AI to help with pagination, and it provided workable code. Of course, to a seasoned frontend developer what I'm doing is simple, but I haven't worked on frontend in so long that vibing with AI has been a good experience.
Of course, it's not perfect. For example, it gave me some authentication code that just didn’t work. Still, I’ve found AI to be like a “smart” Google; it doesn’t judge me or tell me my question is a duplicate like Stack Overflow does.
I want to like Zed. I keep trying it. Ultimately, though, none of the rendering speed improvements or collaboration ideas make a difference to me. Then there are major feature gaps, like not being able to open Jupyter notebooks or view images when connected over SSH, that keep bringing me back to VS Code, where everything just works out of the box or with an extension. The Zed developers have great craftsmanship, but they are also tasked with reimplementing a whole ecosystem.
And ultimately, I think native performance just keeps becoming less and less of a draw as agents write most of the code and I spend my time reviewing it, where web tools are more than adequate.
I want craftsmanship to be important, as someone who takes pride in their work and likes nice tools. I just haven’t seen evidence of it being worth it over “good-enough and iterate fast” engineering. I don’t think this vision of engineering will win out over “good enough and fast.”
> I just haven’t seen evidence of it being worth it over “good-enough and iterate fast” engineering.
Aren't things bound to come to a point where quality is a defining feature of certain products? Take video game software, for example. The amount of polish and quality that goes into good-selling games is insane. The video game market is so saturated that the things that come out on top must have a high level of polish and quality put into them.
Another thought experiment: imagine thousands of startups creating task managers. I can't imagine that the products with strong engineering fundamentals wouldn't end up on top. To drive this point even further: despite the rise of AI, I don't think I've seen even one example of a longstanding company being disrupted by an "AI native," "agentic first" company.
I think Zed has the potential to become a good editor some day, and it might be the only editor with that potential. But yes, right now VS Code is more acceptable.
> I want to like zed. I keep trying it. ... Ultimately though none of the rendering speed improvements or collaboration ideas make a difference to me.
I feel this way as well. I've tried to incorporate Zed into my workflow a few times but I keep getting blocked by 30 years of experience with Emacs. E.g. I need C-x 2 to split my window. I need C-x C-b to show me all my buffers. I need a dired mode that can behave like any ordinary buffer. Etc. etc.
Sadly the list is quite long and while Zed offers many nice things, none are as essential to me as these.
> I don’t think this vision of engineering will win out over “good enough and fast”
Oh, I'm sure of it. However, that won't be good enough for the MBAs. My prediction is that AI slopware is going to drive the average quality of software back down to the buggy, infuriating, 1000-manual-workarounds software of the late '90s and early '00s.
As long as I've followed the software industry there's always been people saying "blah blah craftsmanship blah". The fact is, people don't care about craftsmanship. You can't see the code of most of the software you use and even when you can (FOSS), who's actually looking?
Also, the few times I've used "handcrafted" software, it's been underwhelming in terms of functionality, apart from a few FOSS programs that have thousands of contributors.
Everything you use every day depends on a few core libraries and services that are truly crafted by some competent people. Take your browser and all the various fantastic libraries it uses to decode and display content to you. How about cryptography libraries like OpenSSL? Linux itself, I would consider the result of craftsmanship, same with most GNU software. You're really, really, really missing a lot here with your evaluation.
You're right, people don't care about craftsmanship, until their vibe coded TLS implementation causes their government to track them down and have them executed. Then, suddenly, it matters.
People don't "care" about material science either. Without it, we'd be screwed.
Yes, I did mention that there are a few handcrafted FOSS projects that work because of lots of contributors, or at least lots of eyes (I guess I should have also mentioned corporate maintainers). Foundational libraries, the Linux kernel, stuff like that.
But on the whole, a lot of software (mostly user facing) is pretty bloated, filled with bugs and no one cares.
The people who care about craftsmanship, trust that you care.
When friends and family ask me about which software to buy/use/install, I recommend them something that I think is well crafted. They ask me, because they know I care.
Trivially, fewer interesting things also happen when the barrier is, to some degree, incidental to quality.
I think the more pressing issues are costs: opportunity cost, sunk cost, signal to noise ratio.
Increasing energy input to a closed system increases entropy.
Why on earth do people expect that attaching GPU farms to render characters into their codebase will not only not increase its entropy, but lower it?
> No, instead they will
The article is making a normative argument. It is not saying what people "will" do but instead what they "should" do.
It's Carlyle's idea of "the cheap and nasty" in the age of software.
I think the key question is what new or unique problems we need to solve. Unique problems demand unique designs and implementations, which can't be threatened by vibe coding, which requires software craftsmanship. That said, AI-assisted coding should still improve our productivity, therefore reducing the average number of engineers per project. Hopefully Jevon's Paradox will come into play, but that's really not a technical problem but a business one.
Can you run out of new and unique problems without running out of economic innovation? If there are no new business processes, no new features or products that require new complexity, how is there room for new entrants? You either get a race to the bottom, commodity prices for commodity goods; or market and regulatory capture and artificially high prices and barriers to entry.
Where is the room for "vibe coder" - or really any of the business people that would hypothetically be using agents to "just write their own code"?
Teams and tools will certainly look different. But people crave novelty which is a big part of why I've never been on an engineering team that was staffed enough to come even close to the output the business owners and product teams desired. So I don't think we'll actually hit the end of the road there.
--
In a true-super-human-AGI world things look very different, but then there's one question that matters more than any others: where is the money coming from to build and maintain the machines? The AGI agent that can replace an engineer can actually replace an entire company - why go to a vibe-coding entrepreneur and let them take a cut when you can just explain what you need to an agent yourself? The agent will be smarter than that entrepreneur, definitionally.
> Can you run out of new and unique problems without running out of economic innovation?
Probably not, but the numbers may vary. Case in point, we have a booming chip industry now, but the demand for EE graduates is still far less than that for CS graduates, even under the current tough market condition.
I like the article. In fact just yesterday I quipped to someone about how the quality of AI output will be determined by the competence of its "operators".
I have always had a strong drive to produce awe-inspiring software. At every job I have had, I have strived for usability, aesthetics, performance, and reliability. I have maintained million-LoC codebases and never caused a major issue, while people around me kept creating more problems than they fixed. However, I do not recall one single word of encouragement, let alone praise. I do recall being lectured a few times for being fixated on perfectionism.
It took me 15 years to realize that consumers themselves do not care about quality. Until they do, the minority of engineers who strive for it are gonna waste their efforts.
Yes software is complex. Yes, you cannot compare software engineering to mechanical, electrical engineering or architecture. But do we deserve the absolute shit show that every app, website and device has become?
> It took me 15 years to realize that consumers themselves do not care about quality. Until they do, the minority of engineers who strive for it are gonna waste their efforts.
N=1 but I would personally pay $15 for a higher quality garden hose valve when the competition is <$10, but I typically can't because I have no way of knowing which one of the products marketed at me is the higher quality one. The worst case scenario for me is paying $15 for a bad quality product.
It's not that consumers don't want quality, it's that quality is hard to market.
I don’t think it’s true that “consumers don’t care about quality” but rather that their concern for quality doesn’t really manifest itself in those terms. Consumers care about critical tools being available when they need them and businesses often have a hard time situating feature requests in the broader desire for utility and stability (in part because these are things only noticed when things are really bad).
Part of my growth as a developer was connected with realizing that a lot of the issues with quality resulted from miscommunication between “the business” and engineers advocating for quality.
agree that we (users, humans, customers) all are desperately reaching for something steady, well designed, rugged.
something that people thought about for longer than whatever the deadline they had to work on it. something that radiates the care and human compassion they put into their work.
we all want and need this quality. it is harder to pull off these days for a lot of dumb reasons. i wish we all cared enough to not make it so mysterious, and that we could all rally around it, celebrate it, hold it to high regard.
Consumers don't care about your code, only what it does for them. If your crappy software provides its intended service quickly, accurately, and reliably enough, your customers consider it a win. Any further improvements on those axes are just gravy -- expensive gravy.
No. Consumers don't care about what they get. Maybe a small minority of tech-savvy ones do, but the rest don't, or at least they don't know how to demand better software, because software comes with no promises and no guarantees.
Exactly and that's why we're here - against all odds you do your best to make things right - that's THE job.
We can imagine the electricity failing but AI can't because it would be like us imagining the sun failing to shine
Crafting code and designing applications is the fun part of what I do. I do it whether or not someone pays me to. Why would I want to hand that over to an AI app? If I took up painting as a hobby, why would I want a robot to paint for me?
You might want a robot to prepare paints for you and put them into convenient containers, to prepare and trim brush hairs put them in a ferrule onto a stick, prepare canvas or other substrates, cut frame pieces at precise and repeatable angles, or wash/clean up.
That would leave you to the creative part of painting, while removing some of the more mechanical and less creative parts.
Particularly anything where the target audience is also highly technical: you can't just throw consumeristic shovelware at folks. They are going to notice!
It's challenging & fun to build things where your audience is gonna form their own technical impressions of how well crafted your thing is! It's fun for engineering to matter!
We've come a long way from the days of software that does one thing well. Back in the days of desktop apps, craftsmanship was high: VLC, various Mac OS apps, etc.
Once the web took over, people took quality for granted since you can easily ship a fix (well, those never happen). Then came move fast & break things. Software has been shoddy since. Yeah, you can make excellent software on web technologies, so the web isn't the problem.
it's US.
One of the more exciting aspects of LLM-aided development for me is the potential for high quality software from much smaller teams.
Historically engineering teams have had to balance their backlog and technical debt, limiting what new features/functionality was even possible (in a reasonable timeframe).
If you squint at the existing landscape (Claude Code, o3, codex, etc.) you can start to envision a new quality bar for software.
Not only will software used by millions get better, but the world of software for 10s or 1000s of users can now actually be _good_, with much less effort.
Sure we’ll still have the railroad tycoons[0] of the old world, but the new world is so so vast!
[0]https://www.reddit.com/r/todayilearned/s/zfUX8StpXM
If Sturgeon’s Law holds (and I see no reason it wouldn’t) we won’t get better software, we’ll get more shit, faster.
10% of a large pie is more than 10% of a small pie (:
> One of the more exciting aspects of LLM-aided development for me is the potential for high quality software
There is no evidence to suggest this is true.
LLMs are trained on poor code quality and as a result, output poor code quality.
In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
LLMs are great, but the potential for high quality software is not one of the selling points.
> LLMs are trained on poor code quality and as a result, output poor code quality.
This is an already outdated take. Modern LLMs use synthetic data, and coding specifically uses generate -> verify loops. Recent stuff like context7 also help guide the LLMs towards using modern libs, even if they are outside the training cut-off.
> In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
This is reminiscent of "AI will never do x, because it doesn't do x now" of the gpt-3.5 era. Oh, look, it's so cute that it can output something that looks like python, but it will never get programming. And yet here we are.
There's nothing special about security. Everything that works for coding / devops / agentic loops will work for security as well. If anything, the absolute bottom line will rise with LLM-assisted stacks. We'll get "smarter" Wapitis / Metasploits, and agentic autonomous scanners and verifiers. Instead of SIEMs missing 80% of attacks [0] while also inundating monitoring consoles with unwanted alerts, you'll get verified reports where a Codex/Claude/Jules will actually test and provide a PoC for each report they make.
I think we've seen this "oh, but it can't do this so it's useless" plenty of times in the past 2 years. And each and every time we got newer, better versions. Security is nothing special.
[0] - https://www.darkreading.com/cybersecurity-operations/siems-m...
I am so much out of sync with this idea that a text editor must be blazingly fast. The latency of processing my input was never an issue for me in text editors unless it was an obvious misbehaviour due to a bug or something. And 120Hz text rendering is a thing that I couldn't care less about.
I've seen people, and even been one myself, for whom screen latency could be a problem for raw text-processing speed. We were well under 25 years old at the time, though, using very low-level languages, and after 30 I never really felt that the rendering was too slow for me.
With larger projects and more modern code this is simply not an issue, and hasn't been for decades for me at least.
If you are a 10x developer coding assembly, sure?
In software like VSCode the milliseconds stack up fast if you're switching between projects constantly and/or doing any kind of remote development.
There's a huge chasm between VSCode-with-all-kitchen-sinks and 120 Hz. "Never freezes for more than 300ms" is a very valid point on that spectrum, and nowhere near the need for GPU acceleration.
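To put rough numbers on that spectrum, a quick back-of-envelope sketch (nothing editor-specific, just refresh-rate arithmetic):

```python
# Per-frame time budget at common refresh rates, and how many
# frames a 300 ms freeze would span at 120 Hz.
def frame_budget_ms(hz):
    return 1000 / hz

for hz in (60, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")

dropped = 300 / frame_budget_ms(120)
print(f"a 300 ms freeze spans {dropped:.0f} frames at 120 Hz")  # 36 frames
```

So "never freezes for more than 300 ms" and "hits every 8.3 ms frame" really are different goals by more than an order of magnitude.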
Instead of asking how can we ship more code or how can we ship better code, why not "how can AI give me a better life"? Machines are supposed to make our lives easier. If I can output the same quality at a faster rate of speed, why can't I have that time back to my own life now? This is how my view on agentic coding is evolving toward. I don't want to be under the pressure of doubling my productivity for an employer. I want to capture that gain for myself.
> If I can output the same quality at a faster rate of speed, why can't I have that time back to my own life now?
We have done a terrible job at allocating the net benefits of globalization. We will (continue to) do a terrible job at allocating the net benefits of productivity improvements.
The "why" is hard to answer. But one thing is clear: the United States has a dominant economic ideology, and the name of that ideology is not hardworker-ism.
You need to own parts of the company you work at for that to happen.
> I don't want to be under the pressure of doubling my productivity for an employer. I want to capture that gain for myself.
Unfortunately this will never happen. I don't think it has ever happened in the history of capital
When a machine can double your productivity, capital buys the machine for you and fires one of your coworkers
Or the machine makes it so less skilled people can do the same work you do. But since they're less skilled, they command less pay. So capital pays them less, and devalues your work
Already seeing this with AI. My employer is demanding all engineers start using LLM tools, citing that it is "an easy 20% boost". Not sure where they got the number but whatever.
But is everyone going to get a 20% raise? No. Not a chance. Capital will always try to capture 100% of any productivity gains for themselves
>"how can AI give me a better life"?
that's an easy one: by destroying itself, and taking social media and smartphones with it
Honestly I feel AI has helped me be a better craftsman. I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info. This would sometimes just not be worth adding that little feature or nice to have.
Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently. It gives me more time to spend figuring out the best thing for the user. More various ideas for how to tackle a problem.
I don't let it code willy nilly. I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.
I still write my own code. What I have, are a couple of LLM subscriptions (Perplexity and ChatGPT), that I regularly consult. I now ask them even the “silliest” questions.
The last couple of days, I tried having ChatGPT basically write an app for me, but it didn’t really turn out so well.
I look forward to being able to train an agent on my personal technique and style, as well as Quality bar, and have it write a lot of my code.
Not there, yet, but I could see it happening.
The question is: can an LLM actually power a true "agent" or can it just create a pretty decent simulation of one? When your tools are a bigger context window and a better prompt, are there some nails that are out of your capacity to hit?
We have made LLMs that need far less "prompt engineering" to give you something pretty-decent than they did 2 years ago. It makes them WAY more useful as tools.
But then you hit the wall like you mention, or like another poster on this thread saw: "Of course, it's not perfect. For example, it gave me some authentication code that just didn’t work." This happens to me basically daily. And then I give it the error and ask it to modify. And then that doesn't work. And I give it this error. And it suggests the previous failed attempt again.
It's often still 90% of the way there, though, so the tool is pretty valuable.
But is "training on your personal quality bar" achievable? Is there enough high-quality training data in the world that it can recognize as high-quality vs. low? Are the fundamentals of the prediction machine the right ones to be able to understand at generation time "this is not the right approach for this problem", given the huge variety and complexity across so many different programming languages and libraries?
TBD. But I'm a skeptic about that because I've seen "output from a given prompt" improve a ton in 2 years, but I haven't seen that same level of improvement for "output after getting a really really good prompt and some refinement instructions". I have to babysit it less, so I actually use it day to day way more, but it hits the wall in the same sort of very similar, unsurprising ways. (It's harder to describe than that; it's like a "know it when you see it" thing. "Ah, yes, there's a subtlety it doesn't know how to get past because there are so many wrinkles in a particular OAUTH2 implementation, but it was so rare a case in the docs and examples that it's just looping on things that aren't working.")
(The personification of these things really fucks up the discussion. For instance, when someone tells me "no, it was probably just too lazy to figure out the right way" or "it got tired of the conversation." The chosen user-interface of the people making these tools really messes with people's perceptions of them. E.g. if LLM-suggested code that is presented as an in-line autocomplete by Copilot is wrong, people tend to be more like "ah, Copilot's not always that great, it got it wrong" but if someone asks a chatbot instead then they're much more likely to personify the outcome.)
I really don’t think training an agent on “your style” is the future. We’re more adaptable than the agents.
I think programming is a job people don’t need to do anymore and anyone who called themselves a software engineer is now a manager of agents. Jira is the interface. Define the requirements.
Writing your own code will still be a thing. We’ll even call those people hackers. But it will be a hobby.
> I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info.
Research is how people learn. Or, learning requires research. Either way one phrases it, the result is the same.
> Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently.
LLMs are statistical text (token) generators and highly sensitive to the prompt given. More importantly in this context, the effort once expended by a person doing research becomes at best an exercise in prompt refinement (if the person understands the problem context) or at worst an outsourcing of understanding (if they do not).
> I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.
Again, LLM algorithms strictly output statistically generated text derived from the prompt given.
LLMs do not "explain", as that implies understanding.
They do not "understand how they would go about tackling a problem", as that is a form of anthropomorphization.
Caveat emptor.
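The "statistical token generator" point can be made concrete with a toy sampler. This is purely illustrative: a real LLM computes next-token probabilities with a learned transformer, not this hypothetical bigram table, but the sampling step is the same in spirit:

```python
import random

# Hypothetical bigram "model": counts of which token follows which.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
}

def next_token(prev, rng=random):
    """Sample the next token proportionally to observed frequency:
    statistically generated text derived from the context given."""
    counts = bigram_counts[prev]
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(next_token("the"))  # "cat" ~75% of the time, "dog" ~25%
```

Nothing in that loop explains or understands anything; it only ranks continuations, which is the parent's point.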
We can go on all day about how an LLM doesn't explain and doesn't actually think. In the end though, I've found myself being able to do things better and faster especially given a codebase I have no experience in with developers who aren't able to help me in the moment given our timezone differences
I'm a .NET developer working on backend for the last 7 years. I used to work with WinForms, WebForms, and ASP.NET MVC 5. Lately, I've been wanting to get back into frontend development using Blazor.
I have GitHub Copilot for $10, and I've been "vibe coding" with it. I asked the AI to give me an example of a combo box, and it gave me something to start with. I used a Bootstrap table and asked the AI to help with pagination, and it provided workable code. Of course, to a seasoned frontend developer what I'm doing is simple, but I haven't worked on frontend in so long that vibing with AI has been a good experience.
Of course, it's not perfect. For example, it gave me some authentication code that just didn't work. Still, I've found AI to be like a "smart" Google: it doesn't judge me or tell me my question is a duplicate like Stack Overflow.
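For what it's worth, the pagination behind a Bootstrap table boils down to simple slice arithmetic. A minimal sketch (hypothetical helper names, not the actual Blazor code the AI produced):

```python
def paginate(items, page, page_size):
    """Return one page of items plus the total page count.

    `page` is 1-based, matching typical Bootstrap pagination UIs.
    """
    total_pages = max(1, -(-len(items) // page_size))  # ceiling division
    page = min(max(page, 1), total_pages)              # clamp out-of-range pages
    start = (page - 1) * page_size
    return items[start:start + page_size], total_pages

rows = list(range(1, 26))              # 25 rows, page size 10 -> 3 pages
page_rows, pages = paginate(rows, 3, 10)
print(page_rows, pages)                # [21, 22, 23, 24, 25] 3
```

The same clamping and ceiling-division logic ports directly to a C# method on a Blazor component.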
+1. With greater power comes greater responsibility.
Power doesn't mean lack of craft, just different things to craft. E.g. we don't hand-roll assembly anymore.
Still have to know when you need to dive deep and how to approach that.
I want to like zed. I keep trying it.
Ultimately, though, none of the rendering speed improvements or collaboration ideas make a difference to me. Then there are major feature gaps, like not being able to open Jupyter notebooks or view images when connected over SSH, that keep bringing me back to VSCode-based editors where everything just works out of the box or with an extension. The developers have great craftsmanship, but they are also tasked with reimplementing a whole ecosystem.
And ultimately I think native performance just keeps being less and less of a draw as agents write most of the code and I spend time reviewing it where web tools are more than adequate.
I want craftsmanship to be important as someone who takes pride in their work and likes nice tools. I just haven’t seen evidence of it being worth it over “good-enough and iterate fast” engineering. I don’t think this vision of engineering will win out over “good enough and fast”
My biggest motivation to use Zed is that it's not by Microsoft, and so far hasn't strayed from the open source path.
Feature-wise, it's been close enough for my use. Some things are missing but other things were buggier with VSCode.
> I just haven’t seen evidence of it being worth it over “good-enough and iterate fast” engineering.
Aren't things bound to come to a point where quality is a defining feature of certain products? Like take video game software for example. The amount of polish and quality that goes into good selling games is insane. There video game market is so saturated that the things that come up on top must have a high level of polish and quality put into them.
Another thought experiment: imagine thousands of startups creating task managers. I can't imagine that the products with strong engineering fundamentals wouldn't end up on top. To drive this point even further: despite the rise of AI, I don't think I've seen even one example of a longstanding company being disrupted by an "AI native", "agentic first" company.
I think Zed has the potential to become a good editor some day, and it might be the only editor with that potential. But yes, right now VS Code is more acceptable.
> I want to like zed. I keep trying it. ... Ultimately though none of the rendering speed improvements or collaboration ideas make a difference to me.
I feel this way as well. I've tried to incorporate Zed into my workflow a few times but I keep getting blocked by 30 years of experience with Emacs. E.g. I need C-x 2 to split my window. I need C-x C-b to show me all my buffers. I need a dired mode that can behave like any ordinary buffer. Etc. etc.
Sadly the list is quite long and while Zed offers many nice things, none are as essential to me as these.
> don't think this vision of engineering will win out over "good enough and fast"
Oh, I'm sure of it. However, that won't be good enough for the MBAs. My prediction is that AI slopware is going to drive the average quality of software back down to the buggy, infuriating, 1000-manual-workarounds software of the late 90s and early 00s.
Then the pendulum will swing back.
As long as I've followed the software industry there's always been people saying "blah blah craftsmanship blah". The fact is, people don't care about craftsmanship. You can't see the code of most of the software you use and even when you can (FOSS), who's actually looking?
Also, the few times I've used "handcrafted" software, it's been underwhelming in terms of functionality, apart from a few FOSS programs that have thousands of contributors.
Life is short and hardware is relatively cheap.
Everything you use every day depends on a few core libraries and services that are truly crafted by some competent people. Take your browser and all the various fantastic libraries it uses to decode and display content to you. How about cryptography libraries like OpenSSL? Linux itself, I would consider the result of craftsmanship, same with most GNU software. You're really, really, really missing a lot here with your evaluation.
You're right, people don't care about craftsmanship, until their vibe coded TLS implementation causes their government to track them down and have them executed. Then, suddenly, it matters.
People don't "care" about material science either. Without it, we'd be screwed.
Yes, I did mention that there's a few FOSS projects that are handcrafted that work because of lots of contributors or at least eyes (I guess I should have also mentioned corporate maintainers). Foundational libraries, Linux kernel, stuff like that.
But on the whole, a lot of software (mostly user facing) is pretty bloated, filled with bugs and no one cares.
The people who care about craftsmanship, trust that you care.
When friends and family ask me about which software to buy/use/install, I recommend them something that I think is well crafted. They ask me, because they know I care.
So what do you recommend?