Comment by 7777332215
5 hours ago
Yup. I worked very hard, and for many years, to acquire skill in designing and writing systems. It is an art. And it is very disheartening to see people without any skills behave the way they do. For now, the work I do cannot be replicated by these people, but I do not have such high hopes for the distant future. Though by the point it can truly be automated, I think it will be automating a large majority of non-physical jobs (and those, too, will likely be getting automated by then).
On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
> On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
I don't think it will be; a vibe coder using Gas Town will easily spit out 300k LoC for an MVP TODO application. Can you imagine what it will spit out for anything non-trivial?
How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There's a middle ground here that you're not considering (at least not in this short comment). Vibe coders will spit out a lot of nonsense because they don't have the skills to tweak the output of their agents (or choose not to). A well-seasoned developer using tools like Claude Code on such a codebase can, at this point, remediate a lot more quickly than someone not using any AI. The current best practice is akin to how a mathematician thinks about calculator use, rather than how a student trying to just pass a class does. Working in small chunks and understanding the output at every step is the best approach in some situations.
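To make the "small chunks" idea concrete, here is a minimal sketch using the anthropic Python SDK. The model id, prompt wording, and src/ layout are all illustrative assumptions, not a prescription:

```python
# Minimal sketch of the small-chunk approach: critique an inherited
# codebase one module at a time instead of dumping it all into one prompt.
# Assumes the `anthropic` package and ANTHROPIC_API_KEY in the environment.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()

def critique_module(path: Path) -> str:
    """Ask for a focused critique of one small chunk of code."""
    source = path.read_text()
    response = client.messages.create(
        model="claude-opus-4-5",  # assumed model id; use whatever you have
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Critique this module for maintainability problems "
                       "and propose at most three refactorings:\n\n" + source,
        }],
    )
    return response.content[0].text

for module in sorted(Path("src").rglob("*.py")):
    print(f"--- {module} ---")
    print(critique_module(module))  # a human reviews each critique before acting
```

The point is the shape of the loop, not the specific calls: one reviewable chunk in, one reviewable critique out, with a person in between.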
> How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There are cases where that will be the appropriate decision. That may not be every case, but it'll be enough cases that there's money to be made.
There will be other cases where just untangling the clusterfuck and coming up with any sense of direction at all, however it ends up being implemented, will be the key deliverable.
I have had several projects that look like this already in the VoIP world, and it's been very gainful. However, my industry probably does not compare fairly to the common denominator of CRUD apps in common tech stacks; some of it is specialised enough that the LLMs drop to GPT-2 type levels of utility (and hallucination! -- that's been particularly lucrative).
Anyway, the problem to be solved in vibe-coding remediation often has little to do with the code itself, which we can all agree can be generated in essentially infinite amounts at a pace that is, for all intents and purposes, almost instantaneous. If you are in need of vibe-coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
The general business problem to be solved is how to make this consumable to the business as a whole, which still moves at the speed of humans. I am fond of a metaphor I heard somewhere: you can't just plug a firehose into your house's plumbing and expect a fire hydrant's worth of water pressure out of your kitchen faucet.
In the same way, removing the barriers to writing 300,000 lines isn't the same as removing the barriers to operationalising, adopting and owning 300,000 lines in a way that can be a realistic input into a real-world product or service. I'm not talking about the really airy-fairy appeals to maintainability or reliability one sometimes hears (although those are very real concerns), but rather, how to get one's arms around the 300,000 lines from a product-direction perspective, other than by prompting one's way into even more slop.
I think that's where the challenges will be, and if you understand that challenge, especially in industry- and domain-specific ways (always critical for moats), I think there's a brisk livelihood to be made here in the foreseeable future. I make a living from adding deep specialist knowledge to projects executed by people who have no idea what they're doing, and LLMs haven't materially altered that reality in any way. Giving people who have no idea what they're doing a way to express that cluelessness in tremendous amounts of code, quickly, doesn't really solve the problem, although it certainly alters the texture of the problem.
Lastly, it's probably not a great time to be a very middling pure CRUD web app developer. However, has it ever been, outside of SV and certain very select, fortunate corners of the economy? The lack of moat around it was a problem long before LLMs. I, for example, can't imagine making a comfortable living in it outside of SV engineer inflation; it just doesn't pay remotely enough in most other places. Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating. Underappreciated specialist personalities will certainly see a return in a flight-to-quality environment.
Do you not fear that future/advanced AI will be able to look at a vibe-coded codebase and make sensible refactors itself?
That's my worry. Might be put off a few years, but still...
But it's already the present.
For what I'm vibe-coding, my normal work process is: build a feature until it works, get decent test coverage, then ask Claude to offer a code critique and propose refactoring ideas. I review them and decide which to implement. It is token-heavy but produces good, elegant codebases at the scales I'm working at for my side projects. I do this for every feature that is completed, and have it maintain design docs that document the software architecture choices made so far. It largely ignores them when vibing very interactively on a new feature, but it does help with the regular refactoring.
In my experience, it doubles the token costs per feature but otherwise it works fine.
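Roughly, the per-feature loop looks like this (a minimal sketch; the pytest command, the git diff, the prompt wording, and the model id are stand-in assumptions, not my exact setup):

```python
# Sketch of the "build until tests pass, then ask for a critique" step.
# Assumes pytest for tests and the `anthropic` SDK; adjust both to taste.
import subprocess

import anthropic

def feature_is_green() -> bool:
    """Only ask for refactoring ideas once the feature actually works."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

def request_critique(diff: str) -> str:
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-opus-4-5",  # assumed model id
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": "Here is the diff for a feature that now passes its "
                       "tests. Offer a code critique and propose refactoring "
                       "ideas, consistent with docs/architecture.md:\n\n" + diff,
        }],
    )
    return response.content[0].text

if feature_is_green():
    diff = subprocess.run(
        ["git", "diff", "main"], capture_output=True, text=True
    ).stdout
    print(request_critique(diff))  # I still pick which ideas to implement
```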
I have been programming since I was 7 - 40 years ago. Across all tech stacks, from barebones assembly through enterprise architecture for a large enterprise. I thought I was a decently good coder, programmer and architect. Now I find the code Claude/Opus 4.5 generates for me to be, in general, of higher quality than anything I ever made myself.
Mainly because it does things I'd be too tired to do, or would never bother with, because why expend energy on refactoring something that is perfectly working and not to be further developed.
Btw, it's a good teaching tool. Load a codebase or build one, and then have it describe the current software architecture, propose changes, explain their impact, and so on.
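For example, something like this (same caveats as the sketch above; concatenating the whole codebase into one prompt only works for small projects, otherwise you'd chunk or summarise first):

```python
# Sketch of the teaching-tool use: have the model explain an architecture.
from pathlib import Path

import anthropic

# Concatenate a small codebase into one prompt (fine for toy projects only).
code = "\n\n".join(
    f"# {p}\n{p.read_text()}" for p in sorted(Path("src").rglob("*.py"))
)

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-opus-4-5",  # assumed model id
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": "Describe the current software architecture of this "
                   "codebase, then propose changes and explain their "
                   "impact:\n\n" + code,
    }],
)
print(response.content[0].text)
```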
The amount of software needed and the amount being written are off by many orders of magnitude. It has been that way since software's inception and I don't see it changing anytime soon. AI tools are like having a junior dev to do your grunt work. Soon it will be like a senior dev. Then like a dev team. I would love to have an entire dev team to do my work. It doesn't change the fact that I still have plenty of work for them to do. I'm not worried AI will take my job; I will just be doing bigger jobs.
> Do you not fear that future/advanced AI will be able to look at a vibe-coded codebase and make sensible refactors itself?
This is a possibility in very well-trodden areas of tech, where the stack and the application are both banal to the point of being infinitely well-represented in the training data.
As far as anything with any kind of moat whatsoever? Here, I'm not too concerned.
Yes, I immediately see the need for the opposite: perfect, accurate, proven bug-free software. As long as there is AI, there will be AI slop.
> And it is very disheartening to see people without any skills behave the way they do.
The way they do, which is? I've skimmed the comments, and a lot of them are hate and hostility towards OP's project and coders "without skill" in general, plus denial, because there's no way anything vibe-coded could have worked. At best, there is strong tribalism on both ends.
There is definitely tribalism. I think a lot of the negativity comes from people who recognize the long-term goals of these companies, which are a threat to more than just tech workers. Right now, these models are a threat to people who worked hard and invested their time, while letting inexperienced or lazy people appear more competent. I think that less experienced developers (or people who don't care anymore, or maybe never did) see what an LLM can do and immediately believe it will solve all their problems, and that if you are not embracing this with full force you are going to be left behind.
You might see more opposing views in this thread, but if you browse this site often you'll see both sides.
Those embracing it heavily do not see the nuances: carefully creating maintainable solutions, planning for and recognizing tech debt, and knowing where it's acceptable short term. They are also missing the theory-building behind what is being created. Sure, AI models might get even better and could solve everything. But I think it's naive to think that will be generally good for 90% of the population, including people not in tech.
Using these models (text or image) devalues the work of everyone in more than one way. It is harmful to creative work and human expression.
This tech, and a lot of tech, especially the kind built by large corporations for profit extraction and human exploitation, is very unlikely to improve lives at a population level long term. The same can be said for a lot of tech (e.g. social media = powerful propaganda). The goal of the people creating these models is to not need humans for their work. At which point, I don't know what would happen. Kill the peasants?
I feel it's nice to use AI coding for side projects, especially after work when I'm kind of tired. The one issue is that if it gets stuck in a loop, or just does not get what is wrong and does the wrong thing no matter how you twist it, then you have to go into the weeds to fix it yourself, and that feels so tiresome. At that point I think: what if I had just done everything myself, so my mental model would be better?
Also, we are still designing systems and have to be able to define the problem properly. At least in my company, when we look at the velocity of delivering projects, it is barely up since AI, because the bottlenecks are elsewhere.
Why do people assume what is currently available is the ceiling, especially after the last 2-3 years of explosive growth?
Do you truly believe it won't get better, maybe even better at whole system design and implementation than people?
> Do you truly believe it won't get better, maybe even better at whole system design and implementation than people?
What are you calling "growth"? Adoption, or LLM progress? LLM progress has objectively slowed down, and for rather obvious reasons. The leaps from GPT-2 to GPT-4 can't be reprised forever.
It will get better, but the rate at which it does may not continue to be exponential. Past performance is not indicative of future results. While the agentic models seem to keep improving, I think LLMs as a whole have started seeing less and less benefit from the current scaling approaches.
I think what we currently have is pretty close to the ceiling for LLMs. But with the amount of money being spent, there might be a new breakthrough (not LLM-based).
I feel the same, but I moved up: I create full products and profit from them. You have great taste if you know what's behind it.
> It is an art. And it is very disheartening to see people without any skills behave the way they do
They've even got their own slogan: "you're probably just not prompting it properly"
> They've even got their own slogan: "you're probably just not prompting it properly"
That's the same energy as telling other professions to "just learn to code, bro" once they are displaced by AI.
But I guess it doesn't feel nice once the shoe is on the other foot, though. If nobody values the quality of human art, why should anybody value the quality of human code?
>That's the same energy as telling other professions to "just learn to code, bro" once they are displaced by AI. But I guess it doesn't feel nice once the shoe is on the other foot, though.
It's the exact same neoliberal elites who told everyone to code one year and told them they'd all be automated out of a job the next.
I dunno who exactly you think you're being condescending towards.