Comment by mrklol
13 hours ago
imo a question is, do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?
> What if that process changes and the language you’re reading is a natural one instead of code?
Okay, when that happens, then sure, you don't need to understand the codebase.
I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.
When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.
The same logic applies to your statement:
> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.
Okay, when that happens, then sure, you'll have a problem.
I have not seen any evidence that that is currently the case, i.e. I have no problems correcting LLM output when needed.
When the situation changes, then we can talk about pulling back on LLM usage.
And the crucial point is: me.
I'm not saying that everyone who uses an LLM to generate code will avoid ending up unable to spot errors in LLM-generated code.
I now generate 90% of my code with an LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.
You do have a point but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.
I quite enjoy being much more of an architect than I could be for 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output, ranging from the trivially easy to catch to those needing an hour of careful review.
So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.
What happens when your LLM of choice goes on an infinite loop failing to solve a problem?
What happens when your LLM provider goes down during an incident?
What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?
What happens when the LLM providers stop offering loss leader subscriptions?
AFAIK everything I use has timeouts, retries, and some way of throwing up its hands and turning things back to me.
I use several providers interchangeably.
I stay away from overly complex distributed systems and use the simplest thing possible.
I plan to wait for some guys in China to train a model on traces that I can run locally, benefitting from their national “diffusion” strategy and lack of access to bleeding-edge chips.
I’m not worried.
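The timeout/retry/fallback posture described above can be sketched roughly as follows. This is purely illustrative: the provider names and `call_provider` function are hypothetical stand-ins, not any real SDK's API.

```python
import time

# Hypothetical stand-in for a real provider API call -- illustrative only.
def call_provider(name: str, prompt: str) -> str:
    # A real client would make a network request here and may raise
    # TimeoutError on a hung or failed request.
    return f"[{name}] response to: {prompt}"

def generate_with_fallback(prompt: str, providers, max_retries: int = 2) -> str:
    """Try each provider in turn, retrying transient failures with backoff.
    If every provider fails, 'throw up its hands' and hand back to the human."""
    for name in providers:
        for attempt in range(max_retries):
            try:
                return call_provider(name, prompt)
            except TimeoutError:
                time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError("All providers failed; turning things back to you.")

print(generate_with_fallback("fix the bug", ["provider_a", "provider_b"]))
```

The point of the sketch is the structure, not the details: per-call timeouts and retries absorb transient failures, and interchangeable providers mean a single outage doesn't block work.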
> What if that process changes and the language you’re reading is a natural one instead of code?
Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and it has to be converted into a formal spec, which takes time and is error-prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open-eyed because fuck the lessons of software engineering.
I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.
> But it seems we are introducing a whole new layer of lossy interpretation to the whole mess (...)
I recommend you get acquainted with LLMs and code assistants, because a few of your assertions are outright wrong. Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring use cases, functional requirements, and refined tasks in the form of an actionable plan.
Then you feed that plan to an LLM assistant and your feature is implemented.
I seriously recommend you check it out. This process is far more structured and thought through than any feature work that your average SDE ever does.
> I recommend you get acquainted with LLMs and code assistants
I use them daily, thanks for your condescension.
> I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.
Did you read this part of my comment?
> Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.
I'm not criticizing spec-driven development frameworks, but how battle-tested are they? Do they remove the inherent ambiguity of natural language? And do you believe this is how most people are vibe-coding, anyway?