Comment by mrklol

6 hours ago

imo a question is, do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?

What happens when your LLM of choice goes on an infinite loop failing to solve a problem?

What happens when your LLM provider goes down during an incident?

What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?

What happens when the LLM providers stop offering loss leader subscriptions?

> What if that process changes and the language you’re reading is a natural one instead of code?

Okay, when that happens, then sure, you don't need to understand the codebase.

I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.

When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.

  • The same logic applies to your statement:

    > Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

    Okay, when that happens, then sure, you'll have a problem.

    I have not seen any evidence that that is currently the case, i.e. I have no problems correcting LLM output when needed.

    When the situation changes, then we can talk about pulling back on LLM usage.

    And the crucial point is: me.

    I'm not saying that everyone who uses an LLM to generate code is safe from ending up "not able to correct LLM-generated code".

    I now generate 90% of my code with an LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.

> What if that process changes and the language you’re reading is a natural one instead of code?

Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a long-standing problem in software engineering: the people asking for stuff use plain language, and converting that into a formal spec takes time and is error-prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open-eyed because fuck the lessons of software engineering.

I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.