
Comment by ramchip

5 days ago

Right. I don't treat the LLM like a colleague at all; it's just a text generator. So I partially agree with your earlier statement:

> it's like reviewing a PR with no trust possible, no opportunity to learn or to teach, and no possibility for insight that will lead to a better code base in the future

The first part is 100% true. There is no trust. I treat any LLM code as toxic waste and its explanations as lies until proven otherwise.

The second part I somewhat disagree with. I've learned plenty of things from AI output and analysis. You can't teach it to analyze allocations or code complexity, but you can feed it guidelines or samples of code in a certain style, and that can be quite effective at nudging it towards similar output (a sketch of what I mean below). Sometimes that doesn't work, and that's fine; it can still be a big time saver to have the LLM output as a starting point and tweak it, either manually or by giving the agent additional instructions.
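
For instance, a minimal sketch of that style-nudging approach, assuming the OpenAI Python SDK; the model name, guideline text, and sample are placeholders, not the commenter's actual setup:

```python
# Sketch: nudging an LLM towards a house style by putting guidelines
# and a sample in the system prompt. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

STYLE_GUIDELINES = """\
- Prefer plain functions over classes.
- No single-letter variable names.
- Every public function gets a docstring.
"""

STYLE_SAMPLE = '''\
def parse_port(raw: str) -> int:
    """Parse a TCP port, raising ValueError if out of range."""
    port = int(raw)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
'''

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Follow these style guidelines:\n"
            + STYLE_GUIDELINES
            + "\nHere is a sample in the target style:\n"
            + STYLE_SAMPLE,
        },
        {
            "role": "user",
            "content": "Write a function that parses a host:port string.",
        },
    ],
)
print(response.choices[0].message.content)
```

The same idea carries over to agent tools that read a per-repo instructions file: the guidelines and sample just live in the repo instead of the prompt.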