Comment by aspenmartin
1 day ago
Seems fine, works, and is better than if you had me go off and write it on my own. You realize you can check the results? You can use Claude to help you understand the changes as you read through them. I just don't get this weird "it makes mistakes and it's horrible if you understand the domain it's generating over" take; yes, definitely sometimes, and definitely not other times. And what happens if I DON'T have someone more experienced to consult, or they ignore me because they're busy, or they're wrong because they're also imperfect and not focused? It's really hard to be convinced that this point of view isn't just a knee-jerk reaction justified post hoc.
Yes, you can ask them "to check it for you". The only little problem is, as you said yourself, "they make mistakes"; therefore: YOU CANNOT TRUST THEM. Just because you tell them to "check it" does not mean they will get it right this time. However "fine" it seems to you, please, please, please have a more senior person check that crap before you inflict serious damage somewhere.
Nope: you read their code, ask them to summarize the changes to guide your reading, ask why they made decisions you don't understand, and if you don't like their explanations, you change it (with the agent!). Own and be responsible for the code you commit. I am the "most senior", and at large tech companies that track this, higher IC level corresponds to more AI usage. Hmm, almost like it's a useful tool.
Ok but you understand that the fundamental nature of LLMs amplifies errors, right? A hallucination is, by definition, a series of tokens which is plausible enough to be indistinguishable from fact to the model. If you ask an LLM to explain its own hallucinations to you, it will gladly do so, and do it in a way that makes them seem utterly natural. If you ask an LLM to explain its motivations for having done something, it will extemporize whichever motivation feels the most plausible in the moment you're asking it.
LLMs can be handy, but they're not trustworthy. "Own and be responsible for the code you commit" is an impossible ideal to uphold if you never actually sit down and internalize the code in your code base. No "summaries," no "explanations."