Comment by hansmayer

18 hours ago

Yes, you can ask them "to check it for you". The only little problem is, as you said yourself, "they make mistakes", therefore: YOU CANNOT TRUST THEM. Just because you tell them to "check it" does not mean they will get it right this time. Again, however "fine" it seems to you, please, please, please have a more senior person check that crap before you inflict serious damage somewhere.

Nope. You read their code, ask them to summarize the changes to guide your reading, ask why they made decisions you don’t understand, and if you don’t like the explanations, you change the code (with the agent!). Own and be responsible for the code you commit. I am the “most senior”, and at large tech companies that track this, higher IC level corresponds to more AI usage, hmm, almost like it’s a useful tool.

  • Ok but you understand that the fundamental nature of LLMs amplifies errors, right? A hallucination is, by definition, a series of tokens which is plausible enough to be indistinguishable from fact to the model. If you ask an LLM to explain its own hallucinations to you, it will gladly do so, and do it in a way that makes them seem utterly natural. If you ask an LLM to explain its motivations for having done something, it will extemporize whichever motivation feels the most plausible in the moment you're asking it.

    LLMs can be handy, but they're not trustworthy. "Own and be responsible for the code you commit" is an impossible ideal to uphold if you never actually sit down and internalize the code in your code base. No "summaries," no "explanations."

    • So your argument is that if people don't use the tool correctly, they might get incorrect results? How is that relevant? If you Google the wrong query, you'll similarly get incorrect results.