Comment by dspillett
2 hours ago
Certainly not in the case of asking it to do something you'd be slow at because you are unfamiliar. If you are not familiar enough with the system, how are you confident that what the LLM has produced is valid and complete? IMO the people saying LLMs make them 10x faster were either very bad to start with (like me!) or are not properly looking at the results before throwing them over the wall.
And how do you know if that is the case or the person/team using the LLMs is one of the good ones?
So the safest answer is just "no".
This is the crux of the problem. LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use them to write code in domains I know well, I see design and correctness problems all over the place, and actively fixing them slows down my output.
Speed is seductive.
The bar isn't "this is a known good contributor". It's "this is a known good contributor working in a space they have knowledge in, with a track record of actually checking and thinking about LLM output before submitting it." That's a much higher bar, and I don't see how you can approve people on an organization-wide basis.