Comment by bigEnotation
2 years ago
I think you’re forgetting about the use case where the LLM returns something partially correct to a discerning expert, who is still able to use the response but doesn’t bother to follow up with a message like “btw I had to do X to make your suggestions usable”.