Comment by ManlyBread
2 days ago
This happened to me quite recently. I didn't spot the hallucination, trusted the LLM output, and introduced a problem into the application that I would never have introduced if I hadn't used an LLM in the first place. Then my co-workers did the same thing, because the LLM guided them down the same broken path it had guided me.