Comment by imiric

2 days ago

> Devs spot and fix hallucinations immediately, dismissing incorrect autocomplete suggestions

Bullshit. Spotting hallucinations requires expertise in the subject matter, and most devs wouldn't be using LLMs if they intimately knew the programming languages and APIs the LLM is generating code for.

The reality is that spotting hallucinations often takes more effort than reading the source documentation and writing the code from scratch, since the dev still has to review the generated code and check it for correctness.

This happened to me quite recently. I didn't spot the hallucination, trusted the LLM output, and introduced a problem into the application that I would never have introduced if I hadn't used an LLM in the first place. Then my co-workers did the same thing, because the LLM guided them down the same broken path it had guided me.