Comment by xk3

2 days ago

> Devs spot and fix hallucinations immediately, dismissing incorrect autocomplete suggestions

Some hallucinations stay in codebases longer than others! If there were zero hallucinations, there would be no novel output at all. Some hallucinations are useful and some are not.

The term "hallucinations" has always frustrated me. The marketing there makes sense, but an LLM that hallucinates is an LLM doing exactly what it was designed for -predicting what a human might say in response.

Facts don't really play a part there. If a response is factual, that's only a sign that the training set largely agreed on the facts (meaning the correlation of the token sequences was high).