Comment by namaria
10 months ago
That we're getting either sycophantic or intransigent hallucinations points to two fundamental limitations: there's no getting rid of hallucinations, and there's a trade-off in observed agreement "behavior".
Also, the recurring theme of "just wipe out context and re-start" places a hard ceiling on how complex an issue the LLM can be useful for.