Comment by sminchev
4 days ago
Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.
Here it's the same: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on which model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can see its performance drop to 25% as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb.
Other models are just asking too many questions...
There are some tips and tricks you can follow, and they resemble how people work: keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly instructing the model to read that information before it starts.
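To make that workflow concrete, here is a minimal sketch of the "save learnings, re-read next session" pattern. The file name, function names, and prompt wording are all my own invention, not any tool's actual API:

```python
from pathlib import Path

# Hypothetical notes file; any persistent location works.
NOTES = Path("session_notes.md")

def save_learnings(text: str) -> None:
    """Append something the model figured out this session."""
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(text.rstrip() + "\n")

def build_prompt(task: str) -> str:
    """Prepend saved notes so the next session starts with prior context."""
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    preamble = (
        f"Read these notes from earlier sessions before starting:\n{notes}\n"
        if notes
        else ""
    )
    return preamble + f"Task: {task}"
```

The point is only the shape of the loop: small task in, learnings out, learnings fed back in at the top of the next context window.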
>Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.
LLMs don't hallucinate because they get overwhelmed and tired JFC.
Why do they hallucinate? :)
Because LLMs are stochastic text-generation machines. They are designed to generate plausible natural language via next-token prediction, and the result may or may not be true depending on the correctness and quality of their training data. But that correctness (or lack thereof) comes from the human effort that produced the training data, not from some innate ability of the LLM to comprehend real-world context and deduce truth from falsehood, because LLMs don't have anything of the sort.
Not because they're people.
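To make the "stochastic text generation" point concrete, here's a toy sketch with an invented bigram table (obviously nothing like a real transformer): the next word is sampled purely by probability, so a plausible-but-false continuation comes out by exactly the same mechanism as a true one.

```python
import random

# Toy next-token table: each word maps to (candidate, probability) pairs.
# The "model" only knows which continuations are likely, not which are true.
NEXT = {
    "the": [("sky", 0.5), ("moon", 0.3), ("cheese", 0.2)],
    "sky": [("is", 1.0)],
    "is": [("blue", 0.6), ("green", 0.4)],  # "green" is plausible-shaped but false
}

def sample_next(word: str, rng: random.Random) -> str:
    """Pick the next word by weighted random choice, nothing more."""
    candidates, weights = zip(*NEXT[word])
    return rng.choices(candidates, weights=weights, k=1)[0]

def generate(start: str, steps: int, seed: int = 0) -> list:
    """Sample a chain of tokens; truth never enters the process."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        if out[-1] not in NEXT:
            break
        out.append(sample_next(out[-1], rng))
    return out
```

Run `generate("the", 3)` a few times with different seeds and you'll get "the sky is blue" or "the sky is green" with no internal distinction between them; that's the whole argument in miniature.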
https://medium.com/@nirdiamant21/llm-hallucinations-explaine...
They don't. They work as intended, and "hallucination" is actually a marketing term to make them seem like more than what they really are: text-prediction software.