Comment by tylerrecall

7 hours ago

This is exactly why I'm building persistent memory for AI coding tools. LLMs hallucinate facts partly because they lose context between sessions. When they "remember" your project structure, past decisions, and error patterns, accuracy improves dramatically. Still not perfect, but context retention helps a lot. Curious what others are seeing - is it mainly hallucination or context loss that causes wrong answers?

Is this context loss? They never _knew_ the information in the first place; they just have a high chance of hallucinating the right thing if it's in their training data, or a high chance of hallucinating the wrong thing if they search for it (even higher if what they find is itself wrong).