
Comment by OccamsMirror

5 days ago

If you could actually teach these models things, not just in the current context but as persistent learning, that would alleviate a lot of the hallucination issues. Imagine being able to say "that method doesn't exist, don't recommend it again," then give it the documentation and have it absorb that information permanently. That would fundamentally change how we interact with these models. But can that work for models hosted for everyone to use at once?

There's an almost infinite number of things that can be hallucinated, though. You can't maintain a list of every scientific paper or legal case that doesn't exist! Hallucinations (almost certainly) aren't specific falsehoods that need to be erased...

  • The level of hallucinations with o3 is no different from the level of hallucinations from most (all?) human sources, in my experience. Yes, you definitely need to cross-check, but you need to do that for literally everything else too, so it feels a bit redundant to keep preaching it as if it were a failing of the model rather than an inherent property of any free sharing of information between two parties.