Comment by karmakaze

15 hours ago

I find it's a perfectly fine word to describe the result. Humans do the same: our visual system samples a small number of data points to construct the view we see. In effect we're always hallucinating; the difference is that we maintain high context to filter for the correct hallucinations. This shows up in dreams, where we don't maintain such context, or when context changes abruptly. When I got back from a vacation where there were many small and larger lizards, I swear I saw one upon returning, but it turned out to be a similarly colored leaf moving in a scurrying motion.

Edit: "mis-remembering" is another good term, since the model is using its vast training to output tokens and sometimes maps them wrong.