
Comment by hn_throwaway_99

8 hours ago

This is awesome, but a minor quibble with the title: "hallucinates" is the wrong verb here. You specifically asked it to make up an HN frontpage from 10 years in the future, and that's exactly what it did. "Hallucinating" is when a model makes things up unprompted but presents them as the truth. If someone asked me to write a story for a creative writing class and I did, you wouldn't say I "hallucinated" the story.

It's very weird to see this called "hallucinate", since we've all more or less used that word to mean "made up erroneously".

Is this a push to override the meaning and erase the hallucination critique?

If someone asked you, you would know about the context. LLMs are predictors; no matter the context length, they never "know" what they are doing. They simply predict tokens.
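
Concretely, "predicting tokens" amounts to repeating one step over and over: given the context so far, score the candidate next tokens and append one. Here is a rough sketch of that loop in Python; toy_model and its hard-coded bigram table are made-up placeholders standing in for a real model's forward pass, not any actual library API.

    def toy_model(context: list[str]) -> dict[str, float]:
        # Stand-in for a trained LLM's forward pass. A real model would compute
        # these probabilities from learned parameters over the whole context;
        # this toy version just looks up the last token in a hard-coded table.
        table = {
            "the": {"cat": 0.6, "mat": 0.4},
            "cat": {"sat": 0.9, ".": 0.1},
            "sat": {"on": 1.0},
            "on": {"the": 1.0},
            "mat": {".": 1.0},
            ".": {"the": 1.0},
        }
        return table.get(context[-1], {".": 1.0})

    def generate(prompt: list[str], max_new_tokens: int = 6) -> list[str]:
        tokens = list(prompt)
        for _ in range(max_new_tokens):
            probs = toy_model(tokens)              # score candidate next tokens given the context
            next_tok = max(probs, key=probs.get)   # greedy decoding: take the most likely token
            tokens.append(next_tok)                # the prediction becomes part of the context
        return tokens

    print(generate(["the", "cat"]))
    # -> ['the', 'cat', 'sat', 'on', 'the', 'cat', 'sat', 'on']

Every step only ever answers "what token comes next", which is all the "they simply predict tokens" framing refers to.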

  • This common response is pretty uninteresting and misleading. They simply predict tokens? Oh. What does the brain do, exactly?

    • It does exactly the same thing, predicts tokens, but it's totally different from and superior to LLMs /s

      OTOH, brain "tokens" seem to be concept-based and not always linguistic (many people think solely in images/concepts).

    • The brain has an intrinsic understanding of the world encoded in our DNA. We do not simply predict tokens based on knowledge; we base our thoughts on intelligence, emotions, and knowledge. LLMs have neither intelligence nor emotions. If your brain simply predicts tokens, I feel sorry for you.

      Edit: it really does not surprise me that AI bros are downvoting this. Expecting people who want to make themselves obsolete to understand human values was a mistake.