Comment by lostmsu

14 days ago

I think GPT-2 (2019) was already a strong enough argument for the possibility of modeling knowledge and language, which Chomsky rejected.

That said, the fact that LLMs fundamentally can't know whether they know something or not (without a later fine-tuning pass on what they should know) is a pretty good argument against them being good knowledge bases.

  • No, it is not. In the mathematical limit this applies to literally everything. In practice you are not going to store video compressed with a lossless codec, for example.

    • Me forgetting/never having "recorded" what necklace the other person had during an important event is not at all similar to statistical text generation.

      If they ask me the previous question I can introspect/query my memory and tell with 100% certainty whether I know it or not - lossy compression aside. An LLM will just reply based on how likely a yes answer is, with no regard to whether it actually has that knowledge.

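The objection above can be sketched as a toy next-token sampler. This is a minimal illustration, not any real model's API: the logit values and the `answer` helper are made up, and the point is only that the reply is drawn from token probabilities with no separate "do I actually know this?" check.

```python
import math
import random

# Hypothetical next-token scores for a yes/no question.
# The numbers are invented for illustration.
logits = {"yes": 2.0, "no": 0.5, "maybe": 0.1}

def softmax(scores):
    # Standard numerically stable softmax over a dict of scores.
    m = max(scores.values())
    exp = {tok: math.exp(s - m) for tok, s in scores.items()}
    z = sum(exp.values())
    return {tok: e / z for tok, e in exp.items()}

probs = softmax(logits)

def answer(rng=random):
    # Sample a reply purely from the token distribution.
    # Nothing here consults whether the underlying fact is stored;
    # a confident-sounding "yes" just means "yes" was likely.
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok

print(answer())
```

Under these invented logits, "yes" gets roughly 73% of the probability mass, so the sketch will usually print "yes" regardless of whether any corresponding fact exists.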