Comment by lxgr

15 days ago

> [...] I can't help but feel it's a really old take [...]

To be fair, the article is from two years ago, which, when talking about LLMs, arguably does count as "old", maybe even "really old".

I think GPT-2 (2019) was already a strong enough argument for the possibility of modeling knowledge and language, a possibility that Chomsky rejected.

  • Though the fact that LLMs fundamentally can't know whether or not they know something (without a later fine-tuning pass on what they should know) is a pretty good argument against them being good knowledge bases.

    • No, it is not. In the mathematical limit this applies to literally everything. In practice you are not going to store video compressed with a lossless codec, for example: you accept some loss of fidelity in exchange for a vastly smaller file (a toy sketch of that tradeoff follows).

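      A toy sketch of that tradeoff in plain Python, with coarse quantization standing in for a real lossy codec (the signal and parameters are illustrative, not any real pipeline):

          import random
          import zlib

          random.seed(0)

          # Synthetic signal: a slow ramp plus per-sample noise, one byte each.
          samples = bytes(
              min(255, max(0, ((i // 256) % 256) + random.randint(-8, 8)))
              for i in range(65536)
          )

          # Lossless: every bit of the noise must survive, so zlib gains little.
          lossless = zlib.compress(samples, 9)

          # Lossy: quantize to 16 levels first (information is thrown away),
          # which leaves long runs that compress far better.
          quantized = bytes((b // 16) * 16 for b in samples)
          lossy = zlib.compress(quantized, 9)

          print(f"original:  {len(samples)} bytes")
          print(f"lossless:  {len(lossless)} bytes")
          print(f"lossy:     {len(lossy)} bytes")

      The lossless output stays close to the original size because the noise is incompressible; the lossy pass discards exactly that noise, which is the same bargain a video codec makes at scale.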