Comment by tsimionescu

5 months ago

LLMs can fundamentally only do something resembling learning during the training phase. So by the time you interact with one, it has already learned all it ever will. The question we then care about is whether it has learned enough to be useful for problem X. There's no meaningful concept of "how intelligent" the system is beyond what it has learned: you couldn't even conceive of an abstract IQ test decoupled from its base knowledge.