Comment by computably

6 hours ago

On a technical level, sure, you could say it's a matter of time, but that could mean tomorrow or 20 years from now.

And even after that, it still doesn't solve the intrinsic problem of encoding truth. An LLM just models the distribution of its training data, so new findings get buried simply by being underrepresented. If you brute-force the data or training somehow (say, by oversampling the new material), maybe you can get it to sound like it's incorporating new facts, but in actuality it'll be broken and inconsistent.
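The "underrepresented" point can be seen in a toy maximum-likelihood model. This is a deliberately simplified stand-in for LLM training, and the corpus and counts are made up for illustration:

```python
from collections import Counter

# Toy corpus: an old claim appears 99 times, a newer finding only once.
corpus = ["pluto is a planet"] * 99 + ["pluto is a dwarf planet"] * 1

# A (very) crude stand-in for what a language model learns:
# the empirical next-word distribution after the prefix "pluto is a".
next_words = Counter(s.split()[3] for s in corpus)
total = sum(next_words.values())
probs = {w: n / total for w, n in next_words.items()}

print(probs)  # {'planet': 0.99, 'dwarf': 0.01}
```

The model overwhelmingly prefers the majority continuation, not because it's true, but because it's frequent; a real LLM's loss behaves analogously, just over far more context.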