Comment by aldonius
1 day ago
I guess we can imagine a pure reasoning model (if that's even the right word any more) with almost zero world-knowledge. How does it know what to look for? How does it do any meaningful communication at all?
So I think it's useful to have an imprecise-but-fairly-accurate set of world knowledge as part of an otherwise reasoning-heavy model. It's a cache.
And if it's an LLM, or something like that, I think it basically has to have world-knowledge built in, because what is natural language if not communication about the world?