Comment by storus
5 days ago
Ok, I'll bite. Show me an LLM that comes up with a new math operator. Or one that comes up with the theory of relativity when only Newtonian physics is in its training dataset. That it can remix existing ideas in ways that lead to novel insights is expected; what current LLMs can't do is the paradigm shifts that require genuinely novel insight. Even humans have only a limited window in which they can produce novel insights: when they are young, capable of latent thinking, not yet ossified by the existing formalization of science, and their brains are still energetically capable, without the vascular and mitochondrial dysfunction common as we age.
How many humans have been born until now, and how many Einsteins among them? And over how many hundreds of thousands of years?
The point is that humans do have some edge over current LLMs, which are essentially next-token predictors. If we all start relying on current AI and stop thinking, we would only be able to "exhaust the remix space" of existing ideas but wouldn't be able to make any paradigm jumps. Moreover, it's quite likely that current training sets are self-contradictory, containing Dutch books and carrying some innate errors.
It takes a lot of intelligence to "essentially predict" the next token when you're doing a math proof. Or writing code.