Comment by TeMPOraL
2 years ago
Science itself has the same problem. There's literally no reason to be certain that the Sun will rise tomorrow, or that physics will make sense tomorrow, or that the universe will not become filled with whipped cream tomorrow. There is no fundamental reason for such inductions to hold - but we've empirically observed that they do, and the more they do, the safer we feel in assuming they'll continue to hold.
This assumption is built into science as its fundamental axiom. And all the theories and models we develop also come with "no mathematical guarantee" - we just keep using them to predict the outcomes of tests (designed or otherwise) and compare those predictions against the actual outcomes. As long as they keep matching (within tolerance), we remain confident in those theories.
The same will be the case with LLMs. If we train one and then test it on data from outside the training set, for which we know the truth value, and it determines that truth value correctly - and we keep repeating this many, many times, with the model passing most of the tests - then we can slowly gain confidence that it has, in fact, learned a lot, and isn't just guessing.
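To make the "slowly gain confidence" part concrete, here's a minimal sketch in Python (the dataset, accuracy rate, and function names are made up for illustration): score a model on held-out examples with known truth values, then put a conservative statistical bound on its true accuracy. The more held-out tests it passes, the tighter the bound gets.

```python
import math
import random

def wilson_lower_bound(successes, trials, z=1.96):
    """Lower bound of the Wilson score interval for a binomial proportion.

    A conservative estimate of the true accuracy given the observed
    number of correct answers (z=1.96 corresponds to ~95% confidence).
    """
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (centre - margin) / denom

def evaluate(model_answers, ground_truth):
    """Count how many held-out examples the model got right."""
    correct = sum(a == t for a, t in zip(model_answers, ground_truth))
    return correct, len(ground_truth)

# Toy run: a hypothetical model that answers ~85% of 500 held-out
# true/false questions correctly (questions it never saw in training).
random.seed(0)
truth = [random.choice([True, False]) for _ in range(500)]
answers = [t if random.random() < 0.85 else not t for t in truth]

correct, total = evaluate(answers, truth)
print(f"observed accuracy: {correct / total:.3f}")
print(f"95% lower bound on true accuracy: {wilson_lower_bound(correct, total):.3f}")
```

With 500 examples, an observed ~85% accuracy gives a lower bound well above chance; repeating the test on fresh held-out data, and seeing that bound hold, is exactly the kind of accumulating evidence the comment describes.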