Comment by Borealid
2 years ago
I think this is a misunderstanding of what would be necessary for an LLM to only output truth.
Let's imagine there does exist a function for evaluating truth - it takes in a statement and produces whether that statement is "true" (whatever "true" means). Let's also say it does that perfectly.
We train the LLM. We keep training it, and training it, and training it, until we eventually get a set of weights where, in our eval runs, it produces only statements the truth function says are true.
We deploy the LLM. It's given an input that wasn't part of the evaluation set. We have no guarantee at all that the output will be true. The weights we chose for the LLM during the training process are a serendipitous accident: we observed that they produced truthy output in the scenarios we tested. Scenarios we didn't test _probably_ produce truthy output, but in all likelihood some will not, and we have no mathematical guarantee.
This remains the case even if you have a perfect truth function, and even if you use deterministic inference (always picking the most likely token). Your comment goes even further than that and asserts that a mostly-accurate function is good enough.
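To make the "serendipitous accident" concrete, here's a toy sketch (the oracle, the eval set, and the always-True model are all made up; this is nothing like real LLM training): a model can agree with a perfect truth function on every case we evaluated and still be wrong on the first input we never tested.

```python
import random

def truth_oracle(x: int) -> bool:
    """Hypothetical 'perfect' truth function: statement x is true iff x % 7 != 3."""
    return x % 7 != 3

# "Training": keep the simplest hypothesis that agrees with the oracle on every
# case we actually evaluated. This eval set happens to contain only multiples
# of 7, so a model that always answers True passes every check.
eval_set = random.Random(0).sample(range(0, 1000, 7), 50)
model = lambda x: True

print("passes every eval case:", all(model(x) == truth_oracle(x) for x in eval_set))  # True

# "Deployment": an input outside the evaluated scenarios.
x_new = 10  # 10 % 7 == 3, so the oracle says False, but the model still says True
print("correct on the unseen input:", model(x_new) == truth_oracle(x_new))  # False
```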
Science itself has the same problem. There's literally no reason to be certain that the Sun will rise tomorrow, or that physics will make sense tomorrow, or that the universe will not become filled with whipped cream tomorrow. There is no fundamental reason for such inductions to hold - but we empirically observe that they do, and the more they do, the safer we feel in assuming they'll continue to hold.
This assumption is built into science as its fundamental axiom. And all the theories and models we develop also have "no mathematical guarantee" - we just keep using them to predict the outcomes of tests (designed or otherwise) and compare those predictions with the actual outcomes. As long as the two keep matching (within tolerance), we remain confident in those theories.
The same will be the case with LLMs. If we train one and then test it on data from outside the training set for which we already know the truth value, and it determines that truth value correctly - and we keep repeating this many, many times and it passes most of those tests - then we can slowly gain confidence that it has, in fact, learned a lot and isn't just guessing.
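One way to put numbers on that "slowly gain certainty" idea (my framing, not anything from the thread): treat each fresh, independently drawn test question as a coin flip and apply a standard concentration bound such as Hoeffding's. The function and the 97%-accuracy figures below are purely illustrative.

```python
import math

def accuracy_lower_bound(k: int, n: int, confidence: float = 0.95) -> float:
    """Hoeffding bound: with probability >= confidence, true accuracy >= observed - eps."""
    observed = k / n
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2 * n))
    return max(0.0, observed - eps)

for n in (100, 1_000, 10_000):
    k = round(0.97 * n)  # suppose the model keeps answering ~97% of questions correctly
    print(f"{n} tests -> true accuracy >= {accuracy_lower_bound(k, n):.3f} (95% confidence)")

# More independent tests tighten the bound, but it only ever yields statistical
# confidence - never the mathematical guarantee the parent comment is talking about.
```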