Comment by codedokode
19 hours ago
> If a programmer wrote a formula wrong and the program produces incorrect output, it is a "bug" and an "error".
The program is producing correct output, for technical values of correct.
The LLM is a statistical model that predicts what words should come next based on current context and its training data. It succeeds at that very well. It is not a piece of software designed to report the objective truth, or indeed any truth whatsoever.
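To make that concrete, here is a toy sketch of what "predict the next word" means. Everything in it is made up (a handful of hand-written counts standing in for billions of learned weights), so treat it as an illustration of the mechanism, not of any real model. The point is that nothing in this loop checks whether the output is true; it only scores which continuation is likely.

    import math
    import random

    # Hypothetical scores: for a given context, how strongly each candidate
    # next token is favoured. A real model learns these from training data.
    FOLLOWERS = {
        "the cat sat on the": {"mat": 5.0, "roof": 2.0, "moon": 0.5},
    }

    def next_token(context: str) -> str:
        """Sample the next token from a softmax over the learned scores."""
        scores = FOLLOWERS.get(context, {"<unk>": 1.0})
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        r = random.uniform(0, total)
        for tok, e in exps.items():
            r -= e
            if r <= 0:
                return tok
        return tok  # fallback for floating-point rounding

    print(next_token("the cat sat on the"))  # e.g. "mat" -- plausible, not "true"

Whether the sampled word corresponds to reality never enters the picture; "plausible continuation" is the only objective being optimised.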
If the LLM were producing nonsense sentences, like "I can't do cats potato Graham underscore", then yes, that would be "incorrect output". Instead, it is correctly putting sentences together based on its predictions and models, but it doesn't know what those sentences mean, what they're for, why it's saying them, whether they're true, or what "truth" is in the first place.
So saying that these LLMs produce "incorrect output" misses the key point, which the general public also misses: they are built to respond to prompts, not to respond correctly or in a useful or reasonable manner. These are not knowledge models, and they are not intended to give you correct sentences.