Comment by Kranar

1 day ago

It's nice that you feel that an LLM that generates entirely incorrect statements is equally as functional as one that does not, but the reality of which LLMs people will actually use in real life, rather than for the sake of being pedantic in an Internet argument, is very much at odds with your sentiment.

How a product happens to be implemented using current machine learning techniques is not the same as the set of features that such a product offers. And it's absolutely the case that actual researchers in this field, those who are not quibbling on the Internet, do take this issue very seriously and devote a great deal of effort toward improving it, because they actually care about implementing possible solutions.

The feature set, meaning what the product is intended to do based on the motivations of both those who created it and those who consume it, is a broader design/specification goal, independent of how it's technically built.

>LLM that generates entirely incorrect statements is equally as functional as an LLM that does not

And yet both would be operating within normal design parameters, including the supposed "LLM that does not" when it spits out nonsense every so often.

The current state of the art is not much better than a broken clock, and that is the reality many people are witnessing. Whether or not they care that they are being fed wrong information is another story entirely.