Comment by throwuxiytayq
13 hours ago
Of course you have to fact check - but verification is much faster and easier than searching from scratch.
How is verification faster and easier? Normally you would check an article's citations to verify its claims, which already takes a lot of work. But an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but that is not the same thing), so verification would have to involve searching from scratch anyway.
Because it gives you an answer and all you have to do is check its source. Often you don't even have to do that, since it has jogged your memory.

Compare that with finding the answer yourself: clicking into the first few search result links and scanning text that might not contain the answer at all.
As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources alongside each piece of knowledge. At best they can plagiarize a list of references drawn from the same training data as the rest of the text, which will only coincidentally be somewhat accurate.
For most things, no, it isn't. The reason it can work well at all for software is that it's often (though not always) easy to validate the results. But for a summary of some topic, verifying the output is very hard without redoing all the work yourself.