Comment by ptx

6 hours ago

As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources alongside each piece of knowledge. At best they can plagiarize a list of references drawn from the same sources as the rest of the text, which will be somewhat accurate only by coincidence.

Pretty much every major LLM client has web search built in. They aren't just using what's in their weights to generate the answers.

When it gives you a link, it literally takes you to the part of the page it got its answer from. That's how you can quickly validate it.
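That kind of spot-check can even be done mechanically: fetch the cited URL and confirm the quoted passage actually appears in the page. A minimal sketch (the function names are hypothetical, and the tag-stripping is deliberately crude, not a real HTML parser):

```python
import re
import urllib.request

def snippet_in_page(html: str, snippet: str) -> bool:
    """Check whether a quoted snippet appears in a page's visible text.

    Tags are stripped crudely and whitespace/case are normalized on both
    sides so minor formatting differences don't cause false negatives.
    """
    text = re.sub(r"<[^>]+>", " ", html)
    norm = lambda s: " ".join(s.split()).lower()
    return norm(snippet) in norm(text)

def validate_citation(url: str, snippet: str) -> bool:
    """Fetch the cited URL and confirm the quoted snippet appears there."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return snippet_in_page(html, snippet)
```

This only verifies that the quoted text exists on the page, not that the model's paraphrase of it is faithful; that last step still needs a human.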

LLMs provide sources every time I ask them.

They do it by going out and searching, not by storing a list of sources in their corpus.

  • Have you ever tried examining the sources? They often just invent "sources" when asked to provide them.