Comment by evnu
1 day ago
> Instead you should use high quality sources, then ask the LLM to summarize them for you to start with (NotebookLM does this very well for instance, but so can others).
How do you determine whether the LLM accurately reflects what the high-quality source contains if you haven't read the source? When learning from humans, we place trust in them to teach us based on a web of trust. How do you determine the level of trust with an LLM?
> When learning from humans, we place trust in them to teach us based on a web of trust.
But this is only part of the story. When learning from another human, you'll also actively try to discern whether they're trustworthy based on general linguistic markers, and will try to find and poke holes in what they're saying so that you can question them intelligently.
This is not much different from what you'd do with an LLM, which is why it's such a problem that they're often more convincing than correct. But it's not an insurmountable issue. The other issue is that their trustworthiness varies in a different way than a human's, so you need experience to know when they're possibly just making things up. Based purely on feel, I think that experience is definitely possible to gain.
Because summarizing is one of the few things LLMs are generally pretty good at. Plus you should use the summary to decide whether you want to read the full source, much like reading a research paper's abstract before deciding whether to read the whole thing.
Bonus: the high-quality source is going to be mostly AI-written anyway
Actually, LLMs aren’t that great for summarizing. It would be a boon for RAG workflows if they were.
I’m still on the lookout for a great model for this.
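For what it's worth, the usual workaround in RAG pipelines is map-reduce summarization: summarize each chunk, then summarize the summaries. A minimal sketch, assuming a Hugging Face summarization pipeline; the model name, chunk size, and length limits below are placeholders, not recommendations:

```python
# Map-reduce summarization sketch for triaging a long source before reading it.
# Model choice is a placeholder; swap in whatever summarizer you actually trust.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def chunk(text: str, size: int = 3000) -> list[str]:
    # Crude character-based chunking; a real pipeline would count tokens
    # and split on paragraph boundaries instead.
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text: str) -> str:
    # Map: summarize each chunk independently.
    partials = [
        summarizer(c, max_length=130, min_length=30, do_sample=False)[0]["summary_text"]
        for c in chunk(text)
    ]
    combined = " ".join(partials)
    if len(partials) == 1:
        return combined
    # Reduce: summarize the concatenated partial summaries.
    # (If there are many chunks, this step may itself need another pass.)
    return summarizer(combined, max_length=150, min_length=40, do_sample=False)[0]["summary_text"]

if __name__ == "__main__":
    with open("source.txt") as f:
        print(summarize(f.read()))
```

Even then, the summary is only a triage signal, as the comment above says: it tells you whether to read the source, not what the source says.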