Unlike general-purpose assistants such as Gemini or ChatGPT, which draw on broad training data and the open web and can "hallucinate," NotebookLM grounds its answers entirely in the sources you provide, such as PDFs, audio files, YouTube videos, Google Docs, or web articles. Because it works exclusively from your sources, the likelihood of hallucinations is much lower.
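To make the grounding idea concrete, here's a minimal sketch of source-grounded prompting. This is not NotebookLM's actual implementation; the retrieve helper, the build_grounded_prompt function, and the prompt wording are all assumptions for illustration. The point is that the model is instructed to answer only from excerpts of the user's own documents, rather than from its training data.

    # Illustrative sketch of source-grounded prompting (not NotebookLM's
    # real pipeline). `retrieve` and the prompt wording are hypothetical.

    def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
        """Naive keyword-overlap retrieval over the user's own documents."""
        def score(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(documents, key=score, reverse=True)[:top_k]

    def build_grounded_prompt(query: str, documents: list[str]) -> str:
        """Constrain the model to the supplied excerpts, not its own knowledge."""
        excerpts = "\n\n".join(
            f"[Source {i + 1}]\n{doc}"
            for i, doc in enumerate(retrieve(query, documents))
        )
        return (
            "Answer using ONLY the excerpts below. If they do not contain "
            "the answer, say you don't know.\n\n"
            f"{excerpts}\n\nQuestion: {query}"
        )

Even with a prompt like this, the model can still misread or over-generalize from the excerpts, which is relevant to the question below.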
Huh, don't most hallucinations come from the model's internal knowledge rather than from the retrieved sources (the RAG side)?
Please clarify the Google connection.
I'm guessing that it's an official Google-built product. [1]
[1] http://support.google.com/notebooklm/answer/16179536?sjid=62...