Comment by andai

3 days ago

I remember reading that hallucination is still a problem even with perfect context. You can build a theoretically perfect RAG pipeline, give the LLM exactly the right information, and it will still make mistakes surprisingly often.

This was my experience as of about six months ago, and I don't believe hallucination is a solved problem yet.