Comment by otabdeveloper4
15 hours ago
What you're describing is just RAG, and it doesn't work that well. (You need a search engine for RAG, and the ideal search engine is an LLM with infinite context. But the only way to scale LLM context is by using RAG. We have infinite recursion here.)
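The "search engine in the middle" the comment refers to can be sketched in a few lines. This is a toy illustration, not anyone's actual implementation: the retriever is a naive keyword-overlap scorer, and `rag_answer`, `retrieve`, and the echoing `llm` stand-in are all hypothetical names. The point it shows is that the answer quality is capped by whatever the retriever surfaces into the finite context window.

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query
    (a stand-in for a real search engine)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_answer(query, documents, llm):
    """Classic RAG: retrieve top-k chunks, stuff them into
    the prompt, call the LLM."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

docs = [
    "RAG augments an LLM with retrieved documents.",
    "Context windows are finite, so documents must be selected.",
    "Bananas are yellow.",
]
# A fake LLM that just echoes its prompt, to keep the sketch self-contained.
answer = rag_answer("How does RAG handle finite context?", docs, llm=lambda p: p)
```

Swap in a better retriever and the LLM sees better context; but building that better retriever is exactly the search problem the comment says loops back on itself.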