Typical RAG implementations I’ve seen take the user query and run it directly against the full-text search and embedding indexes. This produces sub-par results because the query embedding doesn’t fully capture what the user is actually looking for.
A better solution is to send the user query to the LLM and let it construct and run queries against the index via tool calling. Nothing too ground-breaking, tbh — pretty much every AI search agent does this now — but it produces much better results.
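A minimal sketch of the idea: instead of embedding the raw user query, the LLM first rewrites it into one or more targeted index queries, and only those get run against the index. The `call_llm` and `search_index` functions below are hypothetical stand-ins (a real version would use an actual chat-completion API with tool definitions and a real full-text/vector index); they are stubbed here so the example is self-contained.

```python
def call_llm(prompt: str) -> list[str]:
    # Hypothetical stand-in for a chat-completion call. A real implementation
    # would give the model a search() tool and parse the tool calls it emits;
    # this stub just returns canned query rewrites.
    return ["RAG query rewriting", "tool calling retrieval"]

def search_index(query: str) -> list[str]:
    # Stand-in for a full-text / embedding index lookup.
    corpus = {
        "RAG query rewriting": ["doc-12: rewriting user queries for RAG"],
        "tool calling retrieval": ["doc-7: letting the LLM drive search tools"],
    }
    return corpus.get(query, [])

def agentic_search(user_query: str) -> list[str]:
    # 1. Let the LLM turn the raw query into targeted index queries.
    queries = call_llm(f"Rewrite as search queries: {user_query}")
    # 2. Run each rewritten query against the index and merge the hits.
    results: list[str] = []
    for q in queries:
        results.extend(search_index(q))
    return results
```

In practice you would also loop: feed results back to the model so it can refine its queries over a few tool-call rounds rather than stopping after one pass.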
Thank you!