
Comment by mhuffman

5 days ago

My experience matches yours, but related to

>3. Hypothetical answer generation from a query using an LLM, and then using that hypothetical answer to query for embeddings works really well.

What sort of performance are you getting in production with this one? The other two are basically solved for performance (and for RAG in general) when they relate to a known, pre-processed corpus, but I am having trouble seeing how you avoid a latency hit with #3.

It's slow, so we use the hypothetical-answer (HyDE) flow mostly for async experiences.

For live experiences like chat, we solved it with UX. As soon as you start typing the words of a question into the chat box, it runs the FTS search and retrieves a set of documents with word matches, scored using plain Elasticsearch heuristics (e.g. counting matching words).

These are presented as cards that expand when clicked. The user can see it's doing something.
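In case it helps, here's a minimal sketch of that keystroke-triggered FTS step using the Elasticsearch 8.x Python client; the index name, field name, and local cluster URL are assumptions on my part, not details from the thread:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def fts_cards(partial_question: str, size: int = 5):
    """Cheap full-text search fired as the user types; relies on
    Elasticsearch's default relevance scoring over word matches."""
    resp = es.search(
        index="knowledge-base",  # hypothetical index name
        query={"match": {"body": partial_question}},  # hypothetical field
        size=size,
    )
    # Return just enough to render the expandable result cards
    return [
        {"title": hit["_source"].get("title"), "score": hit["_score"]}
        for hit in resp["hits"]["hits"]
    ]
```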

While that's happening, we also kick off the full HyDE flow in the background, with a placeholder loading shimmer where the full answer will appear.

So there is dead time of roughly 10 seconds while it generates the hypothetical answers, then a short ~1 second interval to load the knowledge nodes, and then it starts streaming the answer.
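For illustration, a rough sketch of that background HyDE flow; the llm_complete, embed, and vector_index.search helpers are hypothetical stand-ins, since the comment doesn't name the model or vector store:

```python
import numpy as np

def llm_complete(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM is in use."""
    raise NotImplementedError

def embed(text: str) -> np.ndarray:
    """Hypothetical wrapper around the embedding model."""
    raise NotImplementedError

def hyde_answer(question: str, vector_index, top_k: int = 8) -> str:
    # 1. Slow step (~10 s in the comment): draft a hypothetical answer.
    hypothetical = llm_complete(
        f"Write a plausible answer to this question:\n{question}"
    )
    # 2. Embed the hypothetical answer rather than the raw question.
    query_vec = embed(hypothetical)
    # 3. Quick step (~1 s): nearest-neighbour lookup for knowledge nodes.
    nodes = vector_index.search(query_vec, top_k)  # hypothetical API
    # 4. Final call generates the grounded answer (streamed in the real UI).
    return llm_complete(
        "Answer the question using only these notes:\n"
        + "\n".join(node.text for node in nodes)
        + f"\n\nQuestion: {question}"
    )
```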

This approach tested well with UXR participants and maintains acceptable accuracy.

A lot of the time, when looking for specific facts from a knowledge base, the card UX alone gets an answer immediately, e.g. "What's the email for product support?"