
Comment by LASR

2 days ago

It's slow, so we mostly use the hypothetical-answer (HyDE) approach for async experiences.

For live experiences like chat, we solved it with UX. As soon as you start typing the words of a question into the chat box, it runs an FTS search and retrieves a set of documents with word matches, scored just using ES heuristics (e.g. counting matching words).

These are presented as cards that expand when clicked. The user can see it's doing something.
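The fast path described above can be sketched in a few lines. This is just an illustrative stand-in for Elasticsearch's scoring heuristics, not the actual system: the names (`score_docs`, `DOCS`) and the bare word-overlap scoring are assumptions for demonstration.

```python
import re

def tokenize(text: str) -> set[str]:
    # Lowercase word tokens; punctuation is stripped so "support?" matches "support".
    return set(re.findall(r"\w+", text.lower()))

def score_docs(query: str, docs: list[str]) -> list[tuple[int, str]]:
    # Score each document by the number of query words it contains,
    # a crude stand-in for ES full-text-search relevance scoring.
    q = tokenize(query)
    scored = [(len(q & tokenize(d)), d) for d in docs]
    return sorted([s for s in scored if s[0] > 0], key=lambda t: t[0], reverse=True)

DOCS = [
    "Product support email: support@example.com",
    "Engineering onboarding guide",
]

print(score_docs("What's the email for product support?", DOCS))
# The support-email doc ranks first on word overlap alone, so the card
# answers the question before any LLM call happens.
```

In the real system this scoring happens server-side in Elasticsearch; the point is only that a word-match heuristic is cheap enough to run on every keystroke.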

While that's happening, we also issue a full HyDE flow in the background, with a placeholder loading shimmer where the full answer loads in.

So there's about 10 seconds of dead time while it generates the hypothetical answers, then a short ~1 sec interval to load up the knowledge nodes, and then it starts streaming the answer.
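The two-track flow amounts to: kick off the slow HyDE pipeline immediately, but never block the fast card results on it. A minimal asyncio sketch, with stub delays standing in for the real latencies (the function names and timings are illustrative, not the production code):

```python
import asyncio

async def fts_cards(query: str) -> list[str]:
    # Fast keyword search; effectively instant compared to the LLM path.
    await asyncio.sleep(0.01)
    return [f"card: {query}"]

async def hyde_answer(query: str) -> str:
    # Stands in for ~10 s of hypothetical-answer generation...
    await asyncio.sleep(0.05)
    # ...then ~1 s to load the matching knowledge nodes,
    # after which the real system starts streaming the answer.
    await asyncio.sleep(0.005)
    return f"answer: {query}"

async def handle_query(query: str) -> tuple[list[str], str]:
    # Start HyDE in the background without awaiting it yet.
    hyde_task = asyncio.create_task(hyde_answer(query))
    cards = await fts_cards(query)   # cards render first, behind no LLM call
    answer = await hyde_task         # shimmer resolves when this finishes
    return cards, answer

cards, answer = asyncio.run(handle_query("product support email"))
print(cards, answer)
```

The design choice is simply that the user always has something interactive (the cards) during the dead time, so the HyDE latency reads as "loading more detail" rather than "nothing is happening".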

This approach tested well with UXR participants and maintains acceptable accuracy.

A lot of the time, when looking for specific facts from a knowledge base, just the card UX gets an answer immediately, e.g. "What's the email for product support?"