Comment by zarathustreal
1 year ago
In our RAG pipeline we found that implementing HyDE also made a huge difference; maybe generating and embedding hypothetical user search queries (per document) would help here.
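A minimal sketch of that index-time idea (the model names, prompt, and function name below are placeholders I chose, not the commenter's actual pipeline): ask an LLM for a few queries each document could answer, then embed those queries so incoming user queries match query-shaped text instead of prose.

```python
# Sketch: generate hypothetical search queries per document at index time,
# then embed the queries rather than (or alongside) the document text.
# Model names and the prompt are assumptions, not anything from this thread.
from openai import OpenAI
from sentence_transformers import SentenceTransformer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any dense encoder works

def index_with_hypothetical_queries(document: str, n: int = 3):
    """Return (query, embedding, document) triples for one document."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write {n} search queries, one per line, that the "
                       f"following document answers:\n\n{document}",
        }],
    )
    queries = [q.strip() for q in resp.choices[0].message.content.splitlines()
               if q.strip()]
    embeddings = encoder.encode(queries, normalize_embeddings=True)
    return list(zip(queries, embeddings, [document] * len(queries)))
```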
1 year ago
HyDE apparently stands for “Hypothetical Document Embeddings”, which seems to be a kind of generative query expansion/pre-processing step
https://arxiv.org/abs/2212.10496
https://github.com/texttron/hyde
From the abstract:
Given a query, HyDE first zero-shot instructs an instruction-following language model (e.g. InstructGPT) to generate a hypothetical document. The document captures relevance patterns but is unreal and may contain false details. Then, an unsupervised contrastively learned encoder (e.g. Contriever) encodes the document into an embedding vector. This vector identifies a neighborhood in the corpus embedding space, where similar real documents are retrieved based on vector similarity. This second step grounds the generated document to the actual corpus, with the encoder's dense bottleneck filtering out the incorrect details.
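For concreteness, here is a minimal sketch of that two-step loop, with the paper's InstructGPT and Contriever swapped for an OpenAI chat model and a generic sentence-transformers encoder (those stand-ins, the prompt, and the toy corpus are my assumptions):

```python
# HyDE-style retrieval sketch: embed a hypothetical LLM-written answer
# rather than the raw query, then retrieve real documents near it.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for Contriever

corpus = [
    "HyDE generates a hypothetical document before retrieval.",
    "Contriever is an unsupervised contrastively trained dense retriever.",
    "BM25 is a classic sparse retrieval baseline.",
]
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def hyde_retrieve(query: str, k: int = 2) -> list[str]:
    # Step 1: zero-shot generate a plausible (possibly wrong) answer document.
    hypothetical = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for InstructGPT
        messages=[{"role": "user",
                   "content": f"Write a short passage answering: {query}"}],
    ).choices[0].message.content
    # Step 2: embed the hypothetical document and take its nearest real
    # neighbors; with normalized vectors, dot product = cosine similarity.
    q_emb = encoder.encode([hypothetical], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

print(hyde_retrieve("What does HyDE do?"))
```

Per the abstract, the point of step 2 is that the encoder's dense bottleneck mostly preserves relevance-level information, so hallucinated details in the hypothetical document tend to wash out during retrieval.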