Comment by CharlieDigital
7 months ago
The easiest solution to this is to stuff the heading into the chunk. The headings provide hierarchical navigation within the sections of the document.
I found Azure Document Intelligence, specifically the Layout model, to be fantastic for this because it can identify headers. All the better if you write a parser for the output JSON to track heading depth and stuff multiple headers from the path into the chunk.
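Roughly the kind of parser I mean, as a sketch: the "analyzeResult" / "paragraphs" / "role" field names are from memory of the Layout model's analyze result, and the numbered-heading depth heuristic is my own, so verify against your API version.

```python
# Rough sketch: walk the Layout model's analyze result, keep a running heading
# path, and prepend it to every body chunk. Field names are from memory; the
# depth heuristic is a guess since the Layout model doesn't emit explicit levels.
import json

def chunks_with_heading_path(layout_json_path: str):
    with open(layout_json_path) as f:
        result = json.load(f)["analyzeResult"]

    title = None
    headings = []                      # heading stack, e.g. ["2. Methods", "2.1 Dosing"]
    for para in result.get("paragraphs", []):
        role = para.get("role")
        text = para["content"]

        if role == "title":
            title = text
        elif role == "sectionHeading":
            # Heuristic: numbered headings like "2.1" tell us the depth.
            first = (text.split() or [""])[0].rstrip(".")
            depth = first.count(".") + 1 if first[:1].isdigit() else 1
            headings = headings[:depth - 1] + [text]
        elif role in ("pageHeader", "pageFooter", "pageNumber"):
            continue                   # layout noise, skip it
        else:
            # Body text: stuff the full header path into the chunk.
            path = [h for h in [title, *headings] if h]
            yield " > ".join(path) + "\n" + text
```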
So subtle! The article is about doing exactly that, which is something we are doing a lot of right now... though it seems to snatch defeat from the jaws of victory:
If we think about what this is really doing, it is basically entity augmentation & lexical linking / citations.
Ex: A patient document may be all about patient id 123. That won't be spelled out in every paragraph, but by carrying along the patient ID (semantic entity) and the document (citation), the combined model gets access to them. A naive one-shot retrieval over a naive chunked vector index would want this in the text/embedding, while a smarter one would also put it in the entry metadata. And as others write, this helps move reasoning from the symbolic domain to the semantic domain, so it's less of a hack.
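A minimal sketch of what I mean by carrying the entity and citation along, with `embed` and `index.upsert` standing in for whatever embedding model and vector store you use:

```python
# Sketch only: the patient ID (semantic entity) and document (citation) travel
# both in the embedded text and in the entry metadata.
def index_chunk(index, embed, chunk_text, doc_id, patient_id, chunk_no):
    # For the naive one-shot retriever: the entity lives in the text/embedding.
    augmented = f"Patient {patient_id} | {doc_id}\n{chunk_text}"
    index.upsert(
        id=f"{doc_id}:{chunk_no}",
        vector=embed(augmented),
        # For the smarter retriever: the same entity and citation as metadata.
        metadata={
            "patient_id": patient_id,
            "citation": doc_id,
            "chunk_no": chunk_no,
            "text": chunk_text,
        },
    )
```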
We are working on some fun 'pure-vector' graph RAG work here to tackle production problems around scale, quality, & always-on scenarios like alerting - happy to chat!
Also working with GRAG (via Neo4j), and I'm somewhat skeptical that, in most cases where a natural hierarchical structure already exists, graph RAG will significantly outperform RAG that simply uses that hierarchical structure.
A better solution I had thought about is "local RAG". I came across this while processing embeddings from chunks parsed out of Azure Document Intelligence JSON. The realization is that relevant topics are often localized within a document. Even across a corpus of documents, relevant passages are localized.
Because the chunks are processed sequentially, one only needs to keep track of the sequence number of each chunk. Assume the query embedding matches chunk n; it follows that the most important context is in the chunks localized at n - m through n + p. So find the top x chunks via hybrid embedding + full-text match and expand outwards from each of those chunks to grab the chunks around it.
While a single chunk may represent just a few sentences of a larger block of text, this strategy will grab possibly the whole section or page of text localized around the chunk with the highest match.
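A rough sketch of the expansion step, assuming chunks were stored with their document id and sequence number at indexing time; `hybrid_search` stands in for whatever embedding + full-text retrieval you already have:

```python
# Sketch of the local expansion. `hybrid_search` returns (doc_id, n, score) hits;
# `chunks_by_doc` maps doc_id -> list of chunk texts in document order.
def local_rag(query, hybrid_search, chunks_by_doc, top_x=5, m=2, p=2):
    hits = hybrid_search(query, k=top_x)

    # Expand each hit n outwards to the window n - m .. n + p.
    windows = {}                                 # doc_id -> set of sequence numbers
    for doc_id, n, _score in hits:
        windows.setdefault(doc_id, set()).update(range(max(0, n - m), n + p + 1))

    # Stitch each window back together in document order, so the model sees
    # the whole local section rather than an isolated few sentences.
    context = []
    for doc_id, seqs in windows.items():
        doc = chunks_by_doc[doc_id]
        context.append("\n".join(doc[i] for i in sorted(seqs) if i < len(doc)))
    return context
```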
This works as long as relevant information is colocated. Sometimes, though, for example in financial documents, important parts reference each other through keywords etc. That's why you can also try to retrieve not only positionally related chunks but semantically related ones as well.
Go for chunks n, n - m, n + p, and n', where n' are the chunks closest to n semantically.
Moreover, you can expose this traversal to your LLM as a tool (or whatever mechanism you prefer) to use whenever it is missing crucial information to answer the question. Thanks to that, you don't always retrieve thousands of tokens when they aren't needed.
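Something like this sketch, where `vector_index.nearest` is a placeholder for your store's nearest-neighbor call; the same function can be registered as the tool the LLM calls when it's missing context:

```python
# Sketch: positional neighbors (n - m .. n + p) plus the semantically closest
# chunks n'. `chunks` is the ordered list of chunk objects for one document.
def expand_chunk(vector_index, chunks, n, m=2, p=2, k_semantic=3):
    positional = list(range(max(0, n - m), min(len(chunks), n + p + 1)))

    # n': nearest neighbors of chunk n in embedding space, minus the
    # positional window we already have.
    candidates = vector_index.nearest(chunks[n].embedding, k=k_semantic + len(positional))
    semantic = [i for i in candidates if i not in positional][:k_semantic]

    return [chunks[i].text for i in positional + semantic]
```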
Would it be better to go all the way and completely rewrite the source material in a way more suitable for retrieval? To some extent these headers are a step in that direction, but you’re still at the mercy of the chunk of text being suitable to answer the question.
Instead, completely transforming the text into a dense set of denormalized “notes” that cover every concept present in the text seems like it would be easier to mine for answers to user questions.
Essentially, it would be like taking comprehensive notes from a book and handing them to a friend who didn’t take the class for a test. What would they need to be effective?
Longer term, the sequence would likely be: get the question, hand it to a research assistant who has full access to the source material and can run a variety of AI / retrieval strategies to customize the notes, and then hand those notes back for answering. By spending more time on the note-gathering step, the LLM will be more likely to be able to answer the question.
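As a hedged sketch of the note-taking pass (the prompt wording and the `llm` / `index_note` callables are placeholders, not any specific library):

```python
# Sketch: rewrite each section into self-contained, denormalized notes and
# index every note with a pointer back to its source section.
NOTE_PROMPT = (
    "Rewrite the following section as a list of short, self-contained notes. "
    "Each note must make sense on its own: repeat names, dates, and definitions "
    "instead of using pronouns or references to other sections.\n\n{section}"
)

def index_notes(llm, index_note, sections):
    for source_id, section in sections:          # sections: iterable of (id, text)
        notes = llm(NOTE_PROMPT.format(section=section)).splitlines()
        for note in (n.strip("- ").strip() for n in notes):
            if note:
                index_note(text=note, source=source_id)   # keep the citation back to the source
```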
For a large corpus, this would be quite expensive in terms of time and storage space. My experience is that embeddings work pretty well at around 144-160 tokens (pure trial and error) with clinical trial protocols. I am certain this value will differ by domain and document type.
If you generate and then "stuff" more text into this, my hunch is that accuracy drops off as the token count increases and it becomes "muddy". GRAG or even normal RAG can solve this to an extent because -- as you propose -- you can generate a congruent "note" and then embed that and link them together.
I'd propose something more flexible: expand on the input query instead, basically multiplexing it into the related topics and ideas, and perform a cheap embedding search using more than one input vector.
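Something like this sketch, with `llm`, `embed`, and `index.search` as placeholders:

```python
# Sketch of the query expansion: multiplex the question into a few related
# sub-queries, embed each one, search with every vector, and merge by best score.
def multi_vector_search(query, llm, embed, index, k=10):
    expansions = llm(
        "List 3 short rephrasings or related sub-questions for: " + query
    ).splitlines()

    best = {}                                    # chunk_id -> best score seen so far
    for q in [query] + [e for e in expansions if e.strip()]:
        for chunk_id, score in index.search(embed(q), k=k):
            best[chunk_id] = max(score, best.get(chunk_id, float("-inf")))

    return sorted(best, key=best.get, reverse=True)[:k]
```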
Contextual chunk headers
The idea here is to add in higher-level context to the chunk by prepending a chunk header. This chunk header could be as simple as just the document title, or it could use a combination of document title, a concise document summary, and the full hierarchy of section and sub-section titles.
That is from the article. Is this different from your suggested approach?
No, but this is also not really a novel solution.