
Comment by CharlieDigital

7 months ago

Also working with GRAG (via Neo4j), and I'm somewhat skeptical that, in most cases where a natural hierarchical structure already exists, a graph will significantly outperform RAG that simply follows that hierarchical structure.

A better solution I had thought about is "local RAG". I came across this while processing embeddings from chunks parsed from Azure Document Intelligence JSON. The realization is that relevant topics are often localized within a document, and even across a corpus of documents, relevant passages are localized.

Because the chunks are processed sequentially, one needs only to keep track of the sequence number of each chunk. If the embedding matches a chunk n, it follows that the most important context is in the chunks localized at n - m and n + p. So find the top x chunks via hybrid embedding + full-text match and expand outwards from each of those chunks to grab the chunks around it.

While a chunk may represent just a few sentences of a larger block of text, this strategy will often grab the whole section or page of text localized around the highest-matching chunk.
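
A minimal sketch of that expansion step, assuming chunks are stored per document in sequence order; the hit format, data structure, and function name here are illustrative, not from any particular library:

```python
# "Local RAG" window expansion: for each top hit, also pull the chunks
# immediately before and after it in the same document.
# Assumes `chunks_by_doc[doc_id]` is a list of chunk dicts ordered by
# sequence number (a hypothetical structure for illustration).

def expand_hits(hits, chunks_by_doc, before=2, after=2):
    expanded, seen = [], set()
    for doc_id, seq in hits:                        # hits: top-x (doc_id, seq) matches
        doc_chunks = chunks_by_doc[doc_id]
        lo = max(0, seq - before)                   # n - m
        hi = min(len(doc_chunks) - 1, seq + after)  # n + p
        for i in range(lo, hi + 1):
            if (doc_id, i) not in seen:
                seen.add((doc_id, i))
                expanded.append(doc_chunks[i])
    return expanded
```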

This works as long as the relevant information is colocated. Sometimes, though, for example in financial documents, important parts reference each other through keywords etc. That's why you can always try to retrieve not only positionally related chunks but also semantically related ones.

Go for chunks n, n - m, n + p, and n', where n' are the chunks semantically closest to n.
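
A rough sketch of picking the n' neighbors, assuming you keep the chunk embeddings in a normalized matrix; the array layout and names are assumptions for illustration:

```python
import numpy as np

def semantic_neighbors(chunk_idx, embeddings, already_retrieved, k=3):
    """Return the k chunks most similar to chunk n (cosine similarity),
    skipping chunks already picked up positionally."""
    # embeddings: (num_chunks, dim) array, assumed L2-normalized
    sims = embeddings @ embeddings[chunk_idx]
    picked = []
    for idx in np.argsort(-sims):
        if idx == chunk_idx or idx in already_retrieved:
            continue
        picked.append(int(idx))
        if len(picked) == k:
            break
    return picked
```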

Moreover, you can expose this traversal to your LLM as a tool (or whatever mechanism you prefer) that it can call whenever it is missing crucial information to answer the question. Thanks to that, you don't retrieve thousands of tokens when they aren't needed.
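
One way such a tool could look, sketched against a generic JSON-schema function-calling interface; the tool name, schema, and store methods are hypothetical:

```python
# Hypothetical tool the LLM can call to expand context on demand.
EXPAND_CONTEXT_TOOL = {
    "name": "expand_context",
    "description": "Fetch extra chunks around an already-retrieved chunk, "
                   "either its neighbors in the document or its nearest "
                   "semantic neighbors.",
    "parameters": {
        "type": "object",
        "properties": {
            "chunk_id": {"type": "string"},
            "mode": {"type": "string", "enum": ["positional", "semantic"]},
            "count": {"type": "integer"},
        },
        "required": ["chunk_id", "mode"],
    },
}

def handle_expand_context(args, store):
    # `store` stands in for your chunk index; both methods are placeholders.
    if args["mode"] == "positional":
        return store.window_around(args["chunk_id"], args.get("count", 4))
    return store.semantic_neighbors(args["chunk_id"], args.get("count", 4))
```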

  •     > positionally related chunks but also semantically related ones
    

    That's why the entry point would still be an embedding search; it's just that instead of using the first 20 embedding hits, you take the first 5, and if the reference is "semantically adjacent" to the entry concept, we would expect some of the first few chunks to capture it in most cases.

    I think where GRAG yields more relevancy is when the referenced content is not semantically similar nor even semantically adjacent to the entry concept, but is semantically similar to some sub-fragment of a matched chunk. Depending on the corpus, this can be either common (I have no familiarity with financial documents) or rare. I've primarily worked with clinical trial protocols, and at least in that space the concepts are what I would consider "snowflake-shaped": they branch out pretty cleanly and rarely cross-reference (because it is more common that the relevant reference is repeated).

    All that said, I think that as a matter of practicality, most teams will probably get a much bigger yield with much less effort by doing local expansion based on semantic similarity matching first, since it addresses two core problems with embeddings (text chunk size vs. embedding accuracy, and the relevancy of embeddings matched below a given threshold). Experiment with GRAG depending on the type of questions you're trying to answer and the nature of the underlying content. Don't get me wrong; I'm not saying GRAG has no benefit, but most teams can explore other ways of using RAG before trying GRAG.

    • Neo4j graph RAG is typically not graph RAG in the AI sense / MSR GraphRAG paper sense, but KG or lexical extraction & embedding, plus some retrieval-time hope that the neighborhood is OK.

      GRAG in the direction of the MSR paper adds some important areas:

      - summary indexes that can be lexical (document hierarchy) or not (topic, patient ID, etc), esp via careful entity extraction & linking

      - domain-optimized summarization templates, both automated & manual

      - + as mentioned, wider context around these at retrieval

      - introducing a more generalized framework for handling different kinds of concept relations, summary indexing, and retrieval around these

      Ex: The same patient over time & docs, and separately, similar kinds of patients across documents
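
      A rough sketch of the non-lexical summary-index idea above, grouping chunks by an extracted entity (e.g. patient ID) and summarizing each group; `extract_entities` and `summarize` are placeholders for your own entity-linking and LLM summarization steps:

      ```python
      from collections import defaultdict

      def build_entity_summary_index(chunks, extract_entities, summarize):
          # Group chunks by extracted entity so retrieval can hit a per-entity
          # summary (spanning documents) instead of only raw chunks.
          by_entity = defaultdict(list)
          for chunk in chunks:
              for entity in extract_entities(chunk["text"]):
                  by_entity[entity].append(chunk)
          return {
              entity: {
                  "summary": summarize([c["text"] for c in group]),
                  "chunk_ids": [c["id"] for c in group],
              }
              for entity, group in by_entity.items()
          }
      ```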

      Note that I'm not actually a big fan of how the MSR paper indirects the work through KG extraction, as that exits the semantic domain, and we don't do it that way.

      Fundamentally, that both moves away from paltry retrieval result sets (small, full of gaps, etc.) and enables cleaner input to the runtime query.

      I agree it is a quick win if quality can be low and you have a low budget / little time: combine a few out-of-the-box index types and do rank retrieval. But a lot of the power gets lost. We are working on infra (+ OSSing it) because that is an unfortunate and unnecessary state of affairs. Right now llamaindex/langchain and raw vector DBs feel like ad hoc and unprincipled ways to build these pipelines from a software engineering and AI perspective, so from an investment side, moving away from hacks and toward more semantic, composable, & scalable pipelines is important IMO.
