Comment by michalwarda
1 year ago
This works as long as relevant information is colocated. Sometimes, though, for example in financial documents, important parts reference each other through keywords and the like. That's why you can also try to retrieve not only positionally related chunks but semantically related ones as well.
Go for chunk n, n - m, n + p, and n', where n' are the chunks semantically closest to n.
Moreover, you can hand this traversal capability to your LLM as a tool (or whatever mechanism fits) to invoke whenever it is missing crucial information to answer the question. Thanks to that, you don't retrieve thousands of tokens when they aren't needed.
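A minimal sketch of that expansion, assuming chunks are stored in document order and a simple cosine helper; the m/p/k parameters and names are illustrative, not any particular library's API:

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        index: int            # position of the chunk within the document
        text: str
        embedding: list[float]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    def expand_hit(chunks: list[Chunk], n: int, m: int = 1, p: int = 1,
                   k_semantic: int = 3) -> list[Chunk]:
        """Return chunk n, positional neighbors n-m..n+p, and the k chunks
        semantically closest to n (the n' above)."""
        lo, hi = max(0, n - m), min(len(chunks), n + p + 1)
        positional = chunks[lo:hi]
        anchor = chunks[n]
        # Rank everything outside the positional window by similarity to n.
        others = [c for c in chunks if c.index < lo or c.index >= hi]
        semantic = sorted(others,
                          key=lambda c: cosine(anchor.embedding, c.embedding),
                          reverse=True)[:k_semantic]
        # De-duplicate while preserving order.
        seen, out = set(), []
        for c in positional + semantic:
            if c.index not in seen:
                seen.add(c.index)
                out.append(c)
        return out

The same function could be registered as a tool so the model pulls in extra context on demand instead of front-loading everything.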
That's why the entry point would still be an embedding search; it's just that instead of using the first 20 embedding hits, you take the first 5, and if the reference is "semantically adjacent" to the entry concept, we'd expect some of those first few chunks to capture it in most cases.
I think where GRAG yields more relevance is when the referenced content is neither semantically similar nor even semantically adjacent to the entry concept, but is semantically similar to some sub-fragment of a matched chunk. Depending on the corpus, this can be common or rare (I have no familiarity with financial documents). I've primarily worked with clinical trial protocols, and at least in that space the concepts are what I would call "snowflake-shaped": they branch out pretty cleanly and rarely cross-reference (because it is more common for the relevant reference to simply be repeated).
All that said, I think that as a matter of practicality, most teams will get a much bigger yield with much less effort by first doing local expansion around chunks matched on semantic similarity, since it addresses two core problems with embeddings (the trade-off between text chunk size and embedding accuracy, and the relevance of embeddings matched below a given threshold). Experiment with GRAG depending on the type of questions you're trying to answer and the nature of the underlying content. Don't get me wrong; I'm not saying GRAG has no benefit, just that most teams can explore other ways of using RAG before trying GRAG.
Neo4j graph RAG is typically not graph RAG in the AI sense / the MSR GraphRAG paper sense, but rather KG or lexical extraction & embedding, plus some retrieval-time hope that the neighborhood turns out OK.
GRAG in the direction of the MSR paper adds some important areas:
- summary indexes that can be lexical (document hierarchy) or not (topic, patient ID, etc.), especially via careful entity extraction & linking; see the sketch after this list
- domain-optimized summarization templates, both automated & manual
- as mentioned, wider context around these at retrieval
- introducing a more generalized framework for handling different kinds of concept relations, summary indexing, and retrieval around these
Ex: the same patient over time & docs, and separately, similar kinds of patients across documents.
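To make the summary-index idea concrete, here's a rough sketch keyed by patient ID; extract_patient_id, summarize, and embed are stand-ins for entity linking, a domain-tuned LLM summarization template, and an embedding call -- assumptions for illustration, not the MSR implementation:

    from collections import defaultdict

    def build_summary_index(chunks, extract_patient_id, summarize, embed):
        # Group chunks by an extracted & linked entity (here: patient ID).
        groups = defaultdict(list)
        for chunk in chunks:
            pid = extract_patient_id(chunk)
            if pid is not None:
                groups[pid].append(chunk)
        # One summary per patient across time & documents, embedded so the
        # summary itself is retrievable alongside raw chunks.
        index = {}
        for pid, group in groups.items():
            summary = summarize(group)  # domain-tuned template applied here
            index[pid] = {"summary": summary,
                          "embedding": embed(summary),
                          "chunks": group}
        return index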
Note that I'm not actually a big fan of how the MSR paper indirects the work through KG extraction, as that exits the semantic domain; we don't do it that way.
Fundamentally, this both moves away from paltry retrieval result sets (small, full of gaps, etc.) and enables cleaner input to the runtime query.
I agree it is a quick win if quality can be low and budget/time is short: combine a few out-of-the-box index types and do rank retrieval. But a lot of the power gets lost. We are working on infra (and OSSing it) because that is an unfortunate and unnecessary state of affairs. Right now, llamaindex/langchain and raw vector DBs feel like ad hoc and unprincipled ways to build these pipelines from a software engineering and AI perspective, so from an investment side, moving away from hacks and toward more semantic, composable, & scalable pipelines is important IMO.
I would mildly disagree with this; Neo4j just serves as an underlying storage mechanism, much like Postgres+pgvector could be the underlying storage mechanism for embedding-only RAG. How one extracts entities and connects them in the graph happens a layer above the storage layer of Neo4j (though Neo4j can also do this internally). Neo4j is not magic; the application layer and data modelling still have to define which entities exist and how they are connected.
But why Neo4j? Neo4j has some nice amenities for building GRAG on top of. In particular, it has packages supporting community partitioning, including Leiden[0] (also used by Microsoft's GraphRAG[1]) and Louvain[2], as well as several other community detection algorithms. The built-in support for node embeddings[3] and external AI APIs[4] makes the DX -- insofar as building the underlying storage for complex retrieval goes -- quite good, IMO.
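For instance, a minimal sketch of running Leiden over a projected chunk graph via the Python driver; the graph name, Chunk label, and RELATED relationship type are assumptions about the data model, and the exact procedure signatures should be checked against the GDS docs linked below:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        # Project the chunk graph into GDS's in-memory catalog.
        session.run("CALL gds.graph.project('chunks', 'Chunk', 'RELATED')")
        # Run Leiden and write each node's community id back to the store.
        session.run("CALL gds.leiden.write('chunks', "
                    "{writeProperty: 'communityId'})")
    driver.close()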
The approach that we are taking is to import a corpus of information into Neo4j and perform ETL on the way in to create additional relationships, effectively connecting individual chunks by some related "facet". Then we plan to run community detection over it to identify communities of interest and use a combination of communities, locality, and embedding match to retrieve chunks.
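As a hedged sketch of what that combined retrieval could look like (this is planned work, not finished code): assuming a Neo4j 5 vector index named chunk_embeddings, a communityId property written by community detection, and a NEXT relationship for positional adjacency, something like:

    from neo4j import GraphDatabase

    RETRIEVE = """
    CALL db.index.vector.queryNodes('chunk_embeddings', $k, $embedding)
    YIELD node AS hit, score
    // Widen to chunks in the same detected community...
    MATCH (peer:Chunk {communityId: hit.communityId})
    // ...and to the hit's immediate positional neighbors.
    OPTIONAL MATCH (hit)-[:NEXT]-(neighbor:Chunk)
    RETURN DISTINCT hit, peer, neighbor, score
    ORDER BY score DESC
    """

    def retrieve(uri, auth, question_embedding, k=5):
        with GraphDatabase.driver(uri, auth=auth) as driver:
            with driver.session() as session:
                result = session.run(RETRIEVE, k=k,
                                     embedding=question_embedding)
                return [record.data() for record in result]

Having the retrieval step in plain Cypher is what makes it easy to experiment with different mixes of community, locality, and similarity.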
I just started exploring it over the past week, and I would say that if your team is going to end up doing more complex GRAG, Neo4j feels like it has the right tooling to be the underlying storage layer; you could even feasibly implement other parts of your workflow in there as well, though entity extraction and such feels like it belongs one layer up, in the application layer. Most notably, having direct query access to the underlying graph with a graph query language (Cypher) means you have more control and more ways to experiment with retrieval. However, as I mentioned, I would encourage most teams to be more clever with embedding RAG before adding more infrastructure like Neo4j.
[0] https://neo4j.com/docs/graph-data-science/current/algorithms...
[1] https://microsoft.github.io/graphrag/
[2] https://neo4j.com/docs/graph-data-science/current/algorithms...
[3] https://neo4j.com/docs/graph-data-science/current/machine-le...
[4] https://neo4j.com/labs/apoc/5/ml/openai/