Comment by lmeyerov

1 year ago

Neo4j graph RAG is typically not graph RAG in the AI sense / MSR GraphRAG paper sense, but KG or lexical extraction & embedding, plus a retrieval-time hope that the neighborhood will be OK

GRAG in the direction of the MSR paper adds some important areas:

- summary indexes that can be lexical (document hierarchy) or not (topic, patient ID, etc), esp via careful entity extraction & linking

- domain-optimized summarization templates, both automated & manual

- plus, as mentioned, wider context around these at retrieval

- introducing a more generalized framework for handling different kinds of concept relations, summary indexing, and retrieval around these

Ex: the same patient over time & docs, and separately, similar kinds of patients across documents (a rough sketch of this kind of facet-grouped summary index is below)
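To make the summary-index idea concrete, here is a minimal sketch of a facet-grouped summary index in Python; `summarize` and `embed` are placeholders for whatever LLM and embedding calls you use, not anything from a specific library:

```python
# Sketch: a facet-grouped summary index, in the spirit of the list above.
# `summarize` and `embed` are placeholders for your LLM / embedding calls.
from collections import defaultdict

def build_summary_index(chunks, facet_key, summarize, embed):
    """Group chunks by a facet (patient ID, topic, doc section, ...),
    summarize each group, and embed the summaries for retrieval."""
    groups = defaultdict(list)
    for chunk in chunks:
        groups[chunk[facet_key]].append(chunk["text"])

    index = []
    for facet_value, texts in groups.items():
        summary = summarize("\n\n".join(texts))   # domain-tuned template goes here
        index.append({
            "facet": facet_value,
            "summary": summary,
            "embedding": embed(summary),
            "chunk_texts": texts,                 # keep members for wider context
        })
    return index

# e.g. the same patient across documents:
#   patient_index = build_summary_index(chunks, "patient_id", summarize, embed)
# and separately, topic-level groupings:
#   topic_index = build_summary_index(chunks, "topic", summarize, embed)
```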

Note that I'm not actually a big fan of how the MSR paper routes the work through KG extraction, as that exits the semantic domain; we don't do it that way

Fundamentally, that both moves away from paltry retrieval result sets (small, full of gaps, etc.) and enables cleaner input to the runtime query

I agree it is a quick win if quality can be low and you have a low budget/time: combine a few out-of-the-box index types and do rank retrieval. But a lot of the power gets lost. We are working on infra (+ OSSing it) because that is an unfortunate and unnecessary state of affairs. Right now llamaindex/langchain and raw vector DBs feel like ad hoc and unprincipled ways to build these pipelines from a software engineering and AI perspective, so from an investment side, moving away from hacks and toward more semantic, composable, & scalable pipelines is important IMO.

    > Neo4j graph rag is typically not graph rag

I would mildly disagree with this; Neo4j just serves as an underlying storage mechanism, much like Postgres+pgvector could be the underlying storage mechanism for embedding-only RAG. How one extracts entities and connects them in the graph happens a layer above the storage layer of Neo4j (though Neo4j can also do this internally). Neo4j is not magic; the application layer and data modelling still have to define which entities exist and how they are connected.
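A minimal sketch of what "a layer above" can look like: the application extracts entities (here via a hypothetical `extract_entities` LLM helper) and Neo4j just stores the result with plain MERGEs. Label and relationship names are made up for illustration:

```python
# Sketch: entity extraction happens in the application layer; Neo4j just
# stores the result. `extract_entities` is a hypothetical LLM-backed helper
# returning e.g. [{"name": "aspirin", "type": "Drug"}, ...].
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def ingest_chunk(tx, chunk_id, text, entities):
    tx.run(
        """
        MERGE (c:Chunk {id: $chunk_id})
        SET c.text = $text
        WITH c
        UNWIND $entities AS e
        MERGE (ent:Entity {name: e.name, type: e.type})
        MERGE (c)-[:MENTIONS]->(ent)
        """,
        chunk_id=chunk_id, text=text, entities=entities,
    )

with driver.session() as session:
    for i, text in enumerate(chunks):   # chunks: your chunked corpus
        session.execute_write(ingest_chunk, i, text, extract_entities(text))
```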

But why Neo4j? Neo4j has some nice amenities for building GRAG on top of. In particular, it has packages supporting community partitioning, including Leiden[0] (also used by Microsoft's GraphRAG[1]) and Louvain[2], as well as several other community detection algorithms. The built-in support for node embeddings[3] and for external AI APIs[4] makes the DX (insofar as building the underlying storage for complex retrieval) quite good, IMO.
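For concreteness, the GDS side looks roughly like this (per the Leiden docs at [0]; the graph name, label, and relationship type here are illustrative, and note that GDS's Leiden wants an undirected projection):

```python
# Sketch: community partitioning via Neo4j GDS (see [0]).
# Graph name, node label, and relationship type are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Project chunks + their facet links into GDS's in-memory graph;
    # Leiden requires an undirected projection.
    session.run("""
        CALL gds.graph.project(
          'chunkGraph',
          'Chunk',
          {RELATES_TO: {orientation: 'UNDIRECTED'}}
        )
    """)
    # Run Leiden and write each node's community id back to the store
    session.run("""
        CALL gds.leiden.write('chunkGraph', {writeProperty: 'communityId'})
    """)
```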

The approach that we are taking is to import a corpus of information into Neo4j and perform ETL on the way in to create additional relationships, effectively connecting individual chunks by some related "facet". Then we plan to run community detection over it to identify communities of interest, and to use a combination of communities, locality, and embedding match to retrieve chunks.
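A rough sketch of that retrieval combination, assuming a Neo4j vector index named 'chunkEmbeddings' over Chunk embeddings and a `communityId` property written by an earlier community-detection pass (both names hypothetical):

```python
# Sketch: retrieve by embedding match, then widen within each hit's community.
# Assumes a vector index 'chunkEmbeddings' over (:Chunk).embedding and a
# 'communityId' property from a prior community-detection pass.
def retrieve(session, query_embedding, k=5, per_community=3):
    records = session.run(
        """
        CALL db.index.vector.queryNodes('chunkEmbeddings', $k, $embedding)
        YIELD node, score
        MATCH (peer:Chunk {communityId: node.communityId})
        WHERE peer <> node
        RETURN node.text AS hit, score,
               collect(peer.text)[..$per_community] AS community_context
        ORDER BY score DESC
        """,
        embedding=query_embedding, k=k, per_community=per_community,
    )
    return [r.data() for r in records]
```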

I just started exploring it over the past week, and I would say that if your team is going to end up doing some more complex GRAG, Neo4j feels like it has the right tooling to be the underlying storage layer; you could even feasibly implement other parts of your workflow in there as well, but entity extraction and such feels like it belongs one layer up in the application layer. Most notably, having direct query access to the underlying graph with a graph query language (Cypher) means you have more control and different ways to experiment with retrieval (one such experiment is sketched below). However, as I mentioned, I would encourage most teams to be more clever with embedding RAG before adding more infrastructure like Neo4j.
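For example, one such experiment: widening a seed hit by shared-entity overlap instead of community membership. Label and relationship names are illustrative, reusing the driver from the sketches above:

```python
# Sketch: an alternate widening strategy that direct Cypher access makes
# cheap to try: rank peers by how many entities they share with the seed.
query = """
MATCH (seed:Chunk {id: $chunk_id})-[:MENTIONS]->(e:Entity)<-[:MENTIONS]-(peer:Chunk)
RETURN peer.text AS text, count(e) AS shared_entities
ORDER BY shared_entities DESC
LIMIT 10
"""

with driver.session() as session:
    # seed_id would come from a prior vector-search hit
    peers = session.run(query, chunk_id=seed_id).data()
```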

[0] https://neo4j.com/docs/graph-data-science/current/algorithms...

[1] https://microsoft.github.io/graphrag/

[2] https://neo4j.com/docs/graph-data-science/current/algorithms...

[3] https://neo4j.com/docs/graph-data-science/current/machine-le...

[4] https://neo4j.com/labs/apoc/5/ml/openai/

  • We generally stick to using neo4j/neptune/etc for more operational OLTP graph queries, basically large-scale managed storage for small neighborhood lookups. As soon as the task becomes a more compute-tier AI workload, like LLM summary indexing of 1M tweets or 10K documents, we prefer GPU-based compute stacks & external APIs with more fidelity: think pipelines combining bulk embeddings, rich enrichment & wrangling, GNNs, community detection, etc. We only dump into DBs at the end (a rough sketch of this pipeline shape is at the end of this comment). Speedups are generally in the 2-100X territory with even cheapo GPUs, so this ends up a big deal for both development + production. Likewise, continuous update flows end up being awkward in DB-centric environments vs full compute-tier ones, even ignoring the GPU aspect.

    Separately, we're still unsure about vector search inside vs outside the graph DB during retrieval, both in the graph RAG scenario and in more general intelligence work domains. I'm more optimistic there about keeping it in the graph DB, especially for the small cases (< 10M nodes+edges) we do in notebooks.

    And agreed, it's unfortunate that Neo4j uses "graph RAG" to market a variety of mostly low-quality solutions and to conflate it with graph DB storage, while the MSR researchers use it for a more specific and (in AI circles) more notable technique that doesn't need a graph DB and, IMO, fundamentally not even a KG. It's especially confusing that both groups are 'winning' on the term... in different circles.
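A rough sketch of the pipeline shape described above: cuDF/cuGraph for the GPU community-detection step, bulk embeddings on GPU, and the DB touched only at the end. Column names, file paths, and the final `write_to_neo4j` helper are all illustrative:

```python
# Sketch: compute-tier pipeline; bulk embed + GPU community detection,
# touching the DB only at the end. Inputs/outputs here are illustrative.
import cudf
import cugraph
from sentence_transformers import SentenceTransformer

# 1) Bulk embeddings on GPU (texts: your chunked corpus)
model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")
embeddings = model.encode(texts, batch_size=512)

# 2) Community detection on GPU; edge list with src/dst chunk ids
edges = cudf.read_parquet("chunk_edges.parquet")
G = cugraph.Graph()
G.from_cudf_edgelist(edges, source="src", destination="dst")
partitions, modularity = cugraph.leiden(G)

# 3) Only now bulk-load results (embeddings + community ids) into the DB
write_to_neo4j(partitions.to_pandas(), embeddings)   # hypothetical loader
```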