Comment by adamgordonbell
3 days ago
So what is the answer to "Who is Scrooge?" and is it different / better than another approach?
(Like putting the whole thing in the context window, for instance?)
Is this approach just for cost savings, or does it help get better answers, and how so?
Could you share a specific example?
Generally speaking, RAG comes into play when it is impractical to use large context windows, for three reasons: (1) accuracy drops as you stuff the context window, (2) context windows currently do not scale past 1M tokens, and (3) even with caching, moving millions of tokens is wasteful and not viable in terms of both cost and latency.
So we should really compare this to other RAG approaches. Compared to vector-database RAG, knowledge graphs have the advantage that they model the connections between data points. This is super important for questions that require reasoning across multiple pieces of information, i.e. multi-hop reasoning.
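To make the multi-hop point concrete, here's a minimal sketch with made-up entities (ServiceX, TeamA, Alice) and networkx standing in for whatever graph store is actually used. The two facts live on separate edges, and answering the question means following both; a flat vector index would have to hope both chunks land in the top-k independently.

```python
# Hypothetical data: "Who manages the team that owns ServiceX?"
# A vector search over isolated chunks can retrieve "ServiceX is owned by TeamA"
# and "Alice manages TeamA" separately, but the graph makes the join explicit.
import networkx as nx

g = nx.DiGraph()
g.add_edge("ServiceX", "TeamA", relation="owned_by")
g.add_edge("TeamA", "Alice", relation="managed_by")

# Two-hop traversal: ServiceX -> owning team -> that team's manager.
team = next(iter(g.successors("ServiceX")))
manager = next(iter(g.successors(team)))
print(manager)  # Alice
```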
Also, the graph construction is essentially an exercise in cleaning data to extract the knowledge. Let me give you a practical example. Let's pretend we're indexing customer tickets to build an AI assistant. If we were to store the ticket data as-is, we would overwhelm the vector database with all the noise coming from the conversational nature of this data. With knowledge graphs, we extract only the relevant entities and relationships and store the distilled knowledge in our graph. At query time, we find the answer over a structured data model that contains only clean information.
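A toy illustration of that distillation step (the ticket text and the extract_triples helper are made up; in practice an LLM or NER pipeline would do the extraction): only the triples get indexed, the conversational filler never enters the store.

```python
# Noisy ticket as it arrives from the customer.
ticket = (
    "Hi team, hope you're well! I'm seeing error E1234 on the billing service "
    "ever since the v2.3 upgrade last Tuesday. Thanks so much, Dana"
)

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    # Placeholder for an LLM-based extractor; hard-coded here for illustration.
    return [
        ("error E1234", "occurs_on", "billing service"),
        ("error E1234", "introduced_by", "v2.3 upgrade"),
    ]

# Only these distilled triples are stored in the graph, not the raw ticket.
graph_triples = extract_triples(ticket)
```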
Makes sense, but can you compare it to RAG then and show how an answer is superior, and what the context contains that makes it superior?
Or how it comes close to large-context quality of answers at lower cost, on some specific examples.
It's helpful when a README contains a demonstration or, as I said above, a specific example.