Comment by EGreg

2 years ago

So, back to the question: how do you know the LLM didn't hallucinate an answer?

What do you think “indexed by an LLM” is?

Perhaps Anthropic, with its 100K window, can actually do it. But most LLMs have such a small context window that it's just a Pinecone vector database indexing something and stuffing it into the prompt at query time. Come on.
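A minimal sketch of the pattern described above: a vector index retrieves the nearest chunks and they get concatenated into the prompt at query time. A toy bag-of-words similarity stands in for a real embedding model, and the function names (`embed`, `retrieve`, `build_prompt`) are illustrative, not Pinecone's actual API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query vector.
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The retrieved chunks are stuffed into the prompt; the LLM itself
    # never "indexed" anything.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Pinecone stores dense vectors for similarity search.",
    "The context window limits how much text fits in a prompt.",
    "Bananas are rich in potassium.",
]
print(build_prompt("what limits the prompt size?", docs))
```

The point being: the "index" lives outside the model, and the model only ever sees whatever few chunks the retriever happened to surface.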