Comment by isoprophlex

2 days ago

> LLMs are naively bad at identifying the correct level of granularity for the decomposed knowledge. One trick we found is to ask the LLM to output a mermaid.js mindmap to hierarchically break down the input into a tree. At the end of that output, ask the LLM to state which level is the appropriate root for a knowledge node.
>
> Then the node is used to generate questions that could be answered from the knowledge contained in this node. We then index the text of these questions and also embed them.

Ha, that's brilliant. Thanks for sharing this!
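
If I'm reading it right, the flow would be roughly the sketch below (Python; `call_llm`, `embed`, and the mindmap parsing are placeholders I made up, not the parent commenter's actual implementation):

```python
# Rough sketch of the pipeline as described above. The LLM call, the embedding
# call, and the parsing of the mindmap output are all placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder: call whatever LLM you use and return its text output."""
    raise NotImplementedError

def embed(text: str) -> list[float]:
    """Placeholder: return an embedding vector for the given text."""
    raise NotImplementedError

MINDMAP_PROMPT = """\
Hierarchically break the following input down into a tree, output as a
mermaid.js mindmap. At the end of your output, state which level of the
tree is the appropriate root for a knowledge node.

Input:
{text}
"""

QUESTIONS_PROMPT = """\
Here is a knowledge node. List questions that could be answered from the
knowledge contained in this node, one question per line.

Node:
{node}
"""

def extract_knowledge_node(mindmap_output: str) -> str:
    """Placeholder: parse the mermaid mindmap and the LLM's stated root level
    into the text of a single knowledge node."""
    raise NotImplementedError

def index_document(text: str) -> list[tuple[str, list[float]]]:
    # 1. Decompose the input into a mindmap and let the LLM pick the root level.
    mindmap_output = call_llm(MINDMAP_PROMPT.format(text=text))

    # 2. Extract the chosen knowledge node from that output.
    node = extract_knowledge_node(mindmap_output)

    # 3. Generate questions answerable from the knowledge in that node.
    questions = call_llm(QUESTIONS_PROMPT.format(node=node)).splitlines()

    # 4. Index the question text and embed each question.
    return [(q.strip(), embed(q.strip())) for q in questions if q.strip()]
```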