Comment by littlestymaar
1 day ago
That's what I did, but the problem is that the LLM has no way of knowing which details in the first chapter are important before it sees their significance in later chapters, so those details usually get discarded by the summarization process.
Which is why you go back and summarize a second time, with the context of the important details discovered in the first pass.
Maybe it would work, as I said, but that wouldn't fit the original challenge description anymore IMO.
Models can do this all by themselves after a single human interaction; I think that fits.
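For concreteness, here is a minimal sketch of the two-pass idea discussed above, in Python. The complete() helper is a hypothetical stand-in for whatever LLM API you use, and the chapter handling and prompt wording are assumptions, not anyone's actual setup from this thread.

    def complete(prompt: str) -> str:
        """Hypothetical LLM call; replace with your provider's API."""
        raise NotImplementedError

    def two_pass_summary(chapters: list[str]) -> list[str]:
        # Pass 1: summarize each chapter in isolation.
        first_pass = [complete(f"Summarize this chapter:\n\n{ch}") for ch in chapters]

        # Ask the model which early details turned out to matter later on.
        key_details = complete(
            "Given these chapter summaries, list details from early chapters "
            "that become important later:\n\n" + "\n\n".join(first_pass)
        )

        # Pass 2: re-summarize each chapter, told which details to preserve.
        return [
            complete(
                "Summarize this chapter. Preserve any of these details "
                f"if they appear:\n{key_details}\n\nChapter:\n\n{ch}"
            )
            for ch in chapters
        ]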