Comment by gk1
7 months ago
92% reduction is amazing. I often write product marketing materials for devtool companies and load llms.txt into whatever AI I’m using to get accurate details and even example code snippets. But that instantly adds 60k+ tokens which, at least in Google AI Studio, annoyingly slows things down. I’ll be trying this.
Edit: After a longer look, this needs more polish. In addition to the key question someone else raised about quality, there are signs of rushed work here. For example, the critical llm_min_guideline.md file, which tells the LLM how to interpret the compressed version, was lazily copy-pasted from an LLM response without even removing the LLM's commentary:
"You are absolutely right! My apologies. I was focused on refining the detail of each section and overlooked that key change in your pipeline: the Glossary (G) section is no longer part of the final file..."
Doesn't exactly instill confidence.
Really nice idea. I hope you keep going with this as it would be a very useful utility.
Oof, you nailed it. Thanks for the sharp eyes on llm_min_guideline.md. That's a clear sign of me pushing this out too quickly to get feedback on the core concept, and I didn't give the supporting docs the attention they deserve. My bad. Cleaning that up, and generally adding more polish, is a top priority. Really appreciate you taking the time to look deeper and for the encouragement to keep going. It's very helpful!
Wait, are you also using an LLM to respond on Hacker News?
haha, is it that obvious? I only let an LLM polish this one. I'm not a native speaker and I was trying to be polite ^-^
Damn... I saw your sentence starting with "Wait," and immediately thought "reasoning LLM?"