Comment by georgeck
14 days ago
I tried summarizing the thread so far (339 comments) with a custom system prompt [0] and a user-prompt that captures the structure (hierarchy and upvotes) of the thread [1].
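To give a rough idea of that structure, the user prompt lays the comments out with their nesting and upvote counts, along the lines of the sketch below (a hypothetical rendering with placeholder authors and numbers; the exact format is in [1]):

  Comment 1 (upvotes: 142) by <author_a>:
      Top-level comment text...
      Reply 1.1 (upvotes: 57) by <author_b>:
          Reply to comment 1...
          Reply 1.1.1 (upvotes: 12) by <author_c>:
              Reply to reply 1.1...
  Comment 2 (upvotes: 98) by <author_d>:
      Another top-level comment...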
This is the output that we got (based on the HN-Companion project) [2]:
Llama 4 Scout - https://gist.github.com/annjose/9303af60a38acd5454732e915e33...
Llama 4 Maverick - https://gist.github.com/annjose/4d8425ea3410adab2de4fe9a5785...
Claude 3.7 - https://gist.github.com/annjose/5f838f5c8d105fbbd815c5359f20...
The summaries from Scout and Maverick both look good (comparable to Claude), and with this structure, Scout seems to follow the prompt slightly better.
In this case, we used the models 'meta-llama/llama-4-maverick' and 'meta-llama/llama-4-scout' from OpenRouter.
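For reference, here is a minimal sketch of such a call against OpenRouter's OpenAI-compatible chat-completions endpoint (assumes Node 18+ with built-in fetch; systemPrompt and structuredComments are placeholders for the prompts from [0] and [1], not the exact code from the script):

  // Minimal sketch: send the system prompt and the structured thread to OpenRouter.
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENROUTER_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'meta-llama/llama-4-scout',              // or 'meta-llama/llama-4-maverick'
      messages: [
        { role: 'system', content: systemPrompt },        // custom system prompt [0]
        { role: 'user', content: structuredComments }     // thread hierarchy + upvotes [1]
      ]
    })
  });
  const data = await response.json();
  console.log(data.choices[0].message.content);           // the generated summary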
--
[0] - https://gist.github.com/annjose/5145ad3b7e2e400162f4fe784a14...
[1] - https://gist.github.com/annjose/d30386aa5ce81c628a88bd86111a...
[2] - https://github.com/levelup-apps/hn-enhancer
Edited to add OpenRouter model details.
This is the script that assembles the structured comments and generates the summary - https://github.com/levelup-apps/hn-enhancer/blob/main/script...
You can run it as: node summarize-comments.js <post_id>
Example: node summarize-comments.js 43597782
The summary will be written to the "output" folder.
You need to set the environment variable (in this case OPENROUTER_API_KEY, since Llama 4 is currently available on OpenRouter).
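For example, from a shell (with a placeholder key):

  export OPENROUTER_API_KEY=<your-openrouter-key>
  node summarize-comments.js 43597782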
As another data point, Maverick has taken the #2 position on LMArena, just behind Gemini 2.5 Pro.