Show HN: ZON Format – 35–60% fewer LLM tokens using zero-overhead notation


Hey HN!

Roni here, from India: ex-Google Summer of Code (GSoC) at the Internet Archive, full-stack dev.

I got frustrated watching JSON bloat my OpenAI/Claude bills by 50%+ on redundant syntax, so I built ZON over a few weekends: Zero-Overhead Notation, which compresses payloads ~50% vs JSON (692 tokens vs 1,300 on the gpt-5-nano benchmarks) while staying 100% human-readable and lossless.
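If you want to sanity-check the savings on your own payloads, here's a rough sketch. It assumes zon-format exports an encode(object) -> string function (the quick start below mentions encode/decode, but I'm guessing the exact signature), and it uses js-tiktoken purely for counting; this is not the benchmark harness behind the numbers above.

    // Illustrative only: `encode` is assumed to take an object and return a ZON string.
    import { encode } from "zon-format";
    import { getEncoding } from "js-tiktoken";

    const payload = {
      users: [
        { id: 1, name: "Ada", role: "admin" },
        { id: 2, name: "Linus", role: "member" },
      ],
    };

    const enc = getEncoding("cl100k_base");
    const jsonTokens = enc.encode(JSON.stringify(payload)).length;
    const zonTokens = enc.encode(encode(payload)).length;
    console.log({ jsonTokens, zonTokens, saved: 1 - zonTokens / jsonTokens });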

Playground -> https://zonformat.org/playground

ROI calculator -> https://zonformat.org/savings

It's a <2 KB TypeScript lib with 100% test coverage. Drop-in for the OpenAI SDK, LangChain JS/TS, Claude, llama.cpp, streaming, and Zod schemas, and it validates LLM outputs at runtime with zero extra cost.
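To give a feel for what "drop-in" could look like, here's a hedged sketch of sending context as ZON in an OpenAI chat call and validating the reply with Zod. The encode/decode names come from the quick start below; the model name, the prompt, and the Invoice schema are made up for illustration, and the real integration API may differ.

    import OpenAI from "openai";
    import { z } from "zod";
    import { encode, decode } from "zon-format"; // assumed exports, per the quick start

    // Hypothetical schema for whatever structured output you expect back.
    const Invoice = z.object({ id: z.string(), total: z.number() });

    const client = new OpenAI();

    // Send context as compact ZON instead of JSON to cut input tokens.
    const order = encode({ items: [{ sku: "A1", qty: 2, price: 9.5 }] });

    const res = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "Answer in ZON only." },
        { role: "user", content: `Turn this order into an invoice:\n${order}` },
      ],
    });

    // Decode the ZON reply, then validate it at runtime with Zod.
    const invoice = Invoice.parse(decode(res.choices[0].message.content ?? ""));
    console.log(invoice);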

Benchmarks -> https://zonformat.org/#benchmarks

Try it: npm i zon-format or uv add zon-format, then encode/decode in under 10 seconds (code in the README). Full site with benchmarks: https://zonformat.org
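On the JS side, a minimal round trip might look like the following; again, the exact encode/decode signatures are my guess from this post, so check the README for the real API.

    import { encode, decode } from "zon-format";

    const data = { city: "Pune", temps: [31, 29, 33] };
    const zon = encode(data);   // compact, human-readable ZON string for your prompt
    const back = decode(zon);   // lossless: should deep-equal the original object
    console.log(zon.length, JSON.stringify(data).length);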

GitHub -> https://github.com/ZON-Format

Harsh feedback on perf, edge cases, or the API is very welcome. If it saves you a coffee's worth of tokens, a star would be awesome.

Let's make LLM prompts efficient again.

A playground for the ZON format is great, but it would be amazing to see a few examples where ZON has already been integrated into an LLM workflow, along with its responses to user queries. It doesn't even need to be a playground (that becomes costly quickly), just some examples so users can see how the black box will behave once ZON is integrated.

Let's make the English language mean something again.

If you're————————going————————to use an LLM, can you at least reformat this post to not sound like @sama?