
Comment by simonw

1 day ago

Don't miss how this works. It's not a server-side application: the code runs entirely in your browser using SQLite compiled to WASM, but rather than fetching the full 22GB database it uses a clever hack that retrieves just the "shards" of the SQLite database needed for the page you are viewing.

I watched it in the browser network panel and saw it fetch:

  https://hackerbook.dosaygo.com/static-shards/shard_1636.sqlite.gz
  https://hackerbook.dosaygo.com/static-shards/shard_1635.sqlite.gz
  https://hackerbook.dosaygo.com/static-shards/shard_1634.sqlite.gz

as I paginated back through previous days.
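The shard-per-page idea can be sketched roughly like this (a hypothetical client, not the actual hackerbook code; only the URL pattern is taken from the filenames above):

```javascript
// Base URL and filename pattern observed in the network panel.
const SHARD_BASE = "https://hackerbook.dosaygo.com/static-shards/";

function shardUrl(shardNo) {
  return `${SHARD_BASE}shard_${shardNo}.sqlite.gz`;
}

// Fetch one gzipped shard and decompress it in the browser; the resulting
// bytes can be handed to a WASM SQLite build (e.g. sql.js) as an in-memory DB.
async function loadShard(shardNo) {
  const res = await fetch(shardUrl(shardNo));
  const stream = res.body.pipeThrough(new DecompressionStream("gzip"));
  return new Uint8Array(await new Response(stream).arrayBuffer());
  // e.g. const db = new SQL.Database(bytes);
}
```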

It's reminiscent of that brilliant sql.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range headers, while this one uses sharded files instead.
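For contrast, the range-request approach fetches byte slices of one big file instead of whole shard files. A minimal sketch (illustrative only; assumes the server honors the `Range` header):

```javascript
// Build the Range header value for the inclusive byte span [start, end].
function rangeHeader(start, end) {
  return `bytes=${start}-${end}`;
}

// Fetch just one slice of a large remote file. A VFS like sql.js-httpvfs
// issues many such requests, one per database page it needs.
async function fetchRange(url, start, end) {
  const res = await fetch(url, { headers: { Range: rangeHeader(start, end) } });
  if (res.status !== 206) throw new Error("server did not honor the Range request");
  return new Uint8Array(await res.arrayBuffer());
}
```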

The interactive SQL query interface at https://hackerbook.dosaygo.com/?view=query asks you to select which shards to run the query against; there are 1,636 in total.

A recent change: I added date spans to the shard checkboxes on the query view, so it's easier to zero in on the dates you want if you have them in mind. Because if your copy isn't local, all those network pulls take a while.

The sequence of shards you saw as you paginated through days is facilitated by the static manifest, which maps HN item ID ranges to shards. Since IDs are increasing and a pretty good proxy for time (an "HN clock"), we can also map the shards we cut up by ID to the time spans their items cover. An in-memory table sorted by time is built from the manifest on load, so we can easily look up which shard we need when you pick a day.
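That lookup can be sketched as a binary search over a time-sorted table (the entry shape and timestamps here are invented for illustration; the real static-manifest format may differ):

```javascript
// Hypothetical manifest: each entry maps a shard to the "effective" unix-time
// span its item-ID range covers, sorted ascending by startTime.
const manifest = [
  { shard: 1634, startTime: 1735171200, endTime: 1735343999 },
  { shard: 1635, startTime: 1735344000, endTime: 1735516799 },
  { shard: 1636, startTime: 1735516800, endTime: 1735689599 },
];

// Binary-search the table for the shard whose span contains timestamp t.
function shardForTime(table, t) {
  let lo = 0, hi = table.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const e = table[mid];
    if (t < e.startTime) hi = mid - 1;
    else if (t > e.endTime) lo = mid + 1;
    else return e.shard;
  }
  return null; // no shard covers this time
}
```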

Funnily enough, this system was thrown off early on by a handful of "ID/timestamp" outliers in the data: items with weird future timestamps (offset by a couple of years), or null timestamps. To cleanse our pure data of this noise and restore proper adjacent-in-time shard cuts, we did a 1st/99th percentile grouping and discarded the outliers, leaving shards with sensible 'effective' time spans.
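The percentile cut might look something like this (an assumed implementation, not the project's actual code):

```javascript
// Given one shard's item timestamps (possibly containing nulls and wild
// outliers), keep only values between the 1st and 99th percentiles and
// report the shard's "effective" time span.
function effectiveSpan(timestamps) {
  const ts = timestamps.filter((t) => t != null).sort((a, b) => a - b);
  const p = (q) => ts[Math.min(ts.length - 1, Math.floor(q * ts.length))];
  const lo = p(0.01), hi = p(0.99);
  const kept = ts.filter((t) => t >= lo && t <= hi);
  return { start: kept[0], end: kept[kept.length - 1] };
}
```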

Sometimes we end up fetching two shards when you enter a new day, because some items' comments exist "cross shard". We needed another index for that; it lives in cross-shard-index.bin, which is just a list of 4-byte item IDs that have children in more than 1 shard (2 bytes). This occurs when people have the self-indulgence to respond to comments a few days after a post has died down ;)

Thankfully HN imposes a 2-week horizon for replies, so there aren't that many cross-shard comments (those living outside the 2-3 day span that most recent shards cover). But I think there are still around 1M or so, IIRC.
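Reading such an index could look like this (the binary layout is assumed for illustration: a flat list of 4-byte little-endian item IDs):

```javascript
// Parse a cross-shard index buffer into a Set of item IDs whose comment
// trees span more than one shard.
function parseCrossShardIndex(buf) {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const ids = new Set();
  for (let off = 0; off + 4 <= view.byteLength; off += 4) {
    ids.add(view.getUint32(off, true)); // true = little-endian
  }
  return ids;
}
```

A set makes the per-day check cheap: after resolving the day's shard, any item ID found in the set triggers a second shard fetch for its stray children.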

A read-only VFS doing this can be really simple, with the right API…

This is my VFS: https://github.com/ncruces/go-sqlite3/blob/main/vfs/readervf...

And using it with range requests: https://pkg.go.dev/github.com/ncruces/go-sqlite3/vfs/readerv...

And having it work with a Zstandard compressed SQLite database, is one library away: https://pkg.go.dev/github.com/SaveTheRbtz/zstd-seekable-form...

Thanks! I'm glad you enjoyed the sausage being made. There's a little easter egg if you click on the compact disc icon.

And I just now added a 'me' view. Enter your username and it will show your comments/posts on any day. So you can scrub back through your 2006 - 2025 retrospective using the calendar buttons.

  • I almost got tricked into trying to figure out what was Easter eggy about August 9 2015 :-) There's a clarifying tooltip on the link, but it is mostly obscured by the image's "Archive" title attribute.

    • Oh, shit, that was the problem! You solved the bug! I was trying to figure out why the right tooltip didn't display. A link wrapped in an image wrapped in an easter egg! Or something. Ha, thank you. Will fix :)

      edit: Fixed! Also I just pushed a new version with a Dec 29th Data Dump, so ... updates - yay!

Is there anything more production-grade built around the same idea of HTTP range requests, like that SQLite thing? This has so much potential.

I am curious why they don't use a single file and HTTP Range Requests instead. PMTiles (a single-file format used to distribute OpenStreetMap tiles) uses that approach.

  • This would be a neat idea to try. Want to add a PR? Bench different "hackends" to see how DuckDB, SQLite shards, or range queries perform?

The GitHub page is no longer available, which is a shame because I'm really interested in how this works.

How was the entirety of HN stored in a single SQLite database? In other words, how was the data acquired? And how does the page load instantly if there's 22GB of data having to be downloaded to the browser?

  • You can see it now; I forgot to make it public.

    1. download_hn.sh - a bash script that queries BigQuery and saves the data to *.json.gz.

    2. etl-hn.js - does the sharding and the ID -> shard map, plus the user-stats shards.

    3. Then either npx serve docs or upload to Cloudflare Pages.

    The ./tools/predeploy-checks.sh script basically runs the entire pipeline. You can run it unattended with AUTO_RUN=true.