Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever

2 days ago (github.com)

Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
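
Under the hood it's just streaming decompression: the dumps are newline-delimited JSON compressed with zstd's long-window mode, so you can iterate them without loading a whole file into memory. A rough sketch of the reading pattern (file name illustrative, not the tool's exact code):

    # Stream a Pushshift-style .zst dump line by line with bounded memory.
    # The large max_window_size is required for the long-window zstd dumps.
    import io
    import json
    import zstandard

    def read_dump(path):
        with open(path, "rb") as fh:
            dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
            text = io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8")
            for line in text:
                yield json.loads(line)  # one post or comment per line

    for post in read_dump("programming_submissions.zst"):
        print(post.get("title"))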

API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
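
Roughly, querying a running instance looks like the following; the endpoint path and parameters here are illustrative, not the exact routes (those are in the repo's API docs):

    # Illustrative only: path and params are examples, not the documented API.
    import requests

    resp = requests.get(
        "http://localhost:8080/api/search",
        params={"q": "printer driver", "subreddit": "techsupport", "limit": 10},
    )
    resp.raise_for_status()
    for post in resp.json().get("results", []):
        print(post["title"])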

Self-hosting options:

- USB drive / local folder (just open the HTML files)
- Home server on your LAN
- Tor hidden service (2 commands, no port forwarding needed; see the sketch below)
- VPS with HTTPS
- GitHub Pages for small archives
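
For the Tor option, the hidden-service half boils down to two torrc lines pointing at whatever local port serves the archive (directory and port here are illustrative):

    HiddenServiceDir /var/lib/tor/redd-archiver/
    HiddenServicePort 80 127.0.0.1:8080

Restart Tor and the .onion address appears in HiddenServiceDir/hostname.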

Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
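
One way to get that flat memory profile out of Postgres is a server-side cursor, which streams rows in batches instead of materializing the whole result set. A minimal sketch, assuming a hypothetical posts table (not necessarily how the tool does it internally):

    # Stream rows with bounded memory via a named (server-side) cursor.
    # "posts" is a hypothetical table name for illustration.
    import psycopg2

    conn = psycopg2.connect("dbname=archive")
    with conn.cursor(name="post_stream") as cur:  # named => server-side
        cur.itersize = 10_000  # rows fetched per round trip
        cur.execute("SELECT id, title FROM posts ORDER BY created_utc")
        for row in cur:
            ...  # render the row into static HTML, etc.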

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify" – it accelerates the boring parts but you still own the architecture.

Live demo: https://online-archives.github.io/redd-archiver-example/

GitHub: https://github.com/19-84/redd-archiver (Public Domain)

Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d...

Cool way to self-host archives.

What I'd really like is a plugin that automatically pulls from archives somewhere and replaces deleted comments and those bot-overwritten comments with the original context.

Reddit is becoming maddening to use because half the old links I click have comments overwritten with garbage out of protest for something. Ironically the original content is available in these archives (which are used for AI training) but now missing for actual users like me just trying to figure out how someone fixed their printer driver 2 years ago.

  • That would only really be ironic if people had overwritten their comments in protest against LLM training, but the main reason, the one behind by far the biggest wave of deletions, was Reddit locking down its API. If the result of the protest is that the site is less useful for you, the user, then it served its purpose: the entire point was an attempt to boycott Reddit, i.e. to get people to stop using it by removing the user contributions that give the site its only value in the first place.

    • > If the result of the protest is that the site is less useful for you, the user, then it served its purpose: the entire point was an attempt to boycott Reddit, i.e. to get people to stop using it by removing the user contributions that give the site its only value in the first place.

      In practice I just give them more page views because I have to view more threads before I find the answer.

      Reddit's DAU numbers have only gone up since the protest.

  • Just offering another perspective because I see those missing comments too. The author decided they didn't want to participate in public discourse anymore and their comment is gone. So be it. I don't search archives or use tools to undermine their effort. I move on to the next thing.

    I read "it's maddening because ... they decided to use their autonomy and..." and I stop there. So be it.

    • People use their autonomy to maddening ends—how does the fact that it is of their own volition offer you any comfort? I ask genuinely. Is it something along the lines of recognizing the things you can't change?

Data is available via torrent in this section: https://github.com/19-84/redd-archiver?tab=readme-ov-file#-g...

I wonder if you could use this to "Seed" a new distributed social media thing and just take over from there.

Sort of like forking a project.

Very cool project! Quick question: is the underlying Pushshift dataset updated with new Reddit data on any regular cadence (daily/weekly/monthly), or is this essentially a fixed historical snapshot up to a certain date? Just want to understand if self-hosters would need to periodically re-download for fresh content or if it's archival-only.

I tried spinning up the local approach with docker compose, but it fails.

There's no `.env.example` file to copy from. And even if you set the env vars manually, the volumes the compose file references don't exist locally.

Seems like this needs more polish.
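
For anyone else hitting this: a hand-written .env with the standard postgres image variables is the obvious starting point. Variable names below are from the official postgres Docker image; whether the compose file expects exactly these is an assumption on my part:

    POSTGRES_USER=archive
    POSTGRES_PASSWORD=changeme
    POSTGRES_DB=archive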

I wonder if this can be hooked up with the now-dead Apollo app in some way, to get back a slice of time that is forever lost now?

If reddit was a squeaky clean place, or if I could pick certain subs, maybe I would be interested, but I really wouldn't want ALL of reddit on my machine even temporarily.

  • The torrent has data for the top 40,000 subs on Reddit. Thanks to watchful1 splitting the data by subreddit, you can download only the subreddits you want from the torrent.

    • I am going to be honest: this looks really cool.

      40,000 subs is a good number, and I hope coverage spreads to even more subreddits.

      Perhaps we can migrate all or much of the data to Lemmy instances as well, and finally get those instances up and running properly.

      Thank you for creating this. It opens up a lot of interesting opportunities.

Opened the live demo, went into the programming subreddit, felt like I was showered with liquid shit. I tend to forget what kind of edgelord hellhole Reddit was (and still is sometimes).

I want to do the same thing for TikTok. I have 5k videos downloaded, starting from the pandemic. I want to find a way to use AI to tag and categorize the videos so I can scroll them locally.

This is a great way to participate in arguments you missed three years ago.

Appreciated.

EDIT: Is there any cheap way to search? I have an MS TechNet archive which is useless without search, so I really want to know a way to have cheap local search w/o grepping everything.

  • redd-archiver uses Postgres full-text search. For static search you could use lunr.js.
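
    The Postgres side is cheap to stand up. A minimal sketch, assuming a hypothetical posts(id, title, body) table rather than redd-archiver's actual schema:

        # Postgres full-text search from Python; schema is hypothetical.
        # A GIN index on to_tsvector('english', body) keeps this fast
        # even on large tables.
        import psycopg2

        conn = psycopg2.connect("dbname=archive")
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, title FROM posts "
                "WHERE to_tsvector('english', body) "
                "      @@ plainto_tsquery('english', %s) LIMIT 20",
                ("printer driver",),
            )
            for post_id, title in cur.fetchall():
                print(post_id, title)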

Did you pay all the people who created its content?

  • Did anyone ever comment on reddit with an expectation of pay?

    It's an open forum - similar to here. Whatever I post is in the public forum, and therefore I expect it to be used / remixed however anyone wants.

  • I have no problem with this being downloaded for personal use; in fact, that's a good thing. But of course we both know it'll be used to train AI.