Comment by cxr

4 years ago

> Most of it had to do with scaling - as all this occurred in the frontend of these p2p sites

You didn't say it, exactly, but just in case anyone who's reading casually misses it: this need not be the case with the static fediverse. I didn't mention JS or any sort of fully in-browser aggregator; nothing here requires client logic running on the "frontend" (i.e. in a Web browser). You'd be free to make whatever client choices are appropriate for you, including managing the whole thing via shell scripts (as many people choose to with their static sites).

I think a static site hosted in git is the sweet spot for the fediverse. It's realistic to expect a static HTTP/1.1 endpoint served over TLS and OpenSSH to have << 1 unauthenticated remotely exploitable bug per year.

This opens up a whole new world of hosting options. Think a Raspberry Pi that automatically backs up to GitHub/GitLab/git, with an ACME / Let's Encrypt client.

Stick it behind the right CDN and it could even host podcasts / video.

Enable automatic OS upgrades and give it a maintenance window, and it's zero maintenance until the $35 hardware fails.

Edit: Or be lazy and stick the static site in S3.

The difference is that dat, fritter, rotonde, and friends are p2p. There's no frontend in the traditional sense, just as there's no backend: it's all static files executed by a process. Imagine the same with .sh files and a shell executing them, all on your machine.

The static fediverse requires an always-on, network-available machine that is your data storage, your interaction with other people, and your data-manipulation center, all in the same place.

  • I don't understand your comment. What are you responding to, and what am I (or someone else) supposed to do with the information that your comment tries to convey?

    • I think rakoo was framing your observation as a "server-side vs client-side" distinction. As you said, in a server-side model you're using a server and you have a lot of freedom in how you manage the system. The p2p model in Beaker put everything in the client (the browser and thus the frontend JS) and you're constrained to that model.

      The nice thing about client-driven p2p is that the local-first model mirrors the intuition of using a desktop app, kind of like editing files in vim. Rakoo is overselling it a bit though, because the p2p network we used doesn't guarantee uptime unless you keep your device on. You might still want a caching supernode in the system. P2P also introduces coordination challenges with multiple devices, though the hypercore protocol folks are developing some answers to that now.

      The idea you're discussing of dumb files on an HTTP server can work. There's a shocking amount you can accomplish if you just add the ability to enumerate files (aka "list files in folder"), though solving that with generated index files as in RSS is also fine, and a bit more flexible to boot.

      If your goal is to keep the server "dumb" then you're probably looking at a pull-based architecture -- again like RSS -- which reduces the amount of coordination between servers. The tradeoff is that you sacrifice discovery, because you don't receive information you're not pulling, so a @mention or reply or subscribe from a random stranger won't reach you. That might be a feature more than a bug for some. The solution is either to add push or to do some network-crawling. Secure Scuttlebutt does the latter along N expansions of the FoaF graph, which has a nice web-of-trust concept embedded in it. You could also go "full google" and run a service that crawls the entire network, then serve the crawled output to users, at which point you're pretty much at Twitter-levels of connectivity with a pull-based network.
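      The Scuttlebutt-style crawl is essentially a bounded breadth-first walk over follow lists. A minimal sketch, assuming each peer's follow list has already been pulled into a dict (the peer names and the FOLLOWS table here are made up for illustration):

```python
from collections import deque

# Hypothetical stand-in for "pull this peer's follow list off their server".
FOLLOWS = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["alice"],
    "dave": ["eve"],
    "eve": [],
}

def crawl(start, hops):
    """Collect every peer reachable within `hops` expansions of the FoaF graph."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        peer, depth = frontier.popleft()
        if depth == hops:
            continue  # don't expand beyond the Nth hop
        for friend in FOLLOWS.get(peer, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return seen
```

      With one hop you see only who you follow; each extra hop admits the next ring of friends-of-friends, which is where the web-of-trust flavor comes from.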

      The one other observation I'd make is that dumb file servers are most limited by the kinds of queries they can satisfy over the network. You can solve that either by crawling a site into your local index before attempting queries, or by trying to produce index-files on each server (which, again, is basically what RSS is). If you do the latter, I'd look into a file format in which range queries could be used so that a query can fetch a subset; perhaps using a fixed-length header and/or fixed-length records.
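      To make the fixed-length-record idea concrete: the client can compute the exact byte offsets it wants and ask for them with an ordinary HTTP Range header, so the server stays dumb. A minimal sketch, with a made-up 64-byte record size and hypothetical file names; the server's 206 response is simulated with a byte-string slice:

```python
RECORD_SIZE = 64  # fixed-length records make byte offsets computable client-side

def pack(entries):
    """Pack entries into a flat index file of NUL-padded fixed-length records."""
    return b"".join(e.encode().ljust(RECORD_SIZE, b"\0") for e in entries)

def byte_range(first, count):
    """HTTP Range header value covering records [first, first + count)."""
    start = first * RECORD_SIZE
    return f"bytes={start}-{start + count * RECORD_SIZE - 1}"

def unpack(blob):
    """Split a fetched byte range back into entries."""
    return [blob[i:i + RECORD_SIZE].rstrip(b"\0").decode()
            for i in range(0, len(blob), RECORD_SIZE)]

index = pack(["post-001.html", "post-002.html", "post-003.html"])

# A client that wants only the third record sends:
#   GET /index.bin  with header  Range: bytes=128-191   (byte_range(2, 1))
# and the server's 206 Partial Content body is just this slice:
start = 2 * RECORD_SIZE
assert unpack(index[start:start + RECORD_SIZE]) == ["post-003.html"]
```

      A fixed-length header at the front of the file could carry the record count and size, so a client can learn the layout from a single small range request before fetching anything else.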
