A simple web we own

21 hours ago (rsdoiel.github.io)

As someone roughly in the same age group as the author - someone who ran a BBS and witnessed the rise of IPv4 networks, HTTP, Mosaic, etc. - let me offer a counter-point.

The democratization ends at your router. Unless you are willing to lay down your own wires - which for legal reasons you most likely won't be able to do - we will hopelessly be dependent on the ISP. (Radio on free frequencies is possible and there are valiant attempts, but they will ultimately remain niche and have severe bandwidth limitations.)

For decades ISPs have throttled upload speeds: they don't want you to run services over their lines. When DSL was around in Germany (I guess it still is), there was a mandatory 24-hour disconnect. ISPs control what you can see and how fast you can see it. They should be subject to heavy regulation to ensure a free internet.

The large networks, trans-atlantic, trans-pacific cables, all that stuff is beyond the control of individuals and even countries. If they don't like your HTTP(S) traffic, the rest of the world won't see it.

So what you can own is your local network, using hardware that is free of back-doors and remote control. There's no guarantee of that: if you are being targeted, even the Raspberry Pi you just ordered might be compromised. We should demand from our legislators that hardware like this is free of back-doors.

As to content creation: there are so, so many tools available that allow non-technical users to write and publish. There's no crisis here other than picking the best tool for the job.

In short: there's no hope of getting a world-wide, free, uncensored, unlimited IPv4/v6 network back. We never had it in the first place.

  • > In short: there's no hope of getting a world-wide, free, uncensored, unlimited IPv4/v6 network back. We never had it in the first place.

    We can build such a society. I am not sure why you think this is never possible.

    People can work for a better world. That sometimes works, too.

    • > We can build such a society. I am not sure why you think this is never possible.

      Maybe we can, but it is A) a far bigger, older, and more difficult problem than how to structure a computer network, and B) fundamentally not solvable through technological means.

      No matter how much technologists love the idea of technology as a liberating force, our worst instincts and dynamics always reassert themselves and soon figure out how to use that same technology to destroy liberty.

    • > We can build such a society. I am not sure why you think this is never possible.

      Where does such informed political and economic interest and power exist? With whom do we construct such society? Do they have the power and will to fight for it?

      Normies live by normie standards and, with increasing social media exposure, with more and more emotionally manipulated, animal-like world views. They are either ignorant or ambivalent.

      Will tech people gather on a piece of land and declare independence? Most of my tech-worker colleagues are also quite pro social media, and they heavily use it to boost their apparent social status. We cannot even trust our own kind.

      Similar examples of new technology being used to motivate and mobilize the masses have always ended in devastating wars and genocides. Previously, the speed at which information propagated gave statespeople like FDR an advantage in putting an end to rising racism/Nazism/violent tendencies (though not everywhere - left to its own devices, new technology is almost perfect for constructing dictatorships). Now everybody has equal access to misinformation.

    • Even in our fabulously wealthy societies, people are mostly worried about paying their bills, taking care of their families, and putting food on the table - not about getting together in a quixotic enterprise and paying for thousands of kilometers of communal fiber. And, as in most communist utopias, what would probably happen is that the control infrastructure gets captured by special-interest groups: now you've traded one evil for another, but in addition you're left holding the capex bag and you're poorer for it.

    • The technology is the easy part; the people are the hard part. The reality is that we simply don't have thought leaders in charge anymore. There are no innovations coming to correct course, and few channels, if any, even exist anymore for good ideas to flow upwards and result in proper solutions that preserve, protect, and harden what we want the web to be.

      I think a lot of the bright minds who could be working on these things understand the dynamics at play, even if they've never stopped to think about them. Subconsciously they are aware that becoming the person who tries to steer such a big ship would require a monumental exertion that is maybe not worth it anymore. Great leaders never actively seek out leadership positions; similarly, I don't think the people who could be good decision-makers and help these ideals come to fruition actively seek out such positions. The mental tax of getting there is probably enormous, and it is not an economic win for anyone to take up the mantle of steering a ship this size - it is a massive sacrifice.

      People who would be fit for the task probably just want to sign off at the end of the day, have a good life, and be a benefit to their communities. In some ways, perhaps, that makes them unfit for carrying this torch. Or perhaps there are simply too few people adequately qualified to carry it: we are in dire need of competent people at the helm on many fronts, and we don't have them. Those are the real-life variables at play right now.

      We plebs are just driftwood floating on massive waves of nation-state decision-making. I don't doubt there are people working at ISPs who are depressed at the state of things - depressed that they're not allowed to take action on certain things, depressed that they see first-hand what kind of control mechanisms they're forced to implement or disallowed from implementing. It's got to be a trove of BS in an age of misinformation, which has always been an information-systems problem that humanity has implemented, checks notes, zero solutions for. And at the end of the day they, probably like all of us, just want to live a good and meaningful life.

      That's not to say we should just give up on ideals, but we should acknowledge that ideals are not enough on their own. Have some real conversations about what it would actually take to embed these fundamentals into a society, and get comfortable with the uncomfortable realities. So much work needs to be done before new ideals can even be shared: outreach alone is a massive uphill battle at this point, given conglomerate control of broadcast media and concentrated ownership of social media apps.

      A lot of these particular ideals also require a decent understanding of technology, which most people don't have, making them an incredibly unlikely basis for a society. So the circus trick is: how do you make this a digestible topic that touches the souls of many and galvanizes them to take the right stance, so that these things become embodied in the ideals a society values, and legislators and the other proxies tasked with decision-making give them the resourcing and policy attention they deserve? That's the mega-hard part, compounded by the fact that most households never have these discussions reach their TV or computer screens.

      Hacker News types like to call these people "normies" and pin the blame on them, but they can't seem to grasp that not everyone could or should have a deep comp-sci background. We should be coexisting with people of a variety of backgrounds, treating their "normie"-ness as something to account for, not blame. It would be absurd for a "normie" to expect us to be exceptional at rebuilding car engines or any other broad body of knowledge we haven't committed our own lives and spare time to.

      So that leaves the other route, the renegade fine-we'll-do-it-ourselves approach. It can succeed, but it has its own set of challenges. Fronting infrastructure is expensive, so donors are needed, sometimes on vast scales; to another commenter's point, none of us on the renegade front are laying undersea cables any time soon - those are multi-billion-dollar projects to cross the Pacific. Often these underground efforts fail in their infancy simply because the UX flat-out sucks, and we're up against entities who can giga-scale their infrastructure and resources and capitalize on making whatever app fast and pleasant for users. It feels like we're drowning against titans sometimes; it's overwhelming.

    • > People can work for a better world. That sometimes works, too.

      Not when people make arguments based on dreams, hope, and optimism.

      If somebody tells me that we can build a shed, I want them to talk about wood, nails and concrete, or to stop talking.

    • The assertion that an uncensored internet is a better world should probably require some motivation.

      If everyone was a normal (as far as anyone is normal) law abiding citizen perhaps I would agree, but sadly that is far from the case. I think history has quite clearly shown that there is a minority of people out there that will take advantage and ruin things for everyone else. It's the same reason we have militaries, police forces, government checks and balances, etc. The internet is no exception to this.

      I don't think the world is simple enough where anyone could be absolutist about freedom, it's all grey areas and complicated lines drawn.

  • I think some of those barriers are going away (in the UK it's now possible to get symmetric full fibre at a reasonable price): static IPs, ISPs without filtering, etc.

    I think the main barrier is still the complexity of running your own service - it's a full time job to keep on top of the bad actors.

    For example, if you have your own domain it's perfectly possible to run your own email server - however it's quite a lot of ongoing effort - it's not just set up and forget.

    • > however it's quite a lot of ongoing effort - it's not just set up and forget.

      I have seen these kinds of opinions on the internet a few times already. No, it is not that complicated. Yes, you need to buy a server. Yes, you need to set up the DNS. Yes, you need to maintain and update the server and its software. But that is true of everything you self-host.

      Besides that, it's mostly one-time operations:

      - set up domain entries
      - set up SPF
      - set up DKIM
      - set up certs
      - install the server (of course)
      - test that it works
      - set up a Google Postmaster account, because they do not like new domains sending them email
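      Roughly, the DNS side of that list looks like the following illustrative zone-file records (example.com, the host IP, the DKIM selector s1, and the truncated key are all placeholders, not anything from this comment):

      ```
      ; the mail host and MX record
      mail.example.com.           A      203.0.113.10
      example.com.                MX 10  mail.example.com.
      ; SPF: only hosts in the MX record may send for this domain
      example.com.                TXT    "v=spf1 mx -all"
      ; DKIM public key, published under <selector>._domainkey
      s1._domainkey.example.com.  TXT    "v=DKIM1; k=rsa; p=MIIB..."
      ; DMARC policy; helps acceptance at Google/Microsoft
      _dmarc.example.com.         TXT    "v=DMARC1; p=quarantine"
      ```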

      I do not remember anything else besides some administrative tweaks here and there. But!

      I never attempted to run the postfix/dovecot combo myself. I was aiming to run the whole thing in Docker and forget about configuring dozens of config files on the Linux host. With Docker you can just migrate the whole set of volumes to a new machine and that is it. I am running Mailcow, BTW.

      Lately I moved the whole thing to a new machine by just running one script: https://docs.mailcow.email/backup_restore/b_n_r-coldstandby/...

      On the other hand, you do need some technical knowledge - but I do not think this is harder than any other containerized software.

  • There is hope. It's not only possible, but it is likely and is already underway. DNS can be replaced. IPv4/v6 can be replaced. We just need to build a layer on top that ignores the access controls of the underlying layer (the Internet), then replace the underlying layer altogether with new infrastructure.

    There will be more efforts like this: https://yggdrasil-network.github.io

    Those who want control over other people's mouths and eyes and ears, and rely on it to maintain their undeserved authority and prosperity are going to have a bad time.

    • Step one for a protocol that depends on mass adoption to be useful: choose an unpronounceable name

  • Yeah, it's kinda sad reality and I suddenly felt gloomy. Do you have a more optimistic view that you can share?

    • Let me introduce you to the decentralized alternative to ISPs: connecting and collaborating through the new-ish wireless mesh networks that are still active and maintained. The three biggest, AFAIK, are Freifunk (Germany), Guifi (Spain) and NYCMesh (NYC/US).

      Basically, you can as a private individual set up a wireless node, talk with your nearest node that you have a visual line of sight to, and get connected to a completely separate network from the internet, where there is a ton of interesting stuff going on, and it's mostly volunteer run.

  • > We should demand from our legislators that hardware like this is free of back-doors

    In some countries that may be possible (if only for now), but where chips are produced makes it an impossibility for most. You can have certain guarantees if you run the chip fab; downstream of that, it is a tall order to guarantee your chips are sovereign. So while I like the sentiment that you have some sort of control behind your router, I'm really unsure how true that is, given the complexity of producing modern-day chips. Disclaimer: not an expert, just an opinion.

    • I would even say unless you truly have full custody of the transportation of components as well, that is unlikely. Israel’s pager bombs in Lebanon were supplied via a third party, not the manufacturer.

  • Instead of thinking about cables, have we considered the idea of satellites getting cheap enough to launch?

    • Or use WiFi and decentralized networks - Freifunk, Guifi and NYCMesh have already demonstrated it's possible. With consumer hardware you can easily set up 1 km links with directional antennas, so as long as you have line of sight to an already-connected node, you too can participate and help build another network that is separate from the internet :)

    • There is already so much “space stuff” up there that launching spacecraft is increasingly difficult.

      The next comment will be “but they can have short orbits”, but that glosses over the fact that they can still collide with other objects - and if it's so cheap, we will launch thousands of them for bandwidth.

      As always: technical solutions to political problems are a band-aid and make everything worse. Let's beat our politicians to death (metaphorically) instead.

  • > there's no hope of getting a world-wide, free, uncensored, unlimited IPv4/v6 network back.

    What do you mean "back"? It was never free, as in zero-cost. It was also not very unlimited: I remember times when I had to pay not only for modem time online but also for the kilobytes transferred. Uncensored, yes - because basically nobody cared, and the number of users was relatively minuscule.

    The utopia was never in the past, and it remains in the future. I still think that staying irrelevant for large crowds and big money is key.

  • > The large networks, trans-atlantic, trans-pacific cables, all that stuff is beyond the control of individuals and even countries. If they don't like your HTTP(S) traffic, the rest of the world won't see it.

    Not really having a plan here - so, if nothing else, this is out of curiosity - but I'd like to know who actually owns that stuff.

    For something that seems so ubiquitous and familiar like the internet, it would probably be good to understand who owns most of its infrastructure.

    • Most of it is owned by big telcos, the former national monopolies: Deutsche Telekom from Germany, NTT from Japan, AT&T, Level 3 and Lucent from the US, Vodafone from the UK, plus some private lines for Big Tech. There are also lots of privately owned companies that connect all sorts of big and small companies' infrastructure (cables and routers) together at Internet Exchange Points all over the world. Some of those are again owned by big telcos, some are independent private companies, some are government-owned - or any combination of the above.

    • There is a transpacific cable landing in my town. Large unmarked building, seems lightly staffed judging by the parking lot.

      It’s Verizon.

  • >The democratization ends at your router. Unless you are willing to lay down your own wires - which for legal reasons you most likely won't be able to do, we will hopelessly be dependent on the ISP. (Radio on free frequencies is possible and there are valiant attempts, they will ultimately remain niche and have severe bandwidth limitations)

    I don't know - the rate of adoption of MeshCore and similar technologies is quite astonishing.

  • >The democratization ends at your router. Unless you are willing to lay down your own wires - which for legal reasons you most likely won't be able to do, we will hopelessly be dependent on the ISP.

    We can have other protocols on top of TCP/IP and build a new Internet over the existing one, much like TOR/I2P/Hyphanet/Lokinet but without many of the disadvantages of those.

  • To be fair, with fibre to the home rolling out in more and more places, upstream speeds are improving.

    • Upstream bandwidth has certainly improved, but ISPs are still hostile to self-hosting: blocking ports, resetting connections every x days, and not providing a static IPv4 address for a reasonable charge.

  • > there's no hope of getting a world-wide, free, uncensored, unlimited IPv4/v6 network back. We never had it in the first place.

    I'd settle for a maximally private, totally uncensored IPv4 like there used to be. Broadband turned out to be over-rated in some ways.

    One of the good things about dial-up was that it was built on a peer-to-peer network that "everybody" already had: their land-line telephone service.

    Way before actual "networking", anybody with a modem could connect privately with anybody else who had one.

    An ISP could be formed by taking incoming calls from all active digital users simultaneously; that was where the networking was done, plus the connection to other networks around the world.

    You could still contact any one computer user privately if you wanted to, without going through an ISP, just like it was before the web.

    You could also connect one network with another distant one, such as one office building to another, without an ISP.

    If anybody wanted to form their own working ISP, they could do it privately anytime as an interested group and not even tell anybody about it if they didn't want to. It might not be a commercial ISP but there was no mainstream to begin with where it was assumed that an ISP must be commercial or make any money at all.

    These connections were intended to be "totally" private by law. It was well established that a court order was required for a wiretap, and the penalty for violation was based on the idea that spying on Americans was one of the worst crimes - severe enough to deter anyone who would compromise the privacy and freedom America cherished so deeply, and to preserve the citizen rights the country was chartered to uphold, no differently than before the telephone was invented.

    There's nothing like this any more. Land-line copper is in miserable disuse, so the only remaining wire, if any, is TV cable. But the only way to do peer-to-peer contact over cable is through an ISP. How private is that, and why is no court order necessary before privacy can be compromised and very select Americans subjected to espionage?

    Cell phones won't help you now, they can be tapped without wires.

    The options are far fewer than the possibilities offered when dial-up first got popular.

  • > The democratization ends at your router.

    Mostly because we have allowed the ISPs to collapse into monopolies.

    People have forgotten that the US used to have competition in broadband back at the point where the Internet was rolling out to everybody.

  • This is just the "No True Scotsman" fallacy (with a dash of "appeal to tradition"). Yes, the internet has never been perfect, but it was really good for most users for a long time and has only lost freedom for the majority of users recently, with the rise of coordinated multi-state online censorship. Yes, there have been problems in the past, but if you can't compare now to then and see that things have radically shifted, I don't know what else to say to convince you.

  • There's also no hope of creating a web that is resistant to enshittification and power consolidation as long as it can technically support any form of economic transaction.

> I publish this site via GitHub Pages service for public Internet access

A whole post about not needing big corporations to publish things online, and then they use Microsoft to publish this thing online...

  • I think the point the author is trying to make is more about these mini networks on their own LAN, which their family uses (and maybe dreaming of a neighbourhood utility LAN as a middle ground between the LAN in your house and the WAN as just a trunk to a big ISP node). The full quote is:

        - A Raspberry Pi 3B+ with a 3 gigabyte hard drive setup as a "server" (makes this site available on my home network[9])
        - I publish this site via GitHub Pages service for public Internet access (I have the least expensive subscription for this)
        ...
        [9] I can view my personal web on my home network from my phone, tablet and computers. So can the rest of my family.

    • The equivalent self contained home server exists today in the homelab community, either with Mac Minis or NAS systems running Unraid or TrueNAS with community apps. Add in Tailscale on top for remote access.

      What’s needed is a lot of work on the software front to make it much easier, with interoperable standards: self-hosted WYSIWYG options as easy to use as the social media tools for photos, writing and social posts; the ability to run distributed chatroom-style instances with tracker-like discoverability to replace Discord; and built-in backup options with easy off-site replication.

  • There's a whole meme subgenre dedicated to this type of argument. Search for "Yet you participate in society, curious!"

    • Except there are a ton of alternatives to hosting on Github that are not owned by one of the large companies he's railing against.

  • Yes, but that person owns their website, its content, and the address it lives at. They can publish anything they want, in any format they want.

    Hosting on GitHub is merely a convenience; they can up and leave anytime.

  • Yes agreed.

    It is possible to do what he describes. I made fx [1] for exactly the author's purpose: WordPress, but written in Rust for efficiency, with loads of unneeded features omitted. Publishing is not via a static site generator, because the time between an edit and seeing the result was too long for me. It does use Markdown for posts and has built-in backup-to-Git functionality. I'm using it for my blog and like it a lot, since I can quickly use it to jot down notes [2].

    [1]: https://github.com/rikhuijzer/fx

    [2]: https://huijzer.xyz/

  • But GitHub's publishing is at least _highly_ portable. You could move the same web site to a vanilla nginx setup on some random domain.

    The nice part here is that you can update your site via git version control, and have easy rollback etc... assuming you can deal with git.

    I don't think it's such a problem to have big corporations involved in your publishing efforts; the problem is when they lock users in with proprietary technology, and create barriers to entry like high costs and technical complexity.

    I don't have a need to mix anticapitalist fervor with the desire for an easy way to make durable, portable web sites. The internet has always involved paying some kind of piper. Corporations are too big & monopolization is a problem, but one thing at a time...

    • You can publish anywhere with git; it doesn't even need to be GitHub or a Git host - it can be a shared hosting account or your own server. There is literally no value-add that GitHub offers in this regard other than visibility and implied cred with the tech community. No big corporations need to be involved.
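      As a hedged sketch of that: a bare repository on any machine you control, plus a post-receive hook that deploys pushed content into a web root, covers most of what a Pages-style host does. Every path and name below is illustrative:

      ```shell
      # Self-hosted publishing over plain git: push to a bare repo,
      # and a post-receive hook checks the content out into a web root.
      set -e
      BASE=$(mktemp -d)
      WEBROOT="$BASE/webroot"
      mkdir -p "$WEBROOT"

      # 1. Bare repository playing the role of the remote server
      git init --quiet --bare "$BASE/site.git"

      # 2. Hook: every push deploys the latest commit into the web root
      HOOK="$BASE/site.git/hooks/post-receive"
      printf '#!/bin/sh\ngit --work-tree="%s" --git-dir="%s" checkout -f\n' \
          "$WEBROOT" "$BASE/site.git" > "$HOOK"
      chmod +x "$HOOK"

      # 3. Local working copy: write a page and publish it with git push
      git clone --quiet "$BASE/site.git" "$BASE/work" 2>/dev/null
      cd "$BASE/work"
      git config user.email "you@example.com"
      git config user.name "you"
      echo '<h1>hello</h1>' > index.html
      git add index.html
      git commit --quiet -m 'first page'
      git push --quiet origin HEAD

      cat "$WEBROOT/index.html"   # prints: <h1>hello</h1>
      ```

      The same hook works unchanged on a shared hosting account with ssh access; the remote is then just an ssh URL to the bare repo.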

  • For reals. I love the general premise behind the article, but to me how you publish it, and how others access it, is the sauce. Creating static sites is hardly the problem.

I remember a web when practically every ISP allowed you to have a "home page" hosted with them. Your home page was situated in the "public_html" directory of your home directory on their server — hence the name.

Then the URL was http://www.<hostname.domain>/~<username>

I haven't seen a URL with a tilde ('~') in it in a long time.

Why did ISPs stop with this service? Was it to curb illegal file sharing?

  • I think Apache sets this up by default, or used to. Every user on Linux would get a www (or maybe it was htdocs) folder in their home directory when they were added to the system. Any file you put there was served by Apache at /~<username>, read from /home/<username>/www on the file system.

    There used to be lots and lots of ISPs, so they were small enough to have a single webserver with all their customers set up as users and Apache serving the content. They'd also set up FTP on the same server so you could get your HTML files into your www folder. Software like Dreamweaver had an FTP client built in, so you'd click a "publish" button and it would log in over FTP and transfer your files.

    I would imagine this went away because it got expensive as the customer base grew and ISPs consolidated, and it made no money. Other options with PHP, MySQL and other services cropped up and could offer more and charge for it, so I think ISPs preferred to concentrate on network access rather than hosting websites.

    • Apache doesn't have it on by default, but it's easy to turn on. The module is mod_userdir (the UserDir directive). By default it serves the ~/public_html directory, so anyone with /home/<name>/public_html ends up at site.url/~<name>/.

      It is also possible to add .htaccess and other things there, like a username/password challenge (WWW-Authenticate), on a per-user basis.

      Universities mostly had hosting set up the same way. ISPs would also offer something similar for an additional fee on top of your internet subscription; they mostly provided FTP to upload files. Nowadays anyone attempting this would use SFTP rather than FTP.
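      A minimal sketch of that configuration, assuming a stock Apache with the userdir module available (the module path and enabling command vary by distribution):

      ```apache
      # Load the per-user-directory module (Debian/Ubuntu: a2enmod userdir)
      LoadModule userdir_module modules/mod_userdir.so

      # Serve http://host/~alice/ from /home/alice/public_html
      UserDir public_html

      <Directory "/home/*/public_html">
          # Let users add their own .htaccess auth challenges
          AllowOverride AuthConfig
          Require all granted
      </Directory>
      ```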

  • Nowadays you could use Vercel, Render, Netlify, or Cloudflare for free. Most even have a drag and drop interface.

  • I think it had more to do with consolidation of the ISP space.

    I used to have my choice of dozens of ISPs. Now, if I'm lucky, I might have 2 or 3 from very large companies that did the math on keeping that going. It mostly happened when ADSL and cable took over; in most areas that meant only 2 or 3 companies could actually provide anything at the speeds their customers wanted. I think at the time they always said it was cost cutting.

  • Likely demand dropped, and when the infrastructure hosting it needed replacement, it simply never got replaced.

  • The ironic thing is that each subscriber now has a dedicated computer many times more powerful than what ISPs had back in the day for hosting ALL those websites, sitting in the most privileged part of their network, online 24 hours a day and begging to be used for small hosting tasks like this: their ISP-provided router. It even serves its configuration panel through HTML and a webserver, for crying out loud!

    Unfortunately, the reality is that those are closed systems with historically abhorrent security, and ISPs usually forbid the user from providing their own choice of router.

My concern is that when we succeed in making "our own web" popular, the Big Cos will lobby for legislation that puts a burden on all operators - unreasonably costly for small operators to meet, but easy for themselves. Probably under the disguise of "think of the children". Besides technology, we'd need a strong organization advocating for "our own web".

I think what the article proposes is the wrong direction:

Instead of having every single person maintain a vertical slice through the whole stack, we should make it easier to publish content in the first place.

The real issue is that this pushes the burden of maintenance and infrastructure onto the individual, but it should be a shared responsibility.

Instead we need:

- A federated content/file system
- An open standard for viewer/app definitions (hosted on this system)

  • E.g. content can live in a Git repo, and the site can be built via a static site generator, with CI updating the site automatically when new content is added to the repo. We only need a good blog theme, a photo-album theme and a profile theme to replace FB, LinkedIn, even TikTok. Publishers keep full control of their own data.
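    As a hedged sketch of that pipeline - here as a GitHub Actions workflow for Pages, though any CI system works the same way; the `./build.sh` generator step and the branch name are placeholders:

    ```yaml
    name: publish
    on:
      push:
        branches: [main]
    permissions:
      pages: write
      id-token: write
    jobs:
      publish:
        runs-on: ubuntu-latest
        environment:
          name: github-pages
        steps:
          - uses: actions/checkout@v4
          # Placeholder build step: swap in Hugo, Eleventy, Zola, ...
          - run: ./build.sh          # reads content/, writes _site/
          - uses: actions/upload-pages-artifact@v3
            with:
              path: _site
          - uses: actions/deploy-pages@v4
    ```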

As the author of a content management system I made with the idea of democratizing internet content creation, I've had a lot of the same thoughts the author brings up here. I've always thought that even learning Markdown is a bridge too far when it comes to empowering non-technical users, however. In my experience it's best just to supply tooling similar to Word, with buttons for things like lists and bolding. Using Markdown as the underlying format is something I will agree with, though.

Another thought I had is that local AI could most definitely play a part in helping non-technical users create the kind of content they want. If your CMS gives you a GPT-like chat window that allows a non-technical user to restyle the page as they like, or do things like make mass edits - then I think that is something that could help some of the issues mentioned here.

  • Yeah, for that a git-based CMS like Sveltia is really nice.

    And for people that actually want to learn a bit of HTML, CSS and JavaScript, Mastro JS is as simple a static site generator as I could make it.

  • It's definitely an approach. I do think, though, that in a true democratization of the internet, teaching people some tech is inevitable. We can't have equal access if we keep the classes of user and maker completely distinct.

I enjoyed the article, but I’m skeptical of the “democratize via hardware + networking” path. Most people won’t run a Pi, manage updates/backups, or debug home networking, and that’s fine (as you note).

But I do think we’re reaching a turning point on the software side. The barrier to building custom, personalized apps is trending toward 0. I’m not naive enough to think every grandma will suddenly start asking ChatGPT to “build me an app to do XYZ,” but with the right UX it can be implicit. Imagine you tell an assistant: “My doctor says my blood sugar is high. Research tips to reduce it.” -> it not only replies with tips, it also proactively builds a custom app (that you own and control) for tracking your blood sugar (measurements, meals, reminders, charts, etc.). You can edit it by describing changes (“add a weekly trend graph,” “don’t nag me after 8pm,” etc.).

This doesn’t fully solve your Big Co control issue (they own the flagship models today), but open-weight + local options keep improving. I'm hopeful we have a chance to tip the scales back toward co-owner and participant.

  • > You can edit it by describing changes

    Even this is hard. Most people don't know what they want, and/or they don't know how to describe it/imagine it. They don't even know what a trend graph is.

    They just want someone else to do the mental effort of creating a nice product. Hence iOS > android for most people. They don't want to customise basically anything other than colours.

    That's why I predict Lovable/Replit etc. will not go mainstream, and why ChatGPT will mainly just offer you its own UIs. Artifacts weren't a big hit.

    • If you've never worked in a call center, or in a technical support role, it can be hard to understand just how inarticulate people are on average. Even programmers.

      ...And how much brainpower goes into understanding what people like this are getting at when they speak about things. There's a lot of context and human element to this; I'm skeptical AI will be any good at it in the near future.

  • This post resonated with me. I have tried self-hosting multiple times over the years and always gave up because it was so hard to manage, and there was always an online service that was good enough.

    This weekend I vibe coded (don't shoot me) a homelab platform that hosts a bunch of useful services on a Mac Mini and lets me deploy my own apps on top of it. Using Tailscale I can access the apps from my phone. I have multiple users with their own SSO to control access. I even have a Pi as part of the network that hosts public-facing content. All done with Claude Code and OpenClaw (as a kind of devops tool)... hardly any code written by me. It's been a seriously fun experiment that I will try to progress somehow... if only because I love the dream of "digital sovereignty", even if the reality is it's unlikely to happen again. It got me thinking, though: if I could get inference hardware and a good enough open LLM to work with my setup, it might just be possible. The OP advocates a form of basic computing that is understandable, but when we are able to host our own LLMs we could end up in a very different but more capable paradigm.

    The homelab repo, for anyone who has an interest: https://github.com/briancunningham6/homelab

  • Truly democratizing the web requires that the "compute server" become a typical home appliance, no more difficult to use than an oven or a furnace, including widespread access to vocational technicians who come to your house and fix it for you.

  • I mean -- my (completely non-technical) mother, after a few hours of my guidance, has started vibe-coding apps and websites for her local community organizations. And, like -- it works.

The only way we own a web of our own is to develop much more of a culture of leaving smallish machines online all the time. Imagine something like Tor or BitTorrent, but everyone has a very simple way of running their own node for content hosting.

That always-on device? To get critical mass beyond just the nerds, you'd need it to ship with devices that are already always-on, like routers/gateways and smart TVs. Then you're back to being at the mercy of centralized companies, who also don't love patching their security vulnerabilities.

  • This is very right. There are two obstacles.

    (1) Security. An always-on, externally accessible device will always be a target for break-ins. You want the device to be bulletproof, with defense in depth, so that breaking into one service does not affect anything else. Something like Proxmox that works on low-end hardware and is as easy to administer as a mobile phone would do; we are still far from that. A very limited thing like a static site could be made both easy and bulletproof, though.

    (2) Connectivity providers need to allow it. Most home routers don't get a static IP, or even a globally routable IPv4 at all, or even a stable IPv6. This complicates the DNS setup, and without DNS such resources are basically invisible.

    From the pure resilience POV, it seems more important to keep control of your domain, and have an automated way to deploy your site / app on whatever new host, which is regularly tested. Then use free or cheap DNS and VM hosting of convenience. It takes some technical chops, but can likely be simplified and made relatively error-proof with a concerted effort.

    • This is being solved with https://geogram.radio

      Every phone/device is its own server; they connect with a WebSocket to the preferred station, which is typically a server online acting as a bridge.

      There is no need to be always connected to a server; you can also connect locally over WiFi, BLE or even USB-C cables (discovery is automatic).

      From there, there are internal apps for sharing static websites, chat, blogs, files and so forth.

    • > Connectivity providers should allow that. Most home routers don't get a static IP, or even a globally routable IPv4 at all. Or even a stable IPv6.

      At least we still have DDNS which solves the static IP problem. I've been using it for at least 10-15 years and my home network has always been resolvable over DNS. I guess I'm lucky that I've always had an ISP that handed out publicly routable IPv4 addresses. I think if I joined an ISP where I got some internal node on the ISP's 10.x.x.x network, I'd immediately cancel my service.
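The core of a DDNS client is tiny. Here's a rough Python sketch of the check-before-update logic (the hostname and the ipify lookup service are just illustrative; every DDNS provider has its own authenticated update endpoint, which is left out here):

```python
import socket
import urllib.request


def current_public_ip() -> str:
    """Ask a plain-text "what is my IP" service for our public address."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()


def update_needed(hostname: str, public_ip: str) -> bool:
    """True when the DDNS record no longer points at our current public IP."""
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        return True  # record missing entirely: definitely push an update
    return resolved != public_ip


if __name__ == "__main__":
    # A real client loops on this and only hits the provider's update URL
    # when something actually changed, to avoid being rate-limited.
    if update_needed("home.example.org", current_public_ip()):
        print("record is stale, would push an update now")
```

Routers and NAS boxes typically ship exactly this loop built in, which is why DDNS has survived so long despite dynamic IPs.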


    • Both of those are solved by having a tunnel and a cache hosted in the cloud. Something like Tailscale or Cloudflare provides this pretty much out of the box, but WireGuard + nginx on a cheap VPS would accomplish much the same if you are serious about avoiding the big guys.


  • if only we all had a little device that was always on and connected….

    • If I'm reading the implication right, you're having a pretty terrible idea. Glossing over what running a server would do to your battery, it would never work because of the routing issues you'll run into.

      With IPv6 it would theoretically be possible, but currently, with IPv4 and NATs everywhere, your website would almost never be reachable, even with fancy workarounds like DynDNS.


It's not about ease of publishing. The issue is what people get in return for publishing. Until you can design a platform that gives top creators as much money+attention as commercial platforms, you'll see a drain of top creators and their viewers to commercial platforms.

  • 100%. You don't even need to give people money. It's about attention and feedback.

    People post photos on Instagram and status updates on Facebook because their friends will see it there and give it a thumbs up.

    A couple of decades ago, I spent a lot of time laboriously building a website from scratch for my photography. It was objectively a really nice site. I had my own domain, hosted it on a VPS, and put a ton of work into the layout and design.

    But none of my friends ever thought to go there. I could see from my web stats that every now and then a random stranger would find the site... but they had no easy way of connecting with me and acknowledging that they saw it. If they put a lot of effort in, they could find my email address and email me, but that's a hell of a lot harder than clicking a little thumbs-up button next to a Facebook post or filling in the comment box.

    Uploading photos to my site was about as rewarding as printing them out and throwing them in the trash. I thought about adding comments to my site, but that opens the whole can of worms around user-generated content, abuse, moderation, etc.

    Eventually, I moved to Flickr, which at the time was an actual community that gave me that connection. Then Flickr fizzled out. Now, on the rare times I bother to process a photo... I just upload it to Facebook because that's where (a dwindling subset of) my friends are.

    It's not about the content. It's about the human connection. A CMS won't fix that.

    • >It's about attention and feedback.

      Feedback maybe, but blogging didn't start for attention. That's something that got bolted on by a nasty virus we as humans tend to be carriers of. I don't think feedback was even an inspiration for the initial bloggers.


The irony of this being fully hosted on GitHub should not be lost. A toaster is sufficient to host a mostly static site, a VPS would be far more than sufficient.

  • GitHub is free, a VPS isn't.

    • Owning things isn't free (and a VPS isn't owning things, either)

      I absolutely agree with the concept, but people have to be ready to do their own work rather than delegating it to other parties. Consolidation has happened because these massive conglomerates absorb operational complexity on the cheap, and that's attractive. Moving away from them means we take on the responsibility of doing it ourselves.

    • You know what they say about the kind of services FAANG gives away for free...

      And yes, I get the practicality of it. However, when people are actually doing shit like this[1] in the real world, writers of manifestos might consider practicing what they preach a tad more.

      [1]: https://solar.lowtechmagazine.com

  • You think you "own" a VPS?

    • Well, you pay for it at least, and hence enter into a contract with the service provider. A free GitHub account is a come-on by Microsoft to enmesh you further in the world of hosted services - the precise thing this manifesto is complaining about


I kind of resonate with a lot of things in the article. My own personal view is that we should make hosting stuff vastly simpler; that's one of the goals of my project, or at least my attempt at it (self-promo):

https://github.com/blue-monads/potatoverse

  • Potatoverse is a great name :)) BTW do you remember Sandstorm.io?

    • Thanks cap'n-py. Yeah, I love Sandstorm. My goal is to be more portable, lighter, and a 'download binary and run' kind of tool. There are also other attempts around what I call the 'packaging with Docker' approach (Coolify, etc.), which are more attempts at packaging existing apps. But my approach—the platform—gives a bunch of stuff you can use to make apps faster, but you have to bend to its idiosyncrasies. In turn, you do not need a beefy home lab to run it (not everyone is a tinkerer). It's more focused, so it will be easier for the end user running it than for the developer.

I think the main issue with federated apps is identity and moderation. Without identity verification it is hard to moderate, so you end up with closed systems where some big co does the moderation at an acceptable level.

  • This is only half a thought.

    The current wave of AI agents is diminishing the value of identity as a DDOS or content-moderation signal. The formula until now included bot = bad, but unless your service wants to exclude everyone using OpenClaw and friends, that's no longer a valid heuristic.

    If identity is no longer a strong signal, then the internet must move away from CAPTCHAs and logins and reputation, and focus more on the proposed content or action instead. Which might not be so bad. After all, if I read a thought-provoking, original, enriching comment on HN, do I really care if it was actually written by a dog?

    We might finally be getting close to https://xkcd.com/810/.

    One more half thought: what if the solution to the Sybil problem is deciding that it's not a problem? Go ahead and spin up your bot network, join the party. If we can design systems that assign zero value to uniqueness and require originality or creativity for a contribution to matter, then successful Sybil "attacks" are no longer attacks, but free work donated by the attacker.

    • > if I read a thought-provoking, original, enriching comment on HN, do I really care if it was actually written by a dog?

      I would rather just read the thought as it was originally expressed by a human somewhere in the AI's training data, rather than a version of it that's been laundered through AI and deployed according to the separate, hidden intent of the AI's operator.

Unfortunately the transparency of the IP stack means that unless you want the whole world to know where you live via one DNS query, you need a service to proxy traffic back to yourself. And if you're paying for remote compute anyway, you could probably just host your stuff there: any machine that can proxy traffic back to you is just as capable of hosting your static content itself.

  • It only gives a pretty rough location estimate, not a street address. I don't think many self-hosters have run into issues with this.

Publishing markup content is not the problem the world needs solved. In fact, there is no single problem with the web that applies to all of its users. Most are just consumers of content, and mostly non-text content.

I guess the author first needs to get some stats on content types, use cases, money flows, controls etc., and then define the problem that applies to most users of the web.

Keep in mind that a system that evolves through feedback loops, shaped by outside forces, usually doesn't have a major problem: its evolution has ensured the fit between itself and its context.

You may call the forces that shape that evolution evil. But the forces are part of the context you need to live with; they are also a product of that evolution.

This guy has been around long enough to know about NNTP, which is the original distributed people-focused web, but talks about how HTML is some kind of barrier to entry.

HTTP requires always-on + always-discoverable infrastructure

It's all over the place.

This is all fine and dandy for websites but what we’ve really been locked out of is email.

You can’t run your own email server: all the large email providers will consider your self-hosted emails spam by default. It’s understandable why they took this stance (due to actual spam), but it is also awfully convenient that it increases their market power.

We are now at the whim of large corps even if we get a custom domain with them.

I am in the process of co-founding a new protocol which creates a decentralized root of trust using normal plain-text names (i.e. `foo.bar`). One of the goals I hope to achieve is allowing domain-style lookups of private websites hosted on P2P networks. It's lofty, but the language used by the OP is _very_ close to why I think it's necessary.

  • I've been interested in doing something similar in the past, but I could really never solve issues like domain squatting and stopping individuals from claiming every name possible. Do you have a place where you keep these plans or have discussions around it? Or even just a place where I could get updates if anything does come of it?

    • > I've been interested in doing something similar in the past, but I could really never solve issues like domain squatting and stopping individuals from claiming every name possible

      I think that's just a property of any naming system. Without something like a centralized threat of force that can know every person participating in the system, there really is no recourse. The approach we are taking is making it difficult to create a speculative market around names, which seems to be the driving force behind squatters.

      Happy to discuss it in more detail: hackernews@sepositus.com

      (Note: that's an alias that goes to my email address which I avoid putting in public places for obvious reasons).

  • Have you considered improving Yggdrasil or something similar?

    • Yes, the protocol is going to be open, and the plan is to submit PRs to various projects when we're at that point. It's not really an "either or" with Yggdrasil but more like a "both and."

It's easier than ever to build p2p software. E.g. You can use Pkarr as a distributed identity system, and Iroh for reliable p2p connectivity.

There's no point in messing with custom hardware, etc. We could host a bunch of redundant p2p access points for everyone, and use portable p2p software for everything.

E.g. I'm building a P2P/F2F social media protocol that is very close to a syndication platform: https://app.radicle.xyz/nodes/radicle.dpc.pw/rad:zzK566qFsZn... . I'm not saying that it's exactly the same thing as the author is looking for, but the technical bits and even the functionality are very close.

I agree with the point that big companies have persuaded people that only they can offer ease of publishing content. Most of my friends publish on Facebook, X, Instagram etc.

I have tried to get them to publish markdown sites using GitHub Pages, but the pain of having to git commit and do it via the desktop was the blocker.

So I recently made them a mobile app called JekyllPress [0] with which they can publish their posts, similar to the WordPress mobile app. And now a bunch of them regularly publish on GitHub Pages. I think with more tools to simplify the publishing process, more people will start using GitHub Pages (my app still requires some painful onboarding: creating a repo, enabling GitHub Pages and getting a PAT; no OAuth as I don't have any server).

[0] https://www.gapp.in/projects/jekyllpress/
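For the curious, the plumbing an app like this needs is fairly small. Here's a hedged Python sketch of publishing one markdown post through GitHub's contents API with a PAT (the owner, repo, path and token are made up, and I'm not claiming this is how JekyllPress itself works):

```python
import base64
import json
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/contents/{path}"


def build_publish_request(owner: str, repo: str, path: str,
                          markdown: str, token: str) -> urllib.request.Request:
    """Build the PUT request that commits a post into a GitHub Pages repo.

    The contents API wants the file body base64-encoded inside a JSON
    payload; GitHub Pages then rebuilds the Jekyll site on commit.
    """
    body = json.dumps({
        "message": f"Publish {path}",
        "content": base64.b64encode(markdown.encode()).decode(),
    }).encode()
    return urllib.request.Request(
        API.format(owner=owner, repo=repo, path=path),
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )


req = build_publish_request("alice", "alice.github.io",
                            "_posts/2024-01-01-hello.md",
                            "# Hello\n\nFirst post!", "ghp_example")
# urllib.request.urlopen(req) would perform the actual commit.
```

One request per post, no server of your own needed, which is exactly why a PAT-only mobile client is feasible.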

  • Isn’t publishing on GitHub Pages still posting to a corporate, centrally owned entity, and not a solution to the problem described?

    • But it is portable. It is essentially markdown files: you can download your repo, compile the Jekyll site to static pages and publish them anywhere.

      When you publish to Facebook, WordPress etc. you can't easily get your stuff out. You will have to process it even if they let you download your content as a zip folder: the images will be broken, links between pages won't work, etc.


We got here iteratively, not all at once. So the path back is iterative too. I shouldn't even say "back": we're not going back, we have to go in a new direction. And again, it's evolutionary. Ultimately a lot of these big systems and big tech companies aren't going anywhere, and they will be integral to all infrastructure for the foreseeable future, whether technical, financial or related to public services. But as individuals we can slowly shift some of our efforts elsewhere, in ways that might matter.

Here's my small contribution to that. https://github.com/micro/mu - an app platform without ads, algorithms or tracking.

I really like this model for individual services.

The challenge I've always felt is shared services: if I'm running infra myself, I can depend on it, but if someone else is running it, I'm never really sure I can, which makes external services really hard to rely on and invest in.

Maybe you can get further than expected with individual services? But shared services at some point seem really useful.

I think web2 solved that in an unfortunate way, where you know the corporations operating the services / networks are aligned in some ways but not in others.

But it would be great to have shared services that do come with better guarantees. Disclaimer: we're working on something in that direction, but I'm really curious what others have seen or are thinking in this area.

What the article is missing: this is directly related to the complexity of file formats and protocols.

There are 2 webs:

- the web of sites: serving noscript/basic (X)HTML, namely basic HTML forms, which can be augmented with <video> and <audio> nowadays; it serves web _pages_. It was made super modular: you have browsers that don't handle CSS, and that is fine for _pages_ with a semantic 2D table (implicit navigation even for braille browsers). Web engines here are more than reasonable to write an alternative of, even with a plain CSS renderer (look at the NetSurf browser), text-only (lynx/edbrowse/etc.) or graphical (links2/elinks/etc.). In the end, HTML is not perfect (like CSS), a bit of a mess actually; that's why they tried an XML representation, a failure because it was literally sabotaged by "Big Co", or in the web realm, the WHATWG cartel: I remember their web engines made even a simple page a pain to develop in XHTML... but not in HTML... curiously. That said, mistakes were also made on the W3C side: the "semantic web", a real abomination of delirious complexity, which I think is what actually made people jump on the WHATWG train. What a disaster. Now HTML is back with its weird (shabby?) parsing, but this has been kind of cleaned up and much more rigorously defined.

- the web of apps: the abomination. Basically, only gigantic and insanely complex software can make a web app work (including its SDK), i.e. only the web engines from Big Co, here the WHATWG cartel. It is getting worse: web apps more and more require a single web engine to "properly work" (often Google's Blink), and suspicions are very strong that this is done _on purpose_ (I remember the day gmail.com disabled its noscript/basic (X)HTML web interface... then POP3 not long ago... I guess you all see where this is going). In this realm, there is near-zero possibility of creating a _real-life_ alternative without a bunch of developers laser-focused on it for a billion years. I have been wishing for an alternative web engine I could build from source with a simple SDK; it does not exist, and even the few attempts here and there are _not_ doing that: they lean towards computer languages with super-complex syntax (C++ and similar), hence a failure right from the start.

The "web3"? A lean JavaScript engine (for instance QuickJS, but there are others) with a small set of basic OS abstraction APIs, plus a few "accelerated" specialized APIs (vector drawing, a pixel blitter, video decoding, glyph drawing, etc.). First problem: nobody will agree on those interfaces (they would have to be as simple as possible), and the WHATWG cartel will make sure they are useless...

Or an even simpler "HTML" (probably the same with CSS)? "Markdown", like the article suggests? Would it have enough expressive power? Again, nobody will agree on the format, and everyone will want to make their own.

A good middle ground is to work with a subset of HTML: rough around the edges, but it would do a good enough job for nearly all online services out there, whatever the platform. Nearly 100% of online services were running on that a few years back, and with <video> and <audio> it could be even closer to 100% nowadays.

And there is the danger of "mobile app only": there, the only way out is to regulate and enforce the availability of a small set of simple, stable-over-time protocols and file formats, to allow reasonable efforts at developing an "app" for an alternative platform (ELF/Linux, *BSD, fooOS, etc.).

I think this mostly misses the biggest reason why writers would choose big tech platforms or other big platforms: discovery and aggregation. If you want to speak to be heard and not just for its own sake, then you want to go where the people are hanging out and where they could actually find your content.

This is like talking about how book authors don't need Amazon when you have a printer and glue at home.

The cynic in me wants to say that most of the web these days is pushing H.264 frames from a CDN to proprietary phone apps and the rest is pushing Widevine video from the same CDN to proprietary browsers and we'll never cooperatively own any of that, even if we wanted to.

The idealist in me says we should still build a simple to use publishing and discovery system for hypertext that can be self-hosted and self-networked for the day the next generations realize they need it (authoritarian control of the Internet, collapse of social media, infrastructure instability, climate apocalypse, whatever). I suppose my idealism is still pretty pessimistic, but then it is Monday.

We share many of the same ideas, I am working on a solution, a new type of internet with a new type of browser, and it is going quite well. Something like this can really make a difference I think, it’s definitely possible.

https://github.com/mjdave/katipo

> Tiny computers are like tiny homes

They totally suck like tiny homes? No, actually, they are better than tiny homes. Browsers are the #1 reason you want a computer better than a Pi 500. Wanting to play modern games is #2.

Mobile users hate when site publishers forget this one simple ~trick~ meta tag.

Can someone explain to me what the difference is between a union in which everyone is a member and a government?

I welcome everyone who wants to imagine a better web: one with less control by greedy mega-corporations and data-sniffing state actors. I am not sure how such a web should look, but I am pretty certain we will need it sooner or later. Otherwise we may end up with "hey, we were the 1990s generation, we knew a free web - now it has been replaced by walled corporate gardens controlled by a few super-rich".

Co-ownership of the hardware is a social problem, not a technical one. Think of questions of trust, responsibility, who has power, who contributes and how, how decisions are made, etc.

Using Raspberry Pi, $20 network switches and cheap wire won't be useful for building a new Internet.

Also, you don't even have the right to lay wires wherever you see fit.

It’s not really covered, but p2p technology combined with every phone in the world (and a little wishful thinking) could make for some neat applications.

I mean, you do have a point, and I quite agree with it. The only way of monetizing your writing is to use Substack or Medium, or whatever.

Yet your approach falls appallingly short on the other side of the spectrum. I've been in IT for the past 25 years, and I have yet to meet a non-IT person who knows what a dedicated IP is. And if you are not publishing it on the internet, then what's the point?

I've seen plenty of companies where the owner just had a read-only shared drive where people could rummage through a pack of PDFs, and everyone was fine with that.

You have to understand, manage and work with the complexities of the tools, and offer tools that are adequate for the task. It's alright to offer what you describe to an engineer who has a spare Pi and a couple of days to kill, but it's quite useless for anyone else to adopt.

One note: even when we all wrote HTML, most people still used a bigco to host it. Maybe if you go back far enough you can say you don't think regional ISPs were "big", but those companies are all gone now.

  • > Even when we all wrote html most people still used a bigco to host it.

    If you consider Geocities, Tripod or Angelfire (or your local ISP) "bigcos" I guess. It is true that most people didn't host from their own servers and at one point all of those free hosts forced ad banners on your pages but it still doesn't seem like the same thing.

> What if I do not wish to be tenant and product? What can I do to change the equation?

Host your own website (on a free server for as long as you can), print out some flyers, paste them around town or pass them around (to bypass the Ad Gods), and ask for donations to pay the growing costs of bandwidth etc. as you get more users?

Ultimately it comes to down to convincing people, the ickiest task on earth :<

Very much enjoyed this. I am always shocked that my colleagues on the humanistic/writing-studies side don't have a larger contingent actively contributing to web and publication technologies/specs, ceding so much of that space to folks with design backgrounds; they still don't really invest enough time in understanding networked writing.

I really do agree with the sentiment of this guy. I am this guy mentally, but what he is saying is painfully out of touch and completely ignores how people actually use the web today. My bro, the web isn't controlled by corps because it's too hard to host a web page. Corps have created these extremely far-reaching and complex applications, and people prefer to use those over browsing through statically generated pages. The average man doesn't want to come home and update his page; he just wants to open the app on his phone, scroll for a bit and close it.

The problem is in the environment but also in user behavior. Unless you can provide a convincing argument to change both, by presenting an actual improvement, it's farting in the wind.

I personally think the trend we witnessed with clawdbot, where people ran to buy Mac Minis or found other ways of self-hosting AI agents, is going to be a huge wind in the sails for hosting things at home generally.

> Simple to use software that empowers us to both read and write hypertext4 and syndicated content

Simple to use software... this would be grand!

> Raspberry Pi OS (a Linux distribution based on Debian GNU Linux)

Is this simple? I would contend that it is not. Why do I tell people "buy Apple products" as a matter of course? Because they have decent security, great ease of use, and support is an Apple Store away.

They still manage to screw things up.

Look at the emergence of Docker as an install method for software on Linux. We sing the praises of this as a means of software distribution and installation... and yet it's functionally unusable by normal (read: non-technical) people.

Usability needs to make a comeback.

  • > great ease of use

    Apple stuff is a nightmare of dark patterns and user-hostile idiocy.

    Maybe it's easy if you have Stockholm syndrome and have internalized all the arcane gestures, icons and bug avoidance patterns.

    The average normie has no clue, though. (This is borne out by experience; I have like 8 iPhones in the immediate family, among children and seniors.)

I wonder how hard it would be to build a distributed search algorithm into a web server? I guess it would be open to bad actors, but maybe there would be a way to stop that: distributed moderation?

I made a content management system (CMS) for some friends years ago which was very easy to use. Its main paradigm was: 1 folder = 1 page. This was very easy for anyone to manage. Files in the folder were rendered in sort order, so you could have an image followed by some markdown, etc. It was so easy to use that I never got anyone (mostly non-technical artists) asking how to change something on their site. Most people understand how to organise their content as files and folders. It is the easiest UI I've ever seen for a CMS, i.e. no UI :P I was going to expand it to read files from a Dropbox folder so they didn't even need FTP, but life... It was in PHP, which at the time all ISPs supported. Setup was: copy the code, change files/folders in the `/content` dir, and you're away.
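The 1 folder = 1 page idea is small enough to sketch in a few lines. Here's a rough Python approximation (file handling is simplified, and markdown is passed through untouched rather than rendered, unlike the original PHP system):

```python
from pathlib import Path


def render_page(page_dir: Path) -> str:
    """Render one page: every file in the folder, in sort order."""
    parts = []
    for f in sorted(page_dir.iterdir()):
        if f.suffix in (".png", ".jpg", ".gif"):
            # Images become <img> tags pointing at the file itself.
            parts.append(f'<img src="{f.name}">')
        elif f.suffix in (".md", ".txt", ".html"):
            # Text files are dropped in as-is (a real system would
            # render the markdown here).
            parts.append(f"<div>{f.read_text()}</div>")
    return "\n".join(parts)


def render_site(content: Path) -> dict[str, str]:
    """1 folder = 1 page: map each subfolder of /content to its HTML."""
    return {d.name: render_page(d) for d in sorted(content.iterdir())
            if d.is_dir()}
```

The whole "UI" is the filesystem: renaming a file reorders the page, adding a folder adds a page.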

I am on board with basically everything this article is arguing, but I think it covers the easy part (that "people run their own servers" is the only solution to the problems caused by relying on giant ad corps to provide the server half of client/server software) and skips the hard part, which is the software they run.

Like, suppose some really good personal server software existed. Suppose there were an OS-plus-app-repository platform, akin to Linux plus Snapcraft, but aimed solely at people who want to host a blog or email server despite knowing nothing and being willing to learn nothing. It installs onto a Raspberry Pi as easily as Windows. It figures out how to NAT out of your cable modem for you. It does all the disk partitioning and apt-gets and chmods; you just open the companion app on your phone and hit the WordPress button and presto, you've got a blog. You hit the Minecraft button and you've got your own Minecraft server, without having to learn what "-Xms2G -Xmx6G" means. It updates itself automatically, runs server components in sandboxes so they can't compromise each other, and it's crack-proof enough that you can store your bitcoins on it. Etc, etc.

If that existed, we wouldn't have to write essays about freedom and so forth to get people to buy it; they'd buy it just because it's there. I mean, look at those digital picture frames - they cost more than a Raspberry Pi and are way less useful, and half the people I know got or gave them for Christmas. Why? Because they're neat, they cost less than a hundred bucks, and they require no knowledge or effort. If a server that can host your blog were that easy, it'd get adopted too, and we'd be on a path to some kind of distributed social media FB replacement. Imagine the software you could write if you were allowed to assume that every user had a server to host it on!

The problem is, that software doesn't exist and it's not clear how it would ever get made. It'd be a huge effort (possibly "Google building Android" sized) and the extant open source efforts along these lines lack traction, mostly due to the chicken-and-egg problem of any new platform that needs apps to be useful. And until it exists, any kind of neighborhood-internet-collective-power-to-the-people dream has to necessarily begin with hoping that millions of people will spontaneously decide to spend their precious free time doing systems administration.

Not to shit on a fine essay that I mostly agree with. It just seems like, without figuring out the software, this is daydreaming.

I agree with owning the network devices, and lack of control here is a problem that still has solutions.

And self-hosting personal services makes sense and we're able to do that.

BUT, we don't own the connections. There's always going to be shared infrastructure for connecting these devices worldwide, and without an ideal state of Communism or utopian capitalism we're not going to own them or want to be responsible for them. Any kind of service that depends on a central database is not going to be communally owned.

Ownership is an economic problem, the technical aspect is merely interesting. Bitcoin might be a great example of this.

Yes, but: 1) Cooperatives inevitably transition into centers of power. If a cooperative becomes powerful enough, it becomes another mini-government that "unpublishes" you if you say something they don't like. 2) Your freedom ends at the ISP boundary; you don't own the cables. If the ISP doesn't like what you publish, they just disrupt, throttle or disconnect your upload link. They probably have the legal right already. 3) Even if you own the wires (or you invent a LoRa mesh stable enough to provide an alternative Internet), the government will disrupt you (or worse) if you say something they don't like. It's very likely they will disrupt you preventively because you could potentially say something they don't like. This is not right wing vs left wing: look at the UK, for example (left-wing government, abysmal freedom of speech on the Internet).

The issue with publishing content has always been censorship. Anyone in power has incentives to apply as much censorship as they can. It's never been a technical problem.

This is silly nonsense.

>I publish this site via GitHub Pages

Okay, and that depends on an entire economy and infrastructure of privately owned switches and other network equipment, fiber optics, etc. -- not to mention that if GitHub, as a private company, did not have a profit motive, they wouldn't even bother to offer the service you're using.

Sure, yes, rebuild the world but if you want it to be free like open source, you'll also need to make it free like beer -- and that means you'll need to work for free, too.

I support the aim. I acknowledge the problems. I'm just so frustrated by these silly oversimplifications of how to solve it.

  • The spirit and goal are respectable. How we get there - convincing people to join a new web and making it easy and attractive enough to replace 30-year-old infra - is something else.

The real barrier was never technical. It was convenience and discovery. Running a Pi at home is trivial for anyone on HN, but the moment you want people to actually find your stuff, you need DNS, a stable IP, and some way to not get buried under the noise.

Tailscale and similar overlay networks have made the "accessible from anywhere" part way easier than it used to be. The missing piece is still discovery. RSS was the closest we got to decentralized discovery, and we collectively let it rot. Maybe it's time to bring it back properly.
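Part of RSS's appeal for decentralized discovery is how little machinery it needs: a feed is just an XML document you poll, with no ranking layer in between. As a rough illustration (the function name and sample feed are my own, not from any particular reader), extracting items from an RSS 2.0 document takes only the standard library:

```python
import xml.etree.ElementTree as ET

def feed_items(rss_xml):
    """Parse an RSS 2.0 document and return (title, link) pairs.
    A minimal sketch of the subscription side of RSS: you poll feeds
    you chose yourself, and no algorithm sits between you and the author."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

A real reader would fetch each feed URL over HTTP on a timer and deduplicate items it has already seen, but the core data model is no more complicated than this.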

  • > The missing piece is still discovery.

    I think the key issue here is that Attention is a temporal construct, meaning discovery is often tied to "being the first thing that comes to people's minds" which means SEO, reverse engineering the ranking algorithms, and constantly having to manage an "online persona". Note none of those things contribute to the actual work you're doing, just your "marketing department" (and whatever time/financial "budget" you intend to give it).

    MrBeast figured out the YouTube algorithm - post early and often. Is that how we exist on the modern Internet, when every website/thumbnail is engineered by a team to maximize clickthrough rates? I agree RSS is useful, but it faces the same scalability issues if everyone starts filling up your RSS feeds. Given the limited amount of time you can devote to a particular task, we'll return to the era of A/B-testing headlines.

  • What does “bringing (RSS) back properly” entail in your eyes?

    It’s still alive. Many sites still use it. Many people still subscribe to those sites. RSS reader apps are still being created to this day.

  • > The missing piece is still discovery.

    If P2P file-sharing networks can do distributed search, surely it wouldn't be so hard for self-hosting? Could we make a web server plugin to do it?

    Would be nice to have a search engine/network that de-prioritizes pages with ads on them. Might be a Google killer ;)
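    The ad-demoting part of that idea is easy to sketch locally, even though real distributed search is the hard half. Here is a toy ranking function (entirely hypothetical: the `rank` name, the ad-marker strings, and the scoring are all my own assumptions, not any existing engine's behavior) that scores pages by query-term hits and demotes anything carrying common ad markup:

    ```python
    def rank(pages, query):
        """Toy search ranking: count query-term hits per page, then heavily
        demote pages containing ad markup. Illustrative only -- real ad
        detection and real distributed search are far harder than this."""
        AD_MARKERS = ("adsbygoogle", "doubleclick.net")  # assumed heuristic
        results = []
        for url, html in pages.items():
            score = sum(html.lower().count(term.lower()) for term in query.split())
            if any(marker in html for marker in AD_MARKERS):
                score -= 100  # push ad-laden pages to the bottom
            results.append((score, url))
        return [url for score, url in sorted(results, reverse=True) if score > 0]
    ```

    The distributed half - getting many self-hosted nodes to share such an index - is where the P2P comparison comes in, and where the actual engineering effort would go.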