The Man Who Keeps Predicting the Web's Death


The web of every age has certainly died, only to be replaced by a new form of "web". We have had Web 1.0, Web 2.0, even Web 3.0, and what comes next: AI.

The web that people knew and loved in each era had already died by the time the next arrived. The new web is fundamentally different in both underlying technology and business model.

I'm just surprised there is still only one web after all this time.

Why isn't there a multiverse of "webs", each with different cultures or rules for whatever their niche is?

There could be a web that's only personal websites, using different protocols besides TCP, HTML markup, and even DNS.

Or have we forgotten how to even build a web from scratch? Didn't CERN do this in a lab like 40 years ago?

  • What's wrong with TCP, HTML and DNS? Why spend time building an alternative? Why use the inferior solution someone built as a hobby project?

    Honestly there kinda is a new web, they call it web 3 and it's only crypto scams. I'll stick to TCP and HTML for now, I think.

    • As somebody working in this "future-web" space, I see HUGE issues with the legacy web stack:

      - It requires a server to publish, which is expensive and difficult for regular users with just a laptop or a phone. This can be solved with a mix of p2p and federation

      - There is no decentralized trust system, only DNS+HTTPS, which requires centralized registration (TLDs). A domain may be cost-prohibitive for somebody who just wants to write comments and a few documents on the web. This can be solved by forming a social graph of cryptographic identity validations (aka the "web of trust")

      - There is no versioning system. This can be solved by making chains of immutable signed content, like we do with git (see the sketch after this list).

      - There is no archival system that allows you to "back up" the content of a website in a trustless way. Look at IPFS and BitTorrent for the solution there.
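
      A minimal sketch of the versioning idea, assuming nothing beyond this comment: a content-addressed chain of signed entries, git-style. The HMAC below is a stand-in for a real signature scheme (e.g. Ed25519), and every name is made up:

        import hashlib, hmac, json, time

        SECRET = b"demo-key"  # stand-in for a private signing key (illustrative only)

        def make_entry(content: str, prev_hash: str) -> dict:
            body = {"content": content, "prev": prev_hash, "ts": int(time.time())}
            payload = json.dumps(body, sort_keys=True).encode()
            return {
                **body,
                "hash": hashlib.sha256(payload).hexdigest(),  # content address, like a git commit id
                "sig": hmac.new(SECRET, payload, "sha256").hexdigest(),
            }

        genesis = make_entry("v1 of my page", prev_hash="")
        update = make_entry("v2 of my page", prev_hash=genesis["hash"])

        # Each entry commits to its content and to the previous entry's hash,
        # so anyone holding the chain can verify ordering and integrity.
        assert update["prev"] == genesis["hash"]

      The archival bullet works the same way: when the hash is the address (as in IPFS and BitTorrent), anyone can verify a mirrored copy without trusting the mirror.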

      I believe these are the main reasons the web has failed as a social publishing system. Aside from companies and technically skilled individuals, everyone publishes on centralized social media platforms. This is a dangerous consolidation of power.

      We hate to admit it, but the open web has taken the "L". The good news: these are solvable problems and I'm not giving up anytime soon!

      > Honestly there kinda is a new web, they call it web 3 and it's only crypto scams.

      To distance ourselves from crypto scams, we strongly avoid the web3 label, despite some similarities.


    • HTTP/HTML are legitimately not good designs for building networked applications, other than those actually intended to be rendered as hypertext by a browser.

      IPv4 is not a good protocol for obvious reasons, and IPv6 isn’t for political/bureaucratic reasons on top of the baseline inertia (try getting a block and using it IRL). IP is used as a kind of proxy for physical identity that is very ill suited for the task (but the best option available to Internet users outside the application layer) and DNS/CA is in practice captured and centralized by people charging Bob’s Restaurant $10 for a name.

      The IANA is captured by both, because renting out IP addresses and domain names is basically its business model, which it franchises out through multiple layers of hierarchy to businesses that turn a profit renting numbers and names to end users and do the dirty work, then contribute back up the governance structure.

      Domain ownership, DNS, and IP block assignment are probably the most legitimate possible applications of NFTs to date. One day LEO satellite "Internet" adoption might be good enough for non-IP global networking, but until then we have even-worse centralized NFTs rented out by a bureaucracy. Works great if you want true participation at the Internet level to be too expensive, time-consuming, and complex for 99% of people. Facebook and Reddit for the plebes to deal with the lack of usability/what they want elsewhere, Cloudflare for us!

    • > What's wrong with TCP, HTML and DNS?

      The Company doesn't own them. The Company doesn't control them. People can use them for things contrary to The Company's interests. The Company must protect itself, its brand, and its Intellectual Property!

      > Why use the inferior solution someone built as a hobby project?

      Hobby project? The Company is not a hobby. The Company is a Major Corporation with Interests, Investments, Shareholders, and Vision. The Company is The Future!

      For "The Company" read "CompuServe" or "The Source" or any of a few other "online services" that existed before the Internet was opened up and the World Wide Web wiped everything clean. They were The Future of the not-so-distant past. As for why they didn't survive, well, Metcalfe's Law is a good first-cut explanation: The value of a network is proportional to the square of the number of users, because that's the number of connections it can have, and value comes from connections, inherently. What good is a network that can't connect you to what you want?

      https://en.wikipedia.org/wiki/Metcalfe%27s_law
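
      A quick illustration of where the square comes from (user counts here are arbitrary): a network of n users supports n*(n-1)/2 pairwise connections, which grows quadratically.

        def connections(n: int) -> int:
            return n * (n - 1) // 2  # pairwise connections among n users

        for users in (10, 100, 1000):
            print(users, connections(users))
        # 10 -> 45, 100 -> 4950, 1000 -> 499500: 100x the users, ~10000x the connections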

  • Most of the "alternative webs" are fairly dark forest by design. Each exists at a different layer of the OSI model. And strangely enough, most that I've experienced have had entirely better cultures, due to their smaller size or even due to the alternative methods of transmission and replication (no one has to listen to or replicate your messages, so if you're an ass, your messages don't propagate).

    Often the problem is that we have higher expectations of our technology. It's no longer okay to send a message in clear text over a network. We expect things to be fast, low-latency, and built on security primitives. These additional elements are hard to implement without infrastructure and hardware that is optimized for the particular task.

  • Interesting, I wonder whether you could create a symbiotic web. One that utilizes existing traffic, but is able to create a different interpretation of it based on a parallel encoding.

    By having an index of the data and then applying masks, you could reshape one payload into another by knowing the delta between what you have and what you want to communicate.

    It could allow for a layered internet, one that leverages the infrastructure and traffic that already exist but allows content to be transferred for free (or for pennies).

    You could then send out multiple different videos using mostly the same traffic, all masking off the deltas from the source to the destination.
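
    One way to read this (my interpretation, not necessarily the parent's) is classic delta encoding with an XOR mask: both ends already hold a shared base payload, and only the sparse difference travels.

      # Toy XOR-mask delta; assumes equal-length payloads for simplicity.
      def delta(base: bytes, target: bytes) -> bytes:
          return bytes(a ^ b for a, b in zip(base, target))

      base = b"shared payload both sides already hold"
      target = b"shared payload both sides already held"
      mask = delta(base, target)  # mostly zero bytes, so it compresses well

      # The receiver reshapes the base into the target by re-applying the mask.
      assert delta(base, mask) == target

    How much this saves in practice depends entirely on how similar the two payloads really are.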

  • >Why isn't there a multiverse of "webs", each with different cultures or rules for whatever their niche is?

    Because you don't understand the problem with the current web.

    1. You find something interesting and put it on a site.

    2. Other people find it interesting too and come to your site.

    3. Those other people have their own interesting things too, so you decide to allow them to put those things on your site.

    4. Your site grows bigger.

    5. Other people see the people on your interesting site as targets to advertise to, and the cycle of spamming and filtering spammers begins.

    6. You give up because it's too much work fighting the spammers.

    7. Only large sites exist.

    ---

    >There could be a web that's only personal websites, using different protocols besides TCP, HTML markup, and even DNS

    No. The web isn't authoritative, so how is your "personal web" going to prevent non-personal sites? How is it going to prevent proxies that take IP traffic and bridge it into your whichama-protocol?

    >even build a web from scratch

    It's both easy to do and useless. Who is going to access it? They don't have your software. And this is what major providers already do with their lock-in, since they have a means of distributing their software.

  • > Why isn't there a multiverse of "webs", each with different cultures or rules for whatever their niche is?

    Those are called "websites"

  • There's still BBS; it's hugely popular in Taiwan.

    I'd argue Discord servers are another Web as well.

  • That web is still there, but Google either doesn't implement or kills off any tech it doesn't like.

    XSLT, XML, Gopher, the JS-free web, etc. are all different parts of the multivaried web.

    As soon as you make a case for more technologies, Google or the React team or someone tied to the Business Web will tell you it's not a good idea, because of security, or lack of support, or some other sidestepping reason.

  • > Didn't CERN do this in a lab like 40 years ago?

    No. CERN (specifically, Tim Berners-Lee) came up with HTTP and HTML, but he built on top of TCP/IP and DNS.

    > using different protocols besides TCP

    QUIC is going in that direction, by running on top of UDP.
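
    For contrast, here is the classic stack in miniature: DNS resolves the name, TCP carries the bytes, and HTTP is just text riding on top. QUIC keeps the same shape but swaps the TCP layer for UDP. A minimal sketch using only the Python standard library:

      import socket

      host = "example.com"
      with socket.create_connection((host, 80)) as s:  # DNS lookup + TCP handshake
          s.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
          response = b""
          while chunk := s.recv(4096):
              response += chunk
      print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"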

    ---

    In any case, without replacing the protocol stack we already have different webs, thanks to the walled-garden nature of modern social networking platforms. Linking to an Instagram reel or a TikTok video is a pain in the neck; if you do not have an account, there's a good chance you won't be able to see the content anyway. X is going in a similar direction. This (as well as their non-textual nature, of course) makes them hard to crawl and index for search engines. Fragmentation and niches inevitably ensue.

  • We're basically about to launch this at my company. It's going to be one of the first platform products we launch, and it will be based on our static site generator called Statue https://github.com/accretional/statue. We styled the default Statue site as a SaaS landing page because that's what we needed when we started working on it, but I really want to explore a more Myspace/blog-oriented UX.

    Actually, making a second web is not so hard; that's arguably what Facebook is. Making a web that is able to occupy the same privileged position on the Internet is what's difficult. DNS, domains, CAs, and IP allocations are all de facto centralized. You could in theory convince a bunch of friends to use different DNS/domains/CAs, but IP touches real infrastructure, so you need it for anything of notable size. But regardless, go ahead. Unless people can make HTTP GET requests to your domain's A/CNAME records and receive HTML, it'll probably be like your web doesn't exist.

    Our network includes identity as a first-class citizen and, when you make internal-to-internal requests, terminates both ends of the connection, so it does do crazy stuff to cache/serve/resolve data, including site contents. It's kind of like a "shadow realm" for the Internet, because it only lives in datacenters and whatever "wormhole" connections you make into that network from outside of it. That does allow us to evade the icy grip of Big Internet Protocol, but not the ghost of internet protocol past and present.

    Actually, the problem with the web is that nobody wants it in its current form, I think. Partially maybe because it got polluted/embrace-extend-extinguished as a consequence of platforms like Reddit, Google Search, and Facebook. But also because the cost and complexity are too high for regular people, or even most technical people, to make full use of it on a personal basis. My hope is that we (or someone) can make it dead simple and cheap to set up, create, and host non-trivial websites used by real humans to do real human things besides marketing and the content that enables marketing.

The Web is dead. Most people access the internet through one of perhaps ten popular applications that do not contain Web content (semantic XML documents hyperlinked together).

  • HTML was an application of SGML, and precedes XML.

    HTML is a markup language, and not all of the elements and features are semantic in nature. The "span" tag literally exists to provide a way of affecting a bit of text where no tag with more semantic meaning applies.

And the fact that someone who has said, year after year, that the web will die now says again that the web will die means that the other people saying the web will die are wrong?

Most content is now on <10 big platforms. Nearly no one has a website anymore just to tell other people some stuff, or to share what they've done for fun. The only reason to have a website today is profit. Active websites fill the first part of an article with bullshit just to get found by Google. Sites that are really like 1999, just-for-fun sites, would never be found via Google anymore. And now AI crawls all of this and steals traffic from these sites without giving credit, until no one besides bots reads them anymore.

The web is not dying; the web is already dead! A commercialized part of the money-making machine, nothing more.

Next step: AI dies, because there is no new data to learn from, because it no longer makes sense to create websites. Welcome to the stone age.

  • > The web is not dying; the web is already dead!

    Says he on a web site.

    • And the fact that this remains a Web site is truly remarkable. The reason it is popular with a niche group of technically-minded users is that Web sites are so hard to come by these days that you need a special place to collect them.


  • It's weird how people keep saying the rise of AI-generated content will "kill AI", as though the companies training models don't have complete archives of all the data they already scraped from the Internet.

    It doesn't take all the text of the public Internet for someone to learn to talk, and these companies are now much more in the data-curation business when it comes to teaching models.

    Scraping is for keeping models up to date on current events (and there are obvious alternative sources for that), or it's done by startups that don't already have such datasets.

    • I can't wait to see how coding performance will start to drop with newer tools and versions, as people no longer discuss them in the same detail and quantity as they used to. People using LLMs will be stuck with pre-2023 tools; using new stuff is already an uphill battle (you have to give the model the correct docs manually).