The idea is appealing, but I feel there is a conflict between:
> I want this to work, furthermore, whether those people are sharing a random thought every day, a blog post every week, or an art project every two years.
> More importantly, every board holds its place, regardless of when it was last updated.
I would not like to stare at the same board for two years between updates, so I would probably end up manually re-organizing my client according to which feeds get updated, or just unsubscribing from the ones where the author seems to have dropped off the planet.
Newspaper classifieds and comic book ads are different each issue and therefore there's some pleasure in scanning through them. Today's algorithmic feeds on social networks may optimize too much for engagement at the expense of quality. (Twitter started putting complete strangers' tweets at the top of my timeline, on topics like "Marvel" that I have no interest in.) But the solution to this is not to avoid algorithmic curation completely.
This is like an RSS client where the last update never expires. I have a few feeds where I manually mark an update as unread so that it keeps popping up.
I love the ideas here, but...
* If posts are limited to 2217 bytes with no external references, then that excludes the richness of images, something I really enjoy about the modern web. Am I misunderstanding something?
* I think there's a pile of room for clients to display things differently from what's imagined here. Ultimately we would end up with clients that implement their own "algorithm", but that's a good thing, because we're in charge of it.
* I always get irrationally stressed about the inefficiency of pinging a server every hour for a resource that only gets modified once a year, one of the use cases noted in the article. That's a lot of wasted 304 Not Modified responses.
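For what it's worth, the hourly polls that last bullet worries about are at least cheap ones: the conditional-GET exchange behind those 304s can be modeled in a few lines. This is a toy sketch of the mechanism, not any real server's behavior:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def conditional_get(board_mtime, if_modified_since=None):
    """Toy model of a server answering a conditional GET for a board.

    Returns (status, headers): 304 with no headers when the client's
    cached copy is still current, 200 with a Last-Modified otherwise.
    """
    if if_modified_since is not None:
        if board_mtime <= parsedate_to_datetime(if_modified_since):
            return 304, {}
    return 200, {"Last-Modified": format_datetime(board_mtime, usegmt=True)}

# A board last touched a year ago, polled hourly: the first fetch is a
# 200, and each of the ~8,760 polls that follow is a 304 -- a few dozen
# header bytes apiece, but still one round-trip per hour for one change.
mtime = datetime(2022, 6, 1, tzinfo=timezone.utc)
status, headers = conditional_get(mtime)
status2, _ = conditional_get(mtime, headers["Last-Modified"])
print(status, status2)  # → 200 304
```

So the waste is mostly in round-trips, not bytes, which is exactly why it feels irrational to be stressed about it and yet hard not to be.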
Full CSS and zero JavaScript is the opposite of what I want.
I love the idea of just seeing everyone's current status, but realistically I want to be able to read people's history as well; otherwise the FOMO will have me checking this thing manually every minute (maybe an effective growth hack, but it sounds like it'd end up even worse than Twitter if it worked at all).
And realistically an RSS reader with a "tile view" achieves everything that this does.
This is super fascinating, as it combines several trends ("brutalist" web design or Web 1.0 nostalgia, federation/decentralization, ephemeral Snapchat-like "Stories" with no history) but feels authentic and compelling.
As ever, the issue will be achieving the critical mass needed for the network effect to kick in. But the system is simple enough to have a chance at succeeding (see Gall's Law).
I don't understand the cryptographic aspect of it. Unless I'm misreading things, which may be the case, it will take several minutes of basically cryptocurrency-style "mining" on current, relatively fast consumer hardware to find a key which matches a message before a "board" can be posted. If the goal is validating that a message hasn't been tampered with, is that serving some goal that something like SHA-512 or even CRC32 wouldn't? (It also means it would basically be impossible to create a Spring '83 message using hardware actually contemporary to spring '83.)
Also, I'm not sure how important having the month and year in the identifier is to the whole thing, but using only two digits for the year has obvious ambiguities.
That the client must "situate each board inside its own Shadow DOM" seems to imply that all clients must effectively be web browsers too. A far cry from the original RFC 865, which could basically be done with netcat.
All in all this seems like a complex way to solve a seemingly easy problem.
The search is to generate a valid signing keypair, where "valid" means the last 28 bits are as specified. Once a poster generates such a key, they can use it for as long as it's valid.
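If that's right, the check itself is trivial and the cost is entirely in the search. Here's a sketch under my assumptions; in particular, I'm guessing the required 28-bit suffix (7 hex characters) is a magic string "83e" plus a two-digit month and two-digit year, which may not match the spec exactly:

```python
import secrets

def key_is_valid(pubkey_hex: str, month: int, year2: int) -> bool:
    """Check a public key's final 7 hex characters (28 bits).

    Assumes the required suffix is "83e" + two-digit month + two-digit
    year -- my reading of the draft; the exact magic string may differ.
    """
    return pubkey_hex.endswith(f"83e{month:02d}{year2:02d}")

def mine_key(month: int, year2: int) -> str:
    """Keep generating candidate keys until one has a valid suffix.

    Random bytes stand in for real Ed25519 keygen so this sketch needs
    no crypto library. Fixing 28 bits means ~2**28 (about 268 million)
    tries on average, which is where the "several minutes" comes from.
    """
    while True:
        candidate = secrets.token_bytes(32).hex()
        if key_is_valid(candidate, month, year2):
            return candidate

# A 64-char hex public key ending in "83e0623" would be valid for June 2023:
print(key_is_valid("0" * 57 + "83e0623", 6, 23))  # → True
```

Note that `mine_key` is not something you'd casually run: at ~2^28 expected attempts it is exactly the multi-minute search estimated above.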
Top-level cryptographic identifiers that create an identity separate and distinct from a network location are basically de rigueur for network protocols these days. Almost as if it's a missing layer between IP and TCP? Interesting to see the trend continue in this proposal.
Without an easy way to pay for content online, this will be subject to the same distorting, ad-infiltrating forces as the web in general. Charging even tiny fees for content online resolves a host of problems... while of course creating others.
Something this has in common with similar efforts: the web page is treated as a document (here, a set of documents, i.e. "boards") and not as an application. As a result, commerce is excluded. That's fine if it's what you want, but I think it limits the scale and the appeal to nostalgia. I deprecate lots of things about the modern web, but I like being able to buy things and perform transactions online.
I mused on a very similar concept a few years back, without writing more than a brief treatment. I complicated it a bit for myself by also considering the idea of a periodic delay line loop that has some of the properties of a reblog - a way to circulate ideas repeatedly and add those of others to your own, making the page fresh even when unattended.
We could have gone so, so many different directions.
Still can, I'm thinking.
But I think the basic idea is fine.
Sounds like Fraidycat