Parasitic Web Re-Assembly


Is it a thing? A better way to "store" code?

Take a bunch of website DOMs, use compression to store the locations of code within those DOMs, checksum each piece, and spit out a functioning, completely different website.
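To make the idea concrete, here's a minimal sketch of what the stored packet and the reassembly step could look like. Everything about it (the field names, the manifest shape, the choice of SHA-256) is an assumption for illustration, not a spec.

```ts
// Hypothetical fragment reference: "the bytes I need already exist at this
// offset of this page." All field names are invented for this sketch.
interface FragmentRef {
  uri: string;     // page that already hosts the bytes we want
  start: number;   // character offset into that page's HTML
  length: number;  // how many characters to take
  sha256: string;  // hex digest of the expected slice
}

type Manifest = FragmentRef[]; // ordered; concatenating the slices yields the new site

async function sha256Hex(text: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Pull each slice out of someone else's site, verify it against the stored
// checksum, and stitch the pieces back together.
async function reassemble(manifest: Manifest): Promise<string> {
  const parts: string[] = [];
  for (const ref of manifest) {
    const html = await (await fetch(ref.uri)).text();
    const slice = html.slice(ref.start, ref.start + ref.length);
    if ((await sha256Hex(slice)) !== ref.sha256) {
      throw new Error(`checksum mismatch for ${ref.uri}@${ref.start}`);
    }
    parts.push(slice);
  }
  return parts.join("");
}
```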

Lots of problems obviously, but interesting ones.

1) A compressed packet that just holds URI + string location is probably smaller than an entire website. But is it small enough to survive copy/paste limits on most devices, say under 1k characters?
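A rough sanity check on that 1k budget, using a made-up 50-fragment manifest in the same shape as the sketch above; the URIs, offsets, and fragment count are all invented numbers.

```ts
// Rough size check for problem 1: pack a made-up 50-fragment manifest and see
// whether it fits a ~1,000 character copy/paste budget. Node-flavoured sketch;
// every number here is an assumption.
import { deflateSync } from "node:zlib";
import { randomBytes } from "node:crypto";

const manifest = Array.from({ length: 50 }, (_, i) => ({
  uri: `https://example.com/page-${i}.html`, // hypothetical host pages
  start: 1000 + i * 37,
  length: 200 + (i % 5) * 10,
  sha256: randomBytes(32).toString("hex"), // realistic, incompressible digest
}));

const json = JSON.stringify(manifest);
const packed = deflateSync(Buffer.from(json)).toString("base64");
console.log(`raw JSON: ${json.length} chars, deflate+base64: ${packed.length} chars`);

// The 50 checksums alone carry 50 * 32 = 1,600 bytes of entropy, so even before
// base64 overhead this packet cannot compress under 1k; per-fragment hashes,
// not the URIs, are what blow the budget.
```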

2) How do you efficiently search for the pieces of your site? <a href> this and that, sure, tags repeat, but the English language, JavaScript logic, CSS layout, fonts, image data: how do you determine which pieces of your site are available and which are not before you even start searching? Let's say a website has a <head> tag with a bunch of spider instructions, an HTML version, stored font locations, JavaScript versions, etc. You could have a central repository, sure, but you're not searching for "Hot mommies in my area", you're searching for <script src="uniqueName.js"></script> FAIL, <script src=" SUCCESS, uniqueName.js"></script> FAIL... you can see how this easily leads into recursive madness. You could potentially have a dictionary, so the compiler in your browser would know what is and isn't within the standard language before it compiles (if you're using Node or Vue or something), and it might expect a GitHub page or something for custom libraries; essentially you enforce a paradigm on the dev end. A toy version of such an availability index is sketched after 3) below. But enforcing a standard is silly, not because it's a bad concept, but because people don't do it. That's why we have Node and Vue: vanilla JS wasn't a good standard. Still, you can at least start with bullshit and see if people can build something better on top of it.

3) How do you handle unique cases where no matching code exists anywhere to reference?
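On 2), one way to sidestep the recursive substring search: never scan raw pages for your fragments at all. Pre-index each candidate host page as a set of chunk hashes, then answer "does this page contain my fragment?" with set lookups. A minimal sketch, assuming an arbitrary window size and a toy hash; the harder dictionary questions (which chunks, who hosts the index) are untouched.

```ts
// Availability index for problem 2: hash fixed-size windows of a host page
// once, then test fragment availability with set lookups instead of recursive
// substring searches. Window and step sizes are arbitrary assumptions.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

function indexPage(html: string, window = 64, step = 16): Set<string> {
  const hashes = new Set<string>();
  for (let i = 0; i + window <= html.length; i += step) {
    hashes.add(fnv1a(html.slice(i, i + window)));
  }
  return hashes;
}

// Which of my fragments does this host page even contain? (Only catches
// fragments that start on the chunking grid and are at least one window long:
// a real index would need to be cleverer than this.)
function availableFragments(pageIndex: Set<string>, fragments: string[], window = 64): string[] {
  return fragments.filter((f) => f.length >= window && pageIndex.has(fnv1a(f.slice(0, window))));
}
```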

4) How do you handle broken bits due to information decay and loss on a network?
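One hedge against decay, sketched below with the same made-up manifest shape as earlier: record several candidate locations per fragment and accept the first copy whose checksum still matches.

```ts
// Redundancy sketch for problem 4: every fragment lists several candidate
// locations; take the first copy that still matches its checksum. Field names
// follow the earlier sketches and are assumptions, not a format anyone uses.
interface RedundantRef {
  sha256: string;
  copies: { uri: string; start: number; length: number }[];
}

async function sha256Hex(text: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

async function fetchWithFallback(ref: RedundantRef): Promise<string> {
  for (const copy of ref.copies) {
    try {
      const html = await (await fetch(copy.uri)).text();
      const slice = html.slice(copy.start, copy.start + copy.length);
      if ((await sha256Hex(slice)) === ref.sha256) return slice; // still intact
    } catch {
      // dead host or network error: fall through to the next copy
    }
  }
  throw new Error("every recorded copy of this fragment has decayed or moved");
}
```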

5) Ideally it's a serverless website. How do you verify the security of a site when its entire contents can essentially be copy/pasted as a string and reassembled in the browser?
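One possible integrity story for a serverless, copy/pasteable site: the author signs the manifest string, the client refuses to assemble unless the signature verifies, and the per-fragment checksums then pin the output to exactly what was signed. ECDSA over Web Crypto is just one plausible choice here, not a claim about how this would actually have to work, and key distribution is hand-waved.

```ts
// Problem 5 sketch: verify an author signature over the pasted manifest before
// assembling anything. Where authorPublicKey comes from is left open; ECDSA
// P-256 is an arbitrary but real Web Crypto algorithm.
async function verifyManifest(
  manifestJson: string,
  signature: ArrayBuffer,
  authorPublicKey: CryptoKey,
): Promise<boolean> {
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    authorPublicKey,
    signature,
    new TextEncoder().encode(manifestJson),
  );
}

async function loadSite(manifestJson: string, sig: ArrayBuffer, key: CryptoKey) {
  if (!(await verifyManifest(manifestJson, sig, key))) {
    throw new Error("manifest signature invalid: refusing to assemble");
  }
  // Hand the parsed manifest to the reassembly step; the per-fragment
  // checksums make the final output tamper-evident as well.
  return JSON.parse(manifestJson);
}
```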

6) Rate limiting: what you save in space, you lose in bandwidth and communication. Every page load fans out into one request per referenced fragment, often across many hosts, so total traffic on the network multiplies with the number of web requests being made.

7) Is it even the right solution? Due to tech monopoly, something that should be only marginally expensive thanks to Moore's law, i.e. server space, is instead extremely expensive, producing large amounts of revenue for companies that are already worth billions. Is a silly little tech solution really a reasonable way to counteract monopoly power?

8) How do you compile and run a website like this? A browser extension? That lets browsers choose whether or not to support it, and some browser vendors also sell server space in direct competition with the protocol. A separate browser entirely isn't reasonable for the average person.
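For the extension route, a content script could watch for an embedded manifest, rebuild the real page, and swap it in. The custom script type below and the reassemble() function (from the first sketch) are assumptions, not an existing protocol.

```ts
// Extension sketch for problem 8: a content script that notices a bootstrap
// page carrying a manifest, rebuilds the real site, and replaces the document.
// The MIME type is invented; reassemble() is the function from the first sketch.
declare function reassemble(manifest: unknown): Promise<string>;

async function maybeHydrateParasiteSite(): Promise<void> {
  const holder = document.querySelector(
    'script[type="application/x-parasite-manifest"]',
  );
  if (!holder?.textContent) return; // ordinary page: leave it alone

  const manifest = JSON.parse(holder.textContent);
  const html = await reassemble(manifest);

  // Swap the tiny bootstrap page for the assembled site.
  document.open();
  document.write(html);
  document.close();
}

maybeHydrateParasiteSite();
```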

Problem 2 really seems to be the biggest issue, and then 7 and 8.

Most issues with grabbing web data aren't about the grabbing itself; they come from companies (which themselves scrape and harvest data at ENORMOUS rates) turning around and making it difficult for everyone else to do the same to them.

And then there's the recursive nature of a failed search. It seems to have a similar shape to a dictionary-based brute-force password attack.