Comment by baggy_trough

16 hours ago

A lot of this seems irrelevant these days with https everywhere.

It is not uncommon for enterprises to intercept HTTPS for inspection and logging. They may or may not also do caching of responses at the point where HTTPS is intercepted.

I previously experimented a bit with Squid Cache on my home network for web archival purposes and set it up to intercept HTTPS. I added Squid's CA certificate to the trust store on my client and was then able to intercept and cache HTTPS responses.

In the end, Squid Cache was a bit too inflexible for my goal of keeping the browsed data stored forever.

This Christmas I have been playing with mitmproxy instead. I had previously used mitmproxy for some debugging, and it turns out I can use it for archival by writing a custom addon in Python.

It's working well so far: I browse HTTPS pages in Firefox, persist URLs and timestamps in SQLite, and write the request and response headers plus the response body to disk.
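For anyone curious, here is roughly what such a mitmproxy addon can look like. This is a minimal sketch rather than my exact code; the directory layout, SQLite schema, and hashing scheme are placeholders I made up for illustration.

```python
# archive_addon.py - minimal archiving addon sketch for mitmproxy.
# Run with: mitmdump -s archive_addon.py
import hashlib
import json
import sqlite3
import time
from pathlib import Path

from mitmproxy import http

ARCHIVE_DIR = Path("archive")               # made-up output location
DB_PATH = ARCHIVE_DIR / "index.sqlite"


class Archiver:
    def __init__(self):
        ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
        # check_same_thread=False in case mitmproxy calls hooks from a
        # different thread than the one that created the connection.
        self.db = sqlite3.connect(DB_PATH, check_same_thread=False)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS responses "
            "(url TEXT, fetched_at REAL, status INTEGER, body_path TEXT)"
        )

    def response(self, flow: http.HTTPFlow) -> None:
        # Called once the full response has been read from the server.
        if flow.response is None or flow.response.content is None:
            return
        key = hashlib.sha256(
            f"{flow.request.pretty_url}{time.time()}".encode()
        ).hexdigest()
        body_path = ARCHIVE_DIR / f"{key}.body"
        body_path.write_bytes(flow.response.content)
        # Keep request/response headers next to the body as JSON.
        meta = {
            "url": flow.request.pretty_url,
            "request_headers": dict(flow.request.headers),
            "response_headers": dict(flow.response.headers),
        }
        (ARCHIVE_DIR / f"{key}.meta.json").write_text(json.dumps(meta, indent=2))
        self.db.execute(
            "INSERT INTO responses VALUES (?, ?, ?, ?)",
            (flow.request.pretty_url, time.time(),
             flow.response.status_code, str(body_path)),
        )
        self.db.commit()


addons = [Archiver()]
```

You run it with `mitmdump -s archive_addon.py`, point the browser's proxy settings at it, and trust the mitmproxy CA in the browser.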

My main focus at the moment is archiving some video courses that I paid for in the past, so that even if the site I bought them from ceases operation I will still have them. After that, I will move on to archiving other digital things I've bought, like VST plugins, sample packs, 3D assets, etc.

And after that I will take another shot at archiving all the random pages on the open web that I've bookmarked and so on.

For me, archiving things by using an intercepting proxy is the best way. I have various manually organised copies of files from all over the place, both paid stuff and openly accessible things. But having a sort of Internet Archive of my own with all of the associated pages where I bought things and all the JS and CSS and images surrounding things is the dream. And at the moment it seems to be working pretty well with this mitmproxy + custom Python extension setup.

I am also aware of the various existing web scrapers and self-hosted internet archival systems and have tried a few of them. But for me, the system I'm building is the ideal one.

Some of it is different, but the basics are still the same and still relevant. Just today I've been working with some of this.

I took a Django app that's behind an Apache server, added Cache-Control and Vary headers using Django view decorators (sketch after the list below), and added Header directives for some static files that Apache was serving. This had two effects:

* Meant I could add mod_cache to the Apache server and have common pages cached and served directly from Apache instead of going back to Django. Load testing with vegeta ( https://github.com/tsenart/vegeta ) shows the server can now handle several times more simultaneous traffic than it could before.

* Meant users' browsers now cache all the CSS/JS. As users move between HTML pages, the browser now often makes only one request. Good for snappier page loads with less server load.
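The decorator side looks roughly like this; the view, the max-age value, and the Vary header are illustrative examples, not the actual app's settings:

```python
# views.py - sketch of the Django side; max-age and Vary values are examples.
from django.http import HttpResponse
from django.views.decorators.cache import cache_control
from django.views.decorators.vary import vary_on_headers


@cache_control(public=True, max_age=300)   # Cache-Control: public, max-age=300
@vary_on_headers("Accept-Language")        # Vary: Accept-Language
def landing_page(request):
    # Marked safe for a shared cache (e.g. Apache's mod_cache) to store
    # and serve for five minutes without hitting Django again.
    return HttpResponse("<h1>Hello</h1>")
```

On the Apache side, mod_cache (CacheEnable) honours those Cache-Control headers, and the Header directive can set Cache-Control on static files that never pass through Django at all.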

But yeah, updating the sections on public vs. private caches in particular, with regard to HTTPS, would be good.

If you implement either end of an HTTP connection, caching is still very important.

This website is chock full of site operators raging at web crawlers whose authors didn't bother to implement proper caching.
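The crawler-side fix is mostly just honouring validators like ETag and Last-Modified with conditional requests. A rough sketch, using the requests library; the in-memory cache dict is only for illustration, a real crawler would persist it:

```python
# conditional_fetch.py - sketch of a crawler honouring HTTP validators.
import requests

cache = {}  # url -> {"etag": ..., "last_modified": ..., "body": ...}


def fetch(url: str) -> bytes:
    headers = {}
    entry = cache.get(url)
    if entry:
        if entry.get("etag"):
            headers["If-None-Match"] = entry["etag"]
        if entry.get("last_modified"):
            headers["If-Modified-Since"] = entry["last_modified"]

    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 304 and entry:
        # Not modified: reuse the cached body; the origin did almost no work.
        return entry["body"]

    cache[url] = {
        "etag": resp.headers.get("ETag"),
        "last_modified": resp.headers.get("Last-Modified"),
        "body": resp.content,
    }
    return resp.content
```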

CDNs manage TLS certificates on your behalf, and that is one of the advantages of using them.

An edge node could negotiate HTTPS close to the user, do its caching, and open another HTTPS connection back to your origin server (or reuse an existing one).

HTTPS everywhere, with your CDN in the middle.
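Something like the toy below, in Python rather than Node just to sketch the idea; the certificate paths, origin URL, and naive fixed-TTL cache policy are all made up for illustration, not production code:

```python
# edge_cache.py - toy "terminate TLS near the user, cache, re-encrypt to origin".
import ssl
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

ORIGIN = "https://origin.example.com"   # hypothetical origin server
TTL = 60                                # cache entries for 60 seconds
cache = {}                              # path -> (expires_at, status, headers, body)


class EdgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        entry = cache.get(self.path)
        if entry and entry[0] > time.time():
            _, status, headers, body = entry
            self.send_cached(status, headers, body, hit=True)
            return
        # Cache miss: open a second HTTPS connection to the origin.
        with urllib.request.urlopen(ORIGIN + self.path) as resp:
            body = resp.read()
            headers = [(k, v) for k, v in resp.getheaders()
                       if k.lower() not in ("transfer-encoding", "connection")]
            cache[self.path] = (time.time() + TTL, resp.status, headers, body)
            self.send_cached(resp.status, headers, body, hit=False)

    def send_cached(self, status, headers, body, hit):
        self.send_response(status)
        for k, v in headers:
            self.send_header(k, v)
        self.send_header("X-Cache", "HIT" if hit else "MISS")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    server = ThreadingHTTPServer(("0.0.0.0", 8443), EdgeHandler)
    # Terminate TLS for the user-facing side with the edge's own certificate.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("edge-cert.pem", "edge-key.pem")   # hypothetical files
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```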

How is HTTPS making caching irrelevant?

  • At one point, with plain HTTP, your ISP could run its own cache, large corporate IT networks could have a cache, and so on, which was very efficient for caching but horrible for privacy. Now we have CDN edge caching etc., but nothing like the multi-layer caching that was possible with HTTP.