I know that sometimes the behavior of each archiving service is a bit different. For example, it's possible that both Archive.today and the Internet Archive say they have a copy of a page, but when you open the IA version, it renders completely differently or not at all. It might be because the webpage has something like two scrollbars, or because a redirect fires when a link to the page is loaded. I've noticed this seems to happen on documentation pages hosted by Salesforce. It can be a pain if you want to save an online backup copy of a release note or something like that for everyone to easily reference in the future.
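If you just want to check whether the Internet Archive at least claims to have a snapshot before relying on it, a rough sketch against their public availability endpoint looks something like this (the page URL here is just a made-up placeholder, and this only tells you a snapshot exists, not whether it renders properly):

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Hypothetical page to check; substitute whatever URL you care about.
    page = "https://example.com/docs/release-notes"

    # Ask the Wayback Machine's availability endpoint whether it has a snapshot.
    query = urlencode({"url": page})
    with urlopen("https://archive.org/wayback/available?" + query) as resp:
        data = json.load(resp)

    # "archived_snapshots" is empty if nothing is available; whether the
    # snapshot actually renders still has to be checked by eye.
    print(data.get("archived_snapshots", {}))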
Archive.today has just about everything the archived site doesn't want archived. Archive.org doesn't, because it lets sites delete archives.
Wayback machine removes archives upon request, so there’s definitely stuff they don’t make publicly available (they may still have it).
You don't even need to submit requests if you are the owner of the URL. robots.txt changes are applied retroactively, which means you can disallow crawls of /abc, request a re-crawl, and all past snapshots that match the new rule will be removed.
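For what it's worth, here's a rough sketch of what such a rule looks like and which paths it matches, using Python's standard robotparser (example.com and /abc are just placeholders). The matching is all the file itself does; hiding or removing existing snapshots of matching paths is the archive's policy layered on top of it.

    from urllib import robotparser

    # A minimal robots.txt that disallows everything under /abc.
    robots_txt = """\
    User-agent: *
    Disallow: /abc
    """

    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())

    # Existing snapshots of pages under /abc would match the new rule.
    print(parser.can_fetch("*", "https://example.com/abc/release-notes"))  # False
    print(parser.can_fetch("*", "https://example.com/other-page"))         # True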
Trying to search the Wayback machine almost always gives me their made-up 498 error, and when I do get a result the interface for scrolling through dates is janky at best.
Accounts to bypass paywalls? They have the audacity to do that?
Oh yeah, those were a thing. As a public organization they can't really do that.
I personally just don't use websites that paywall important information.