Comment by ornornor
4 days ago
> cannot detect premature EOF during the file transfer. It keeps the incomplete file in the cache where the sha hash fails until you wipe your entire cache.
I wonder what circumstances led to saying “this is okay we’ll ship it like that”
I think we can partly blame the IO streaming API in NodeJS for this. It's callback-driven: you only learn that another chunk arrived, not whether the stream ended cleanly. My guess is chunked mode combined with never checking that the bytes received matched the bytes expected.
Not to diminish the facepalm but the blame can be at least partially shared.
Our UI lead was getting the worst of this during Covid. I set up an nginx forward proxy, mostly for him, to tone this down a notch (it fixed a separate issue but helped here a bit as well) so he could get work done on his shitty ISP.
Ignorance. Most programmers in open source operate on a "works on my machine" basis.
True, and things that manifest only on old/slow hardware or on bad internet are the worst kind for this, since 100% of developers who have any say in the matter would never accept such circumstances at all, so they’re always approaching every issue with multi-gigabit speeds, zero latency, and this year’s $3,000 Mac. “What do you mean the page loads slowly?”
A customer of mine has several API endpoints whose implementations start with a simulate_slow_connection method that basically sleeps for a random number of ms. I think it sleeps 0 ms when running tests, and it definitely sleeps 0 ms in production. So it's never super fast, even on $3,000 Macs.
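The idea can be sketched like this, keeping the original method name; the environment check, the `maxDelayMs` parameter, and the `getUser` handler are all hypothetical, since only the concept (random delay in development, zero elsewhere) comes from the comment above:

```javascript
// Hypothetical sketch: delay each endpoint by a random number of ms in
// development, and by 0 ms in tests and production, so endpoints never
// feel instantaneous on fast hardware during development.
function simulate_slow_connection(maxDelayMs = 500) {
  // Assumption: NODE_ENV distinguishes environments; a real setup may differ.
  const env = process.env.NODE_ENV || 'development';
  const delay = env === 'development' ? Math.floor(Math.random() * maxDelayMs) : 0;
  return new Promise((resolve) => setTimeout(resolve, delay));
}

// Usage at the top of an (illustrative) endpoint handler:
async function getUser(req, res) {
  await simulate_slow_connection();
  // ... real handler logic ...
}
```

Because the delay lives in the endpoint rather than the network, every developer feels the slowness regardless of how fast their machine or connection is.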