Comment by PunchyHamster
1 day ago
Our developers once managed to run through around 750MB per open website.
They put in a ticket with ops that the server was slow and could we look at it. So we looked. Every single video on a page with a long video list pre-loaded a part of itself. The only reason the site didn't run like shit for them is because the office had direct fiber to our datacenter a few blocks away.
We really shouldn't allow web developers more than 128kbit of connection speed, anything more and they just make nonsense out of it.
PSA for those who aren’t aware: Chromium/Firefox-based browsers have a Network tab in the developer tools where you can dial down your bandwidth to simulate a slower 3G or 4G connection.
Combined with CPU throttling, it's a decent sanity check to see how well your site will perform on more modest setups.
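The DevTools throttle lives in the GUI, but the mechanism behind that kind of bandwidth limit is essentially a token bucket. If you wanted a similar effect inside your own automated test harness, a minimal sketch might look like this (all names here are made up for illustration, not any real browser or library API):

```python
import time

class BandwidthThrottle:
    """Minimal token-bucket throttle; rate is in bytes per second."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.tokens = rate_bytes_per_sec  # allow one initial burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until nbytes of 'bandwidth' is available, then spend it."""
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at one second's worth.
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Push five 100 KB chunks through a simulated 200 KB/s link.
# The first two ride the initial burst; the rest wait ~0.5 s each.
throttle = BandwidthThrottle(200_000)
start = time.monotonic()
for _ in range(5):
    throttle.consume(100_000)
elapsed = time.monotonic() - start
```

Wrap your test harness's reads or writes in `consume()` calls and slow-network bugs show up without touching the browser at all.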
I once spent around an hour optimizing a feature because it felt slow - turns out the slower simulated connection had just stayed enabled after a restart (can't remember whether it was the browser or the OS restarting, but I'd needed it earlier and simply forgot to turn it off). Good times - useful feature, though!
Working as intended!
hahaha - I've done something similar. I had an automated vitest harness running and at one point it ended up leaving a bunch of zombie procs/threads just vampirically leeching the crap out of my resources.
I naturally assumed that it was my code that was the problem (because I'm often the programmer equivalent of the Seinfeld hipster doofus) and spent the next few hours optimizing the hell out of it. It turned out to be unnecessary but I'm kind of glad it forced me into that "profiling" mindset.
Have the same story but I forgot to disable tc netem on a server, luckily it was just staging.
Imagine the speed of those optimizations once you turned it off. lol. Love it!
1 reply →
I wonder if that would help on old computers that freeze up when you try to load the gigabyte-of-JS ad-auction-circus news website. I want to browse already-loaded pages while the new tabs load. If the client just hangs for 2 min, it gets boring fast.
10 replies →
Sounds like the macOS network utility. I've been bitten by leaving it on after testing iOS apps :D
I still test mine on GPRS, because my website should work fine in the Berlin U-Bahn. I also spent a lot of time working from hotels and busses with bad internet, so I care about that stuff.
Developers really ought to test such things better.
Thank you for doing this! I really mean it. We need more developers who care about keeping websites lean and fast. There's no good reason a regular site shouldn't work on GPRS, except maybe if the main content is video.
1 reply →
It doesn't throttle WebSockets, so be careful with that.
For macOS users you can download the Network Link Conditioner preference pane (it still works in the System Settings app) to do this system wide. I think it's in the "Additional Tools for Xcode" download.
This made me chuckle.
I had a fairly large supplier that was so proud they had implemented functionality that deliberately (in their JS) delays handling of HTTP responses, so they could showcase all the UI touches like progress bars and spinning circles. It was an option in system settings you could turn on globally.
My mind was blown, are they not aware of F12 in any major browser? They were not, it seems. After I quietly asked about that, they removed the whole thing equally quietly and never spoke of it again. It's still in release notes, though.
That was about 2 years ago, and browsers had already been able to do this for 10-14 years by then (depending how you count).
That's great. Well, just to let them know if they ever need something like that in the future, I'm available for hire as an overpriced consultant.
I guarantee with 100% satisfaction that my O(n^n) code will allow visitors sufficient time to fully appreciate the artistic glory of all the progress bars and spinners.
For Firefox users, here's where it's hidden (and it really is hidden): Hamburger menu -> More tools -> Web developer tools, then keep clicking on the ">>" until the Network tab appears, then scroll over on about the third menu bar down until you see "No throttling", that's a combobox that lets you set the speed you want.
Alternatively, run uBlock Origin and NoScript and you probably won't need it.
What a weird comment, not sure what you are trying to achieve. Any web developer knows how to find the network tab of the web developer tools in any browser including Firefox, and then the throttle option is immediately there.
You can make it look like any feature in any UI is hidden by choosing the longest path to reach it, using many words to describe it despite the target audience already knowing this stuff, and making your windows as small as possible.
Moreover, that a developer tool is a bit hidden in submenus in a UI designed for nontechnical users is fair game.
Even considering this, right click > inspect or Ctrl+shift+k also gets you the web developer tools. Not that hidden.
And then usually the network tab is visible immediately, it is one of the first tabs unless you moved it towards the end (even then, usually all the tabs are visible; but it's nice you can order the tabs as you want, and that a scroll button exists for when your window is too small -- and if the web developer panel is too small because it's docked at the left you can resize it, dock it to bottom or undock it).
This stuff is pretty standard across browsers, it's not like Firefox's UI is specifically weird for this. I don't have ideas for improving this a lot, it looks quite well designed and optimized to me already.
And no, uBlock Origin and NoScript can't help you optimize the size of the web page you are working on; you ought to unblock everything to do that. They are a solution for end users, who have very few reasons to use the throttle feature. And unfortunately for end users, blocking scripts breaks too much to be a good general workaround against web pages being too heavy. I know - I browse the web like this.
1 reply →
Peanuts! My wife’s workplace has an internal photo gallery page. If your device can cope with it and you wait long enough, it’ll load about 14GB of images (so far). In practice, it will crawl along badly and eventually just crash your browser (or more), especially if you’re on a phone.
The single-line change of adding loading=lazy to the <img> elements wouldn’t fix everything, but it would make the page at least basically usable.
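For illustration, the kind of mechanical fix being described could even be scripted - here's a naive, hypothetical sketch that bolts `loading="lazy"` onto `<img>` tags that don't already declare a loading behavior (the real fix belongs in the page template, and this regex deliberately ignores edge cases like self-closing tags):

```python
import re

def add_lazy_loading(html: str) -> str:
    """Naive sketch: add loading="lazy" to <img> tags that don't set it."""
    def patch(match):
        tag = match.group(0)
        if "loading=" in tag:
            return tag  # leave explicit choices (e.g. loading="eager") alone
        # Drop the closing '>' and re-append it after the new attribute.
        return tag[:-1] + ' loading="lazy">'
    return re.sub(r"<img\b[^>]*>", patch, html)

page = '<img src="a.jpg"><img src="b.jpg" loading="eager">'
print(add_lazy_loading(page))
# → <img src="a.jpg" loading="lazy"><img src="b.jpg" loading="eager">
```

With `loading="lazy"`, the browser defers fetching each image until it's near the viewport, which is exactly what a 14GB gallery page needs.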
Haha, excellent. Presumably all the images are full res and haven't been scaled down for the web at all?
Could it be any other way?
Amazing. Well, any employee that wants more ram could use that internal site as an excuse.
"Why do you want 64 GB RAM in your laptop?"
"I need that to load the gallery"
> We really shouldn't allow web developers more than 128kbit
Marketing dept. too. They're the primary culprits in all the tracking scripts.
Reserve a huge share of the blame for the “UX dEsIgNeRs”. Let’s demand to reimplement every single standard widget in a way that has 50% odds of being accessible, has bugs, doesn’t work correctly with autofill most of the time, and adds 600kB of code per widget. Our precious branding requires it.
> Let’s demand to reimplement every single standard widget in a way that has 50% odds of being accessible, has bugs, doesn’t work correctly with autofill most of the time, and adds 600kB of code per widget.
You're describing the web developers again. (Or, if UX has the power to demand this from software engineering, then the problem is not the UX designers.)
5 replies →
often we're told to add Google XSS-as-a-serv.. I mean Tag Manager, then the non-tech people in Marketing go ham without a care in the world beyond their metrics. Can't blame them, it's what they're measured on.
Marketing and managers should be restricted as well, because managers set the priorities.
We should 100% blame them.
I recently had to clean up a mess and after days asking what’s in use and what’s not, turns out nothing is really needed, and 80 tracking pixels were added “because that’s how we do it”.
You can still make a site unusable without having it load lots of data. Go to https://bunnings.com.au on a phone and try looking up an item. It's actually faster to walk around the store and find an employee and get them to look it up on an in-store terminal than it is to use their web site to find something. A quick visit to profiles.firefox.com indicates it's probably more memory than CPU, half a gigabyte of memory consumed if I'm interpreting the graphical bling correctly.
How gaslit must I be to remark that this is more painless to use than literally any North American store website I've used.
Less useless shit popping up (with ad block, so I mean just the cookie banners, store-location prompts, and the like). The store selector didn't request new pages every time I did anything, triggering all the popups again ("just download our spyware and all these popups will go away!"). Somehow my page loads are snappier than my local stores' despite being across the planet.
Not saying it's a good site. It's almost the same as Home Depot. Just slightly better. I mean there's an AI button for searching for a product so you can do agentic shopping with a superintelligence on your side.
You don't even need video for this: I once worked for a company that put up a carousel with everything in the product line, and every element pointed straight at the high-resolution photography assets: the kind that might be useful for full-page print ads. 6000x4000 PNGs. It worked fine in the office, they said. Add another nice background that size, a few more on the sides as you scroll down...
I was asked to look at the site when it was already live, and some VP of the parent company decided to visit the site from their phone at home.
Many web application frameworks already have extensive built-in optimization features, but examples like the one you shared show that many people contributing to the modern web don't grasp the fundamentals - and don't realize these frameworks won't catch such mistakes for them in many cases. It speaks to an overreliance on the tools and a critical lack of understanding of the technologies they coexist with.
Same for fancy computers. Dev on a fast one if you like, but test things out on a Chromebook.
“Craptop duty”[1]. (Third time in three years I’m posting an essentially identical comment, hah.)
[1] https://css-tricks.com/test-your-product-on-a-crappy-laptop/
I now wonder if it'd be a good idea to move our end to end tests to a pretty slow vm instead of beefy 8 core 32gb ram machine and check which timeouts will be triggered because our app may have been unoptimized for slower environments...
3 replies →
Gonna bookmark that article for tomorrow, craptop duty is such a funny way to put it.
Similarly, a colleague I had before insisted on using a crappy screen. Helped a lot to make sure things stay visible on customers’ low contrast screens with horrible viewing angles, which are still surprisingly common.
Music producers often have some shitty speakers known as grot boxes that they use to make sure their mix will sound as good as it can on consumer audio, not just on their extremely expensive studio monitors. Chromebooks are perfectly analogous. As a side note, today I learned that Grotbox is now an actual brand: https://grotbox.com
Doesn't having a brand for that kinda go against the definition?
1 reply →
Based on the damage rate for company laptop screens, one can usually be sure anything high-end will be out of your own pocket. =3
Should also give designers periodically small displays with low maximum contrast, and have them actually try to achieve everyday tasks with the UX they have designed.
There's essentially zero chance the developers get to make choices about the ads and ad tracking.
I wouldn't even guarantee it's developers adding it. I'm sure they have some sort of content management system for doing article and ad layout.
Yes, and a machine that is at least two generations behind the latest. That will cut down on bloat significantly.
If you want to see context aware pre-fetching done right go to mcmaster.com ...
There are good reasons to have a small cheap development staging server, as the rate-limited connection implicitly trains people what not to include. =3
And this! https://www.mcmaster.com/help/api/ Linked from the footer of every page!
I'm so happy to have seen their web site that I want to do business with them, even though I have no business to be done.
Making it easy to buy stuff from them definitely helps their bottom line. Unfortunately the few companies I've wanted to buy from but their website was horrible and made me go elsewhere, either completely ignored or dismissed my complaints about having just lost a customer.
Some CAD/CAM applications directly integrate a component toolbox. =3
[dead]
Well, as long as the website is already fully loaded and responsive, and the videos show a thumbnail/placeholder, you're not blocked by that. Preloading, even very aggressive preloading, is a thing nowadays. It's hostile to the user (because it burns network traffic they pay for), but project managers will often override that to maximize gains from ad revenue.
This is a general problem with lots of development: network, memory, GPU speed. The designer/engineer is on a modern Mac with 16-64 GB of RAM and fast internet. They never check how their code/design works on some low-end Intel UHD 630 or whatever. Lots of developers make 8-13-layer blob backgrounds that run at 60 or 120fps on their modern Mac but at 5-10fps on the average person's PC because of 15x overdraw.
I used the text web (https://text.npr.org and the like) through Lynx. Also Usenet, Gopher, Gemini, some 16 kbps Opus streams - everything under 2.7 kbps when my phone data plan was throttled and I was using it in tethering mode. Tons of sites did work, but gopher://magical.fish ran really fast.
BitlBee saved (and still saves) my ass, with tons of protocols available via IRC using nearly nil data to connect. And you can connect with any IRC client from the early '90s onward.
Not just web developers. Electron lovers should be throttled with 2GB-of-RAM machines and some older Celeron/Core Duo box with a GL 2.1-compatible video card. If the desktop 'app' is smooth on that machine, your project is ready.
I'm pretty damn sure those videos were put on the page because someone in marketing wanted them. I'm pretty sure then QA complained the videos loaded too slowly, so the preloading was added. Then, the upper management responsible for the mess shrugged their shoulders and let it ship.
You're not insightful for noticing a website is dog slow or that there is a ton of data being served (almost none of which is actually the code). Please stop blaming the devs. You're laundering blame. Almost no detail of a web site or app is ever up to the devs alone.
From the perspective of the devs, they expect that the infrastructure can handle what the business wanted. If you have a problem you really should punch up, not down.
> Please stop blaming the devs. You're laundering blame. Almost no detail of a web site or app is ever up to the devs alone.
If a bridge engineer is asked to build a bridge that would collapse under its own weight, they will refuse. Why should it be different for software engineers?
Because software engineers aren't real engineers. A real engineer has liability insurance.
Because bridge engineers can be sued if the bridge kills people
1 reply →
It's a website and not a bridge. Based on the description given, it's not a critical website either. If it was, the requirements would have specified it must be built differently.
You're not even arguing with me BTW. You're arguing against the entire premise of running a business. Priorities are not going to necessarily be what you value most.
3 replies →
this isn't purely laundering blame. it is frustrating for the infrastructure/operations side that dev teams routinely kick the can down to them instead of documenting the performance/reliability weak points. in this case, when someone complains about the performance of the site, both dev and qa should have documented artifacts that explain this potential. as an infrastructure and reliability person, i am happy to support this effort with my own analysis. i am less inclined to support the dev team that just says, "hey, i delivered what they asked for, it's up to you to make it functional."
> From the perspective of the devs, they expect that the infrastructure can handle what the business wanted. If you have a problem you really should punch up, not down.
this belittles the intelligence of the dev team. they should know better. it's like saying "i really thought i could pour vodka in the fuel tank of this porsche and everything would function correctly. must be porsche's fault."
Yes, but can you blame someone for trying when all the gas stations are 1000 miles away? That's the exact situation the devs are put in all the time.
Oh, and the rest of the business doesn't even know what a car or gasoline are!
1 reply →
"Developers" here clearly refers to the entire organization responsible. The internal politics of the foo.com providers are not relevant to Foo users.
I agree except for your definition of "developers". I see this all the time and can't understand why the blame can't just be the business as a whole instead of singling out "developers". In fact, the only time I ever hear "developers" used that way it's a gamer without a job saying it.
The blame clearly lies with the contradictory requirements provided by the broader business too divorced from implementation details to know they're asking for something dumb. Developers do not decide those.
And the devs are responsible for finding a good technical solution under these constraints. If they can't, they're responsible for communicating their constraints to the rest of the team so a better tradeoff can be found.
Fuck that. I just left a job where the IT dept just said "yes and" to the executives for 30 years. It was the most fucked environment I've ever seen, and that's saying a lot coming from the MSP space. Professionals get hired to do these things so they can say "No, that's a terrible idea" when people with no knowledge of the domain make requests. Your attitude is super toxic.
I suppose the realities of teamwork can be seen as "toxic" by some individuals.
3 replies →
Sounds just like a "helpless" dev that shifts blame to anyone but themselves.
Do you have a suggestion how else to handle the situation I described?
8 replies →
The devs are the subject matter experts. Does marketing understand the consequences of preloading all those videos? Does upper management? Unlikely. It’s the experts’ job to educate them. That’s part of the job as much as writing code is.
In general, how people communicate internally and with the public is important.
https://en.wikipedia.org/wiki/Conway's_law
Have a wonderful day =3
From the perspective of the devs, they have a responsibility for saying when something literally won't fly anywhere, ever. Saying the business is responsible for every bad decision is a complete abrogation of your responsibilities.
Why don't you tell your boss or team something like that and see how well that flies.
The responsibility of the devs is to deliver what was asked. They can and probably do make notes of the results. So does QA. So do the other stakeholders. On their respective teams they get the same BS from everyone who isn't pleased with the outcome.
Ultimately things are on a deadline and the devs must meet requirements where the priority is not performance. It says nothing about their ability to write performant code. It says nothing about whether that performant code is even possible in a browser while meeting the approval of the dozens of people with their own agendas. It says everything about where you work.
6 replies →