Comment by lucb1e

20 hours ago

AWS has similar RAM consumption. I close Signal to make sure it doesn't crash and corrupt the message history when I need to open more than one browser tab with AWS in the work VM. After clicking through a few pages, one AWS tab was at something like 1.4GB (edit: found it in my message history; yes, it was "20% of 7GB" = 1.4GB precisely).

Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on them that take gigabytes (AWS), or pages that look simple but take everything your browser has to render at what looks like 22 fps? (Reddit's new UI and various blogs I've come across.) Or the page runs smoothly but your CPU lifts off while the tab is in the foreground? (e.g. DeepL's translator)

Every time, I wonder if they had an LLM try to get some new feature or bugfix to work and it made poor choices performance-wise, but it passes the unit tests so the LLM thinks it's done, and it also looks good visually on their epic developer machines.

I think a big problem is that many web frameworks let you write these kinds of complex apps that just "work", but performance is often not part of the equation,

so it looks fine during basic testing but scales really badly.

like, for example, the Claude/OpenAI web UIs: at first they would literally lag so badly, because they used simple update mechanisms that re-rendered the entire conversation history every time the new response text was updated

and with those console UIs, one thing that might be happening is that it's basically multiple webapps layered (per team/component/product) and they all load the same stuff multiple times etc...
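The re-render-everything pattern described above can be sketched in a few lines. This is an illustrative, framework-free TypeScript model (not actual Claude/OpenAI code); `renderedNodes` is a hypothetical stand-in for DOM work:

```typescript
// Illustrative sketch: cost of re-rendering the whole conversation on every
// streamed token vs. appending only the delta. All names are hypothetical.

type Message = { role: string; text: string };

let renderedNodes = 0; // stand-in counter for DOM work performed

// Naive approach: rebuild every message whenever anything changes.
function renderAll(history: Message[]): void {
  for (const _msg of history) renderedNodes++;
}

// Incremental approach: only touch the message that actually changed.
function renderDelta(_updated: Message): void {
  renderedNodes++;
}

const history: Message[] = Array.from({ length: 200 }, (_, i) => ({
  role: i % 2 ? "assistant" : "user",
  text: "…",
}));

// Stream 50 tokens into the last message, naively: 200 messages * 50 updates.
renderedNodes = 0;
for (let t = 0; t < 50; t++) renderAll(history);
const naiveCost = renderedNodes; // 10000 units of render work

// Same 50 tokens, incrementally: one message touched per update.
renderedNodes = 0;
for (let t = 0; t < 50; t++) renderDelta(history[history.length - 1]);
const incrementalCost = renderedNodes; // 50 units of render work

console.log(naiveCost, incrementalCost); // 10000 50
```

The gap grows linearly with conversation length, which is why long chats were the ones that lagged.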

  • The Grok Android app is terrible in that sense. Just typing a question at normal speed makes half of the characters not appear, due to whatever unoptimized shit the app does after each keystroke.

    • Sounds quite overengineered. CEOs have basically no idea what they're doing these days. If this were my company, I'd start by cutting 80% of staff and 80% of the code bloat.

      4 replies →

  • it's unironically just React lmao, virtually every popular React app has an insane number of accidental rerenders triggered by virtually everything, causing it to lag a lot

    • well, that's any framework with a vdom, the GC of web frameworks, so I'd imagine it's also a problem with Vue etc.

      I don't understand, though, why performance (i.e. using it properly) is not a consideration for these companies that are valued above $100 billion

      like, do these poor pitiful big tech companies only have the resources to do so when they hit the 2 trillion mark or something?

      9 replies →

    • I think LinkedIn is built with Ember.js, not React, last I checked…

      The problem with performance in web apps is often not the "omg, too much rendering". It's actually processing and memory use. Chromium loves to eat as much RAM as possible, and the state-management world of web apps loves immutability. What happens when you create new state any time something changes, and V8 then needs to recompile an optimized structure for that state, coupled with thrashing the GC? You already know.

      I hate the immutability trend in web apps. I get it, but the performance is dogshite. Most web apps I have worked on spend about 10% of their CPU time… garbage collecting, and the rest doing complicated deep state comparisons every time you hover over a button.

      Rant over.
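For what it's worth, immutable updates don't have to mean cloning the whole state tree or deep-comparing it afterwards. A rough sketch of structural sharing, with hypothetical state shapes (not from any particular app): only the changed path is reallocated, and change detection becomes a reference check instead of a deep comparison.

```typescript
// Illustrative sketch: immutable update with structural sharing.
// Only objects along the changed path are reallocated; untouched branches
// are reused by reference, so "did this change?" is a pointer comparison.

interface AppState {
  profile: { name: string; avatar: string };
  feed: { posts: string[] };
}

function setName(state: AppState, name: string): AppState {
  return {
    ...state,
    profile: { ...state.profile, name }, // new objects only along this path
    // state.feed is reused as-is: no allocation, no extra GC pressure
  };
}

const prev: AppState = {
  profile: { name: "old", avatar: "a.png" },
  feed: { posts: ["hello"] },
};
const next = setName(prev, "new");

console.log(next.feed === prev.feed);       // true: branch reused, shallow check suffices
console.log(next.profile === prev.profile); // false: this branch actually changed
```

That reference-equality property is what lets a framework skip re-rendering the untouched subtree without ever walking it.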

I was researching laptops at BestBuy and every page took ages to load, was choppy when scrolling, caused my iPhone 13 mini to get uncomfortably hot in my hand and drained my battery fast. It wouldn’t be noticeably different if they were crypto-mining on my iPhone as I browsed their inventory.

It’s astonishing how bad the experience was.

  • Best Buy is actually one of the worst and slowest websites from any large retailer. I cannot believe how bad it is. It's like they set out to make it pretty and accidentally stepped in molasses.

    • The irony! My router died literally an hour ago, and I was on Best Buy's site to buy a new one, over a 5G connection. That was probably the worst shopping experience I've had in a while...

> Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on it that take gigabytes (AWS), or pages that look simple but it takes your browser everything it has to render it at what looks like 22 fps?

It has to do with websites essentially baking in their own browser, written in JavaScript, to track as much user behavior as possible.

  • Spot on. It's why I quit adtech in 2015. Running realtime auctions server-side is one thing, but building what basically amounts to live-feed screen capture…

    • I do live-feed screen capture, and it doesn't really consume much; it's barely noticeable. Running 100 live-feed screen captures is a different story, though.

My company started using Slack in 2015, and at the time I filed a bug report with Slack that their desktop app was using more memory than my IDE on a 1M+ LOC C++ project. I used to stop Slack to compile…

I noticed that there's a developing trend of "who manages to use the most CSS filters" among web developers, and it was there even before LLMs. Now that most of the web is slop in one form or another, and LLMs seem to have been trained on the worst of the worst, every other website uses an obscene amount of CSS backdrop-filter blur, which slows down software renderers and systems with older GPUs to a crawl.

When it comes to DeepL specifically, I once opened their main page and left my laptop for an hour, only to come back to find it steaming hot. It turns out there's a video near the bottom of the page (the "DeepL AI Labs" section) that got stuck in a SEEKING state, repeatedly triggering a pile of Next.js/React crap that would seek the video back, causing the SEEKING event, and thus itself, to be triggered again.
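That kind of feedback loop, and the usual fix (a re-entrancy guard), can be sketched with a mock player. This is a hypothetical TypeScript model, not DeepL's actual code; `MockVideo` stands in for an `HTMLVideoElement` firing its "seeking" event:

```typescript
// Hypothetical sketch of the loop described above: a "seeking" handler that
// itself seeks will re-fire the event forever. A re-entrancy flag breaks it.

class MockVideo {
  currentTime = 0;
  private handlers: Array<() => void> = [];
  onSeeking(h: () => void): void { this.handlers.push(h); }
  seek(t: number): void {
    this.currentTime = t;
    for (const h of this.handlers) h(); // fires the "seeking" event
  }
}

let handlerCalls = 0;
let restoring = false; // re-entrancy guard
const video = new MockVideo();

video.onSeeking(() => {
  handlerCalls++;
  if (handlerCalls > 1000) throw new Error("runaway seek loop");
  if (restoring) return; // without this guard, seek(0) below re-triggers
  restoring = true;      // this handler in an endless cycle
  video.seek(0);         // "restore" the playback position
  restoring = false;
});

video.seek(42);            // the player (or user) seeks once
console.log(handlerCalls); // 2: the initial event plus one guarded re-entrant fire
```

Remove the `restoring` check and the handler recurses until the depth cap trips, which is roughly the hot-laptop scenario above, minus the smoke.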

I wish Google would add client-side resource use to Web Vitals and start demoting poorly performing pages. I'm afraid this isn't going to change otherwise; with first complaints dating back to mid-2010s, browsers and Electron apps hogging RAM are far from new and yet web developers have only been getting increasingly disconnected from reality.

Hit this exact wall with desktop wrappers. I was shipping an 800MB Electron binary just to orchestrate a local video processing pipeline.

Moved the backend to Tauri v2 and decoupled heavy dependencies (like ffmpeg) so they hydrate via Rust at launch. The macOS payload dropped to 30MB, and idle RAM settled under 80MB.

Skipping the default Chromium bundle saves an absurd amount of overhead.

So many sites… they're all built as web apps these days when they don't need to be. And they're all full of tracking and "telemetry"…

Yes, it's sometimes extreme. I often wondered if it was my FF browser, but then I'd switch to Opera or Brave and would see the same pattern.

It's quite insane.

If you're talking about the AWS management UI, I haven't used it recently, but I can tell you that the Azure one is no better. One of the stupidest things I remember is that it somehow managed to reimplement a file upload form for one of their storage services such that it would read the whole file into memory before sending it to the server. For a storage service meant for very large files (dozens of gigabytes or more).
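The difference between that and doing it properly is just streaming. A hedged Node/TypeScript sketch (not the actual Azure portal code; file path and chunk size are arbitrary): buffering peaks at the full file size, streaming peaks at one chunk regardless of file size.

```typescript
// Hypothetical sketch (Node, not Azure portal code): buffering a whole file
// in memory before upload vs. streaming it in fixed-size chunks.
import { createReadStream, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const path = join(tmpdir(), "upload-demo.bin");
writeFileSync(path, Buffer.alloc(1024 * 1024)); // 1 MiB stand-in for a huge file

// Bad: peak memory equals the whole file size (what the portal did).
const whole = readFileSync(path);
console.log(whole.length); // 1048576 bytes resident at once

// Good: peak memory equals one chunk, regardless of file size.
async function streamedUpload(p: string): Promise<number> {
  let sent = 0;
  const stream = createReadStream(p, { highWaterMark: 64 * 1024 }); // 64 KiB chunks
  for await (const chunk of stream) {
    sent += (chunk as Buffer).length; // in reality: send this chunk to the server
  }
  return sent;
}

streamedUpload(path).then((sent) => console.log(sent === whole.length)); // true
```

With a dozens-of-gigabytes blob, the first version simply dies in the tab; the second never holds more than 64 KiB of file data at once.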