Comment by shermantanktop

15 hours ago

Well you can do a lot with 640k…if you try. We have 16G in base machines and very few people know how to try anymore.

The world has moved on, that code-golf time is now spent on ad algorithms or whatever.

Escaping the constraint delivered a different future than anticipated.

> you can do a lot with 640k…if you try.

It is no longer economically viable to try.

"XYZ Corp" won't allow their developers to write their desktop app in Rust because they want to consume only 16MB RAM, then another implementation for mobile with Swift and/or Kotlin, when they can release good enough solution with React + Electron consuming 4GB RAM and reuse components with React Native.

  • Strangely enough, AI could turn this on its head. You can have your cake and eat it too, because you can tell Claude/Codex/whatever to build you a full-featured Swift version for iOS and Kotlin for Android and whatever you want on Windows and Mac. There's still QA for the different builds, but you already have to QA each platform separately anyway if you really care that they all work, so in theory that doesn't change.

    Of course, it's never that simple in reality; you need developers who know each platform for that to work, because you must run the builds and tell the AI what it's doing wrong and iterate. Currently, you can probably get away with churning out Electron slop and waiting for users to complain about problems instead of QAing every platform. Sad!

The simple fact is that a 16 GB RAM stick costs much less than the development time to make the app run on less.

  • > The simple fact is that a 16 GB RAM stick costs much less than the development time to make the app run on less.

    The costs are borne by different people: development by the company, RAM sticks by the customer.

    A company is potentially (silently?) adding to the cost of the product/service that the customer has to bear, by requiring them to have more RAM (or to have the same amount but be able to do less with it).

    • Yep, and since companies care about TCO, they reward the software with the lower TCO, which happens to be the one that uses more RAM but is cheaper to produce.

People get hung up on bad optimization. If you are working at sufficiently large scale, yes, thinking about bytes might be a good use of your time.

But most likely, it's not. At a system level we don't want people to do that. It's a waste of resources. Making a virtue out of it is bad, unless you care more about bytes than humans.

  • These bytes are human lives. The bytes and the CPU cycles translate to software that takes longer to run, that is more frustrating, and that lets people accomplish less in more time than they could, or should. Take too much, and you prevent them from using other software in parallel, compounding the problem. Or you force them to upgrade hardware early, taking away money they could better spend on other areas of their lives. All this scales with the number of users, so for most software with any user base, not caring about bytes and cycles wastes far more people-hours than it saves in dev time.

    • Creating people able to do these optimizations also costs human lives: time that is not spent on other things, like building the unoptimized version of another product.
