
Comment by YZF

17 hours ago

I was looking at a production service we run that was using a few GBs of memory. When I add up all the actual data needed in a naive compact representation I end up with a few MBs. So much waste. That's before thinking of clever ways to compress, or de-duplicate or rearrange that data.

Back in the day, getting the 16KB expansion pack for my 1KB RAM ZX81 was a big deal. I also wrote code for PIC microcontrollers that have 768 bytes of program memory (and 25 bytes of RAM). It's just so easy not to think about efficiency today: you write one line of code in a high-level language and you blow through more bytes than these platforms even had, without doing anything useful.

Long ago, working for a retail store chain, I made an Excel DSL to encode business rules for updating inventory spreadsheets. While coding I realized that their Excel template had a bunch of cells containing only whitespace down on row 100000. This forced Excel to store the sparse matrix for the entire 0:100000 region, adding hundreds of KB per file for no reason, multiplied by thousands of these files across their internal network. Out of curiosity I added empty-cell cleaning to my DSL, and I think I managed to fit the entire company's Excel file set on a small SD card (circa 2010).
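The DSL's actual cleaning pass isn't shown, and the real fix was Excel-specific; as an illustration of the idea only, here is a Go sketch (the function name `trimTrailingBlankRows` is invented here) that drops trailing rows whose cells are all empty or whitespace from an in-memory grid:

```go
package main

import (
	"fmt"
	"strings"
)

// trimTrailingBlankRows drops rows from the end of a grid whose cells are
// all empty or whitespace-only -- the kind of stray cells that force a
// spreadsheet to persist a huge sparse region.
func trimTrailingBlankRows(grid [][]string) [][]string {
	end := len(grid)
	for end > 0 {
		blank := true
		for _, cell := range grid[end-1] {
			if strings.TrimSpace(cell) != "" {
				blank = false
				break
			}
		}
		if !blank {
			break
		}
		end--
	}
	return grid[:end]
}

func main() {
	grid := [][]string{
		{"sku", "qty"},
		{"A100", "3"},
		{"", "   "}, // stray whitespace-only row
		{" ", ""},   // another one
	}
	fmt.Println(len(trimTrailingBlankRows(grid))) // 2
}
```

In a real spreadsheet the equivalent step would also shrink the sheet's declared dimension so the sparse region is never serialized.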

I think you're right about the waste, but I'm not sure it's entirely "accidental": a lot of it is traded for other kinds of efficiency.

  • At some point, you just stop measuring the thing until the thing becomes a problem again. That lets you work a lot faster and make far more software for far less money.

    It's the "fast fashion" of software. In the middle ages, a shirt used to cost about what a car does now, and was just as precious. Now, most people can just throw away clothes they no longer like.

  • It usually is. I try to think of these things not as "waste" but as "cost." As in, what does it cost vs. the alternative? You're using 40 GB of some kind of storage. Let's say it's reasonably possible to reduce that to 20 GB. What's the cost of doing so compared to the status quo? That memory-reduction effort, both the initial work and the ongoing maintenance, isn't free. Unless it costs a lot less to do that than to continue using more memory, we should probably continue to use the memory.

    Yeah, there may be other benefits, but as a first approximation, that works. And you'll usually find that it's cheaper to just use more memory.
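    That break-even can be made concrete. Here is a back-of-the-envelope sketch in Go; every number in it (RAM price, engineer cost, effort) is an assumption invented purely for illustration:

```go
package main

import "fmt"

// Back-of-the-envelope break-even for a memory optimization.
// All figures are illustrative assumptions, not real prices.
func main() {
	const (
		gbMonthCost     = 5.0   // assumed $/GB/month for extra RAM on a VM
		gbSaved         = 20.0  // hypothetical reduction from 40 GB to 20 GB
		engineerDayCost = 800.0 // assumed fully-loaded cost per engineer-day
		effortDays      = 15.0  // assumed initial optimization effort
	)
	monthlySaving := gbMonthCost * gbSaved                     // $100/month
	breakEvenMonths := (engineerDayCost * effortDays) / monthlySaving
	fmt.Printf("break-even after %.0f months\n", breakEvenMonths) // 120 months
}
```

Ten years to break even, and that ignores ongoing maintenance, which is the point being made: under many (not all) cost assumptions, the memory is cheaper than the engineering.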

Sure, if you don’t count safety features like memory management, crash handling, automatic bounds checks, and encryption ciphers as anything useful.

I do completely agree that there is a lot of waste in modern software. But equally there is also a lot more that has to be included in modern software that wasn’t ever a concern in the 80s.

Networking stacks, safety checks, encryption stacks, etc all contribute massively to software “bloat”.

You can see how this quickly adds up if you write a “hello world” CLI in assembly and compare that to the equivalent in any modern language that imports all these features into its runtime.

And this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater than anything supported by 80s micro computers.

  • Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need.

    Go's "GC disadvantage" is turned on its head by "zero-allocation" libraries, which run blazingly fast with fixed memory footprints. Similarly, rolling your own high-performance, efficient code can save tremendous amounts of memory where it matters.

    Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?
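    As a rough illustration of the zero-allocation style mentioned above (a sketch of the general technique, not any particular library's API): format into a caller-owned buffer with `strconv.AppendInt` instead of `fmt.Sprintf`, so a hot loop reuses one buffer instead of allocating per call:

```go
package main

import (
	"fmt"
	"strconv"
)

// appendRecord formats "name=n" into a caller-provided buffer using
// strconv.Append* instead of fmt.Sprintf, so the formatting path itself
// performs no heap allocations once the buffer has capacity.
func appendRecord(buf []byte, name string, n int64) []byte {
	buf = append(buf, name...)
	buf = append(buf, '=')
	buf = strconv.AppendInt(buf, n, 10)
	return buf
}

func main() {
	buf := make([]byte, 0, 64) // one up-front allocation, reused every iteration
	for i := int64(0); i < 3; i++ {
		buf = appendRecord(buf[:0], "count", i)
		fmt.Println(string(buf))
	}
}
```

In real code you would confirm the claim with `go test -bench . -benchmem` and `b.ReportAllocs()`; the printing here still allocates, but the formatting path does not.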

    > And this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater than anything supported by 80s micro computers.

    This demo [0] is a 4kB executable. 4096 bytes. A single file containing all assets, graphics, music and whatnot, and it can run at high resolutions with real-time rendering.

    This [1] is 64kB and this [2] is 177kB. This game from the same group is 96kB with full 3D graphics [3].

    [0]: https://www.pouet.net/prod.php?which=52938

    [1]: https://www.pouet.net/prod.php?which=1221

    [2]: https://www.pouet.net/prod.php?which=30244

    [3]: https://en.wikipedia.org/wiki/.kkrieger

    • Programming these days, in some realms, is a lot like shopping for food - some people just take the box off the shelf, don't bother with reading the ingredients, throw it in with some heat and fluid and serve it up as a 3-star meal.

      Others carefully select the ingredients, construct the parts they don't already have, spend the time to get the temperatures and oxygenation aligned, and then sit down to a humble meal for one.

      Not many programmers, these days, do code-reading like baddies, as they should.

      However, kids, the more you do it the better you get at it, so there is simply no excuse for shipping someone else's bloat.

      Do you know how many blunt pointers are lined up underneath your BigFatFancyFeature, holding it up?


    • > Go's "GC disadvantage" is turned on its head by "zero-allocation" libraries, which run blazingly fast with fixed memory footprints. Similarly, rolling your own high-performance, efficient code can save tremendous amounts of memory where it matters.

      The savings there would be negligible (in modern terms) but the development cost would be significantly increased.

      > Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?

      Safety nets are not a waste. They’re a necessary cost of meeting modern requirements. For example, if your personal details were stolen via a MITM attack, I’m sure you’d be asking why that piece of software wasn’t encrypting your data.

      The real waste in modern software is:

      1. Electron: but we are back to the cost of hiring developers

      2. Application theming. But few actual users would want to go back to plain Windows 95 style widgets (many, like myself, on HN wouldn’t mind, but we are a niche and not the norm).

      > This demo [0] is a 4kB executable. 4096 bytes. A single file containing all assets, graphics, music and whatnot, and it can run at high resolutions with real-time rendering.

      You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.

      When you wrote audio data in the 80s, you effectively wrote MIDI files in machine code. Obviously it wasn’t literally MIDI, but you’d describe notes, envelopes, etc. You’d very rarely store audio as a waveform, because audio chips of the era simply didn’t support a high enough bitrate to make it sound good (nor was there storage space to save it). Whereas these days, PCM (e.g. WAV, MP3, FLAC) sounds waaaay better than MIDI and is much easier for programmers to work with. But even a two-second 16-bit mono PCM waveform is going to be more than 4KB.
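      The arithmetic behind that claim is easy to check; a small Go sketch (the function name is invented here):

```go
package main

import "fmt"

// pcmBytes gives the raw size of uncompressed PCM audio.
func pcmBytes(seconds, sampleRate, bitsPerSample, channels int) int {
	return seconds * sampleRate * (bitsPerSample / 8) * channels
}

func main() {
	// Even at a lo-fi 8 kHz sample rate, 2 s of 16-bit mono PCM
	// is 32,000 bytes -- nearly eight times an entire 4 kB demo.
	fmt.Println(pcmBytes(2, 8000, 16, 1)) // 32000
	// At CD quality (44.1 kHz) it is 176,400 bytes.
	fmt.Println(pcmBytes(2, 44100, 16, 1)) // 176400
}
```

(Which is also why 4kB demos synthesize their audio procedurally at runtime rather than shipping waveforms.)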

      And modern graphics aren’t limited to two-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you’re increasing the colour depth by literally 32 times. And that’s before you scale again from 64 pixels to millions of pixels.

      You’re then talking multiplicative memory growth across every dimension at once.

      I’ve written software for those 80s systems and for modern systems too. And it’s simply ridiculous to compare the graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrate.


    • Once I saw the descriptions, I was sure you were posting Farbrausch prods! Do you know if anyone has come close to this level since?


  • I would also add internationalization. There were multi-language games back in the day, but the overhead of producing different versions for different markets was extremely high. Unicode has... not quite trivialized this, but certainly made a lot of things possible that weren't.

    Much respect to the people who've managed to retrofit it: there are guerrilla-translated versions of some Japanese-only games.

    > this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater

    Yes, people underestimate how much this contributes, especially to runtime memory usage.

    • The framebuffer for a single 320x200 image with 16 colours is 32,000 bytes, so nearly as much memory as this entire game.

      320x200 being an area of screen not much larger than a postage stamp on my 4k monitor.

      The technical leap from 40 years ago never fails to astound me.
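      That framebuffer figure checks out, and the scaling to a modern display can be sketched the same way in Go (the function name is invented here):

```go
package main

import "fmt"

// frameBytes gives the raw size of one uncompressed frame.
func frameBytes(width, height, bitsPerPixel int) int {
	return width * height * bitsPerPixel / 8
}

func main() {
	// 320x200 at 16 colours (4 bits per pixel): 32,000 bytes.
	fmt.Println(frameBytes(320, 200, 4)) // 32000
	// One 32-bit 4K frame: 33,177,600 bytes, over 1000x the 80s frame.
	fmt.Println(frameBytes(3840, 2160, 32)) // 33177600
}
```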


  • > all contribute massively to software “bloat”.

    Could you point to an example where those gigabytes were really "massively" due to crash handling, bounds checks, etc.?

    • Most software doesn’t consume multiple gigabytes of memory outside of games and web browsers.

      And it should be obvious why games and web browsers do.


  • I implemented a system recently that is a drop-in replacement for a component of ours. The old one used 250 GB of memory; the new one uses 6 GB. Exactly the same from the outside.

    Bad code is bad code, and poor choices are poor choices — but I think it's often pretty fair to judge things harshly on resource usage.

    • Sure, but if you’re talking about 250 GB of memory then you’re clearly discussing edge cases vs normal software running on an average person’s computer. ;)

  • Back in the day people had BASIC, and some machines had Forth, and it was like

            print "Hello world" 
    

    or

            ." Hello world " / .( Hello world )
    

    for Forth.

    By comparison, given how they optimized games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need damn swap for something that required far less RAM 15 years ago for the same features.

    If the game were reimplemented in Golang it wouldn't feel many times slower. But no, we are suffering the worst of both sides of the coin: C++, something that should have been replaced by Inferno (from the Plan 9 people, the C and Unix creators, and now Golang, their cousin), with horrible compile times, horrible and incompatible ABIs, featuritis, crazy template syntax and, if you are lucky, memory safety.

    Meanwhile, I wish the Inferno fork, Purgatorio, had a seamless mode (no virtual desktops) so you could fire up an application in a VM integrated with the host window manager, a la Java, and that's it. Limbo+Tk+SQLite would have been incredible for CRUD/RAD software once the GUI was polished up a little, with sticky menus like Tcl/Tk and the like. In the end, if you know Golang you could learn Limbo's syntax (same channels, too) with ease.

    • BASIC was slow in the 80s. Games for the C64 (and similar machines) were written in machine code.

      > By comparison, given how they optimized games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need damn swap for something that required far less RAM 15 years ago for the same features.

      That’s not crazy. You’re comparing an interpreted, line-delimited ASCII language with a compiler that converts structured ASCII into machine code.

      The two processes are as different from one another as driving a bus is from being a passenger on it.

      I don’t understand what your point is in the next two paragraphs, or what Go, Tcl, UNIX or Inferno have to do with the C64 or modern software. So you’ll have to help me out there.


  • >Sure, if you don’t count safety features like memory management, crash handling, automatic bounds checks, and encryption ciphers as anything useful.

    >Networking stacks, safety checks, encryption stacks, etc all contribute massively to software “bloat”.

    They had most of this stuff in the 1980s, and even earlier really. Not on the little $299 8-bit microcomputer you might have had as a kid, but they certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.

    • > They had most of this stuff in the 1980s, and even earlier really. Not on the little $299 8-bit microcomputer you might have had as a kid

      Those are the systems we are talking about though.

      > but they certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.

      Actually, those systems didn’t. In the early 80s most protocols were still plain ASCII. Even remote shell connections weren’t encrypted: remember that SSH wasn’t released until 1995. Likewise for SSL.

      Time sharing systems were notoriously bad for sandboxing users too. Smart pointers, while available since the 60s, weren’t popularised in C++ until the 90s. Memory overflow bugs were rife (and still are) in C-based languages.

      If you were using Fortran or ALGOL, then it was a different story. But by the time the 80s came around, mainframe OSs weren’t being written in FORTRAN / ALGOL any longer. Software running on top of them might have been, but you were still at the mercy of all that insecure C code running beneath it.


    • This. An old netbook can emulate a PDP-10 with ITS, Maclisp and some DECNET/TCP-IP clients and barely suffer any lag...

      Also, the Amigas have AmiSSL, and it will run on a 68040 or some FPGA with the same constraints. IRC over TLS, Gemini, JS-less web, Usenet, email... none of it requiring tons of GB.

      Nowadays even the Artemis crew can't properly launch Outlook. If I were the IT manager I'd just set up Claws Mail/Thunderbird with file attachments, MSMTP+ISYNC as backends (caching and batch sending/receiving of email, you know, high-end technology inspired by the 80s), and NNCP to relay packets where connectivity cuts are a given in space, so NNCP can just push packets on demand.

      The cost? My Atom N270 junk can run NNCP, and it's written in damn Golang. Any user can understand Thunderbird/Claws Mail. They wouldn't need to set up anything; the IT manager would set it all up and the mail client would run seamlessly, you know, with a fancy GUI for everything.

      Yet we are suffering the 'wonders' of vibe coding and Electron programmers pushing fancy technology where the old one would just work, since it's been tested like crazy.


The BASIC 10Liner competition wants you to know that there is a growing movement of hackers who recognize the bloat and see, with crystal clarity, where things kind of went wrong ...

https://basic10liner.com/

".. and time and again it leads to amazingly elegant, clever, and sometimes delightfully crazy solutions. Over the past 14 editions, more than 1,000 BASIC 10Liners have been created — each one a small experiment, a puzzle, or a piece of digital creativity .."

  • That website seems to be gone now, unless it's supposed to redirect to a sketchy German Wix ad…

    • The website is there as of this comment. Yes, there's a Wix ad, but it seems normal (it just points to a Wix sign-up page) and not sketchy to me.


There was one time I was troubleshooting why an app used at a company would crash after some amount of time. The crash dumps showed it using 4 GB of RAM before it died: suspiciously, the limit of a 32-bit address space.

It turned out they never closed the files it worked on, so over time it just consumed RAM until there wasn't any more for it to allocate.