Comment by gizmo
1 day ago
This argument always feels like a motte and bailey to me. Users don't literally care what tech is used to build a product. Of course not, why would they?
But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting. When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
> a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
This isn't true. It took me two seconds to create a new project, run `cargo build` followed by `ls -hl ./target/debug/helloworld`. That tells me it's 438K, not 3.7MB.
Also, this is a debug build, one that contains debug symbols to help with debugging. Release builds would be configured to strip them, and a release binary of hello world clocks in at 343K. And for people who want even smaller binaries, they can follow the instructions at https://github.com/johnthagen/min-sized-rust.
Older Rust versions used to include more debug symbols in the build, but they're now stripped out by default.
$ rustc --version && rustc hello.rs && ls -alh hello
rustc 1.84.1 (e71f9a9a9 2025-01-27)
-rwxr-xr-x 1 user user 9.1M hello
So 9.1 MB on my machine. And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor of 2000 or so.
Windows 95 came on 13x 3.5" floppies, so 22MB. The rust compiler package takes up 240mb on my machine. That means rust is about 10x larger than a fully functional desktop OS from 30 years ago.
Fwiw, in something like hello world, most of the size is just the rust standard library that's statically linked in. Unused parts don't get removed as it is precompiled (unless there's some linker magic I am unaware of). A C program dynamically links to the system's libc so it doesn't pay the same cost.
Before a few days ago I would have told you that the smallest binary rustc has ever produced is 137 bytes, but I told that to someone recently and they tried to reproduce and got it down to 135.
The default settings don’t optimize for size, because for most people, this doesn’t matter. But if you want to, you can.
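For reference, the min-sized-rust recipe linked above mostly boils down to a release profile along these lines (these are all stable Cargo options; exact savings vary a lot by program):

```toml
# Cargo.toml -- shrink release binaries (see min-sized-rust for the full list)
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # better optimization at the cost of compile time
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```

Build with `cargo build --release` as usual; going further than this requires the `no_std` tricks discussed elsewhere in the thread.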
> And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor 2000x or so.
This is such a silly argument because nobody is optimizing compilers and standard libraries for Hello World utilities.
It's also ridiculous to compare debug builds in rust against release builds for something else.
If you want a minimum-sized Hello World app in Rust then you'd use `no_std`, a no-op panic handler, and make the syscalls manually.
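The "make the syscalls manually" part can be sketched like this on x86-64 Linux (a plain runnable program rather than a true `no_std` build, so the example stays self-contained; the function name is mine):

```rust
// Sketch only: a real min-sized build would also use #![no_std] / #![no_main]
// and a custom panic handler. This targets x86-64 Linux specifically.
use std::arch::asm;

/// Invoke write(2) directly; returns the kernel's return value (bytes written).
fn raw_write(fd: usize, buf: &[u8]) -> isize {
    let ret: isize;
    unsafe {
        asm!(
            "syscall",
            inlateout("rax") 1isize => ret, // syscall number 1 = write; result returns in rax
            in("rdi") fd,
            in("rsi") buf.as_ptr(),
            in("rdx") buf.len(),
            out("rcx") _, // clobbered by the syscall instruction
            out("r11") _,
        );
    }
    ret
}

fn main() {
    let msg = b"hello from a raw syscall\n";
    let n = raw_write(1, msg);
    assert_eq!(n as usize, msg.len());
}
```

Bypassing the standard library's I/O like this is what lets the binary drop the formatting and buffering machinery that `println!` pulls in.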
> The rust compiler package takes up 240mb on my machine. That means rust is about 10x larger than a fully functional desktop OS from 30 years ago.
Fortunately for all of us, storage costs and bandwidth prices have improved by multiple orders of magnitude since then.
Which is why we don't care. The added benefits of modern software are great.
You're welcome to go back and use a 30 year old desktop OS if you'd like, though.
rustc --version && rustc main.rs && ls -alh main
rustc 1.85.0 (4d91de4e4 2025-02-17)
-rwxr-xr-x 1 user user 436K 21 Feb 17:17 main
What's your output for `rustup default`?
Also, what's your output when you follow min-sized-rust?
For me,
3.8M Feb 21 11:56 target/debug/helloworld
Why is `--release` not being passed to cargo? It's not like the File Pilot mentioned by GP is released with debug symbols.
>When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
No, it means that product quality is all that matters. The users don't care how you make it work, only that it works how they want it to.
I have never seen it used like that. I have always seen it used like parent said: to justify awful technical choices which hurt the user.
I have written performant high quality products in weird tech stacks where performance can be a bit tricky to get: Ruby, PL/PgSQL, Perl, etc. But it was done by a team who cared a lot about technology and their tech stack. Otherwise it would not have been possible.
This is a genuinely fascinating difference in perception to me. I don't remember ever hearing it used in the way you have. I've always heard it used to point out that devs often give more focus on what tools they use than they do on what actually matters to their customers.
TFA uses the phrase that way.
> What truly makes a difference for users is your attention to the product and their needs.
> Learn to distinguish between tech choices that are interesting to you and those that are genuinely valuable for your product and your users.
Would like to echo this. I've seen this used to justify extracting more value from the user rather than spending time doing things that you can't ship next week with a marketing announcement.
I've also seen it used when discussing solutions that aren't stack pure (for instance, whether to stick with the ORM or write a more performant pure SQL version that uses database-engine specific features)
> I have never seen it used like that.
Then you need to read more, because that's what it means. The tech stack doesn't matter. Only the quality of the product. That quality is defined by the user. Not you. Not your opinion. Not your belief. But the user of the product.
> which hurt the user.
This will self correct.
Horrible tech choices have led to world class products that people love and cherish. The perfect tech choices have led to things people laugh at and serve as a reminder that the tech stack doesn't matter, and in fact, may be a red flag.
Look at every single discussion about Electron ;)
"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.
Only 500MB? Now you're being charitable.
Using 500MB of memory while not doing anything isn’t really a problem. If RAM is scarce then it will get paged out and used by another app that is doing something.
Businesses need to learn that, like it or not, code quality and architecture quality is a part of product quality
You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
This is why startups can outcompete incumbents sometimes
Suddenly there's a market shift and a startup can actually build your entire product and the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture has atrophied to the point it takes longer to update it than it would to rebuild from scratch
Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen
>You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
While it's true that that can be partially due to tech debt, there are generally other factors as well. The more years you've had to accrue customers in various domains, the more years of decisions you have to maintain backwards compatibility with, the more regulatory regimes you conduct business under and build process around, the slower you're going to move compared to someone trying to move fast and break things.
> No, it means that product quality is all that matters
But it says that in such a roundabout way that non technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.
> product quality is all that matters
I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.
I don't think that's broadly true. The unfortunate truth about our profession is that there is no floor to how bad code can be while yet generating billions of dollars.
If it's making billions of dollars, somebody somewhere is getting a lot of what they want out of it. But it's possible that those people are actually the purchasing managers or advertisers rather than the users of the software. "Customers" probably would've been the more correct term. Or sometimes "shareholders".
As far as I can tell the article has been misinterpreted, costing HN commenters many lost hours. By saying users don't care about your tech stack, it is saying that you yourself should care about your tech stack, i.e. it matters, and it presents some bullet points on what to keep in mind when caring about it. Or to summarize: be methodical, not hype-following.
Agree the article is not clearly presented but it's crazy to see the gigantic threads here that seem to be based on a misunderstanding.
If users care so much about product quality why is everyone using the most shitty software ever produced — such as Teams?
For 99% of users, what you describe really isn't something they know or care about.
I might agree that 99% of users don't know what they want, but not that they don't care.
I feel like that's what it should mean, that quality is all that matters. But it's often used to excuse poor quality as well. Basically if you skinner box your app hard enough, you can get away with lower quality.
Yes, this is exactly what the article means.
"Use whatever technologies you know well and enjoy using" != "Use the tech stack that produces the highest quality product".
Well, the alternative and more charitable interpretation would be that you are more likely to build a better product in the stack you know well and enjoy.
I think when you get more concrete about what the statement is talking about, it becomes very hard to assert that they mean something else.
Like if you are skilled with, say, Ruby on Rails, you probably should just use that for your v1.0. The hypothetical better stack is often just a myth we tell ourselves as software engineers because we like to think that tech is everything when it's the product + launching that's everything.
I think the idea is that the use of a particular tech stack isn't a determinant factor in terms of product quality.
!= "Use the tech stack that cheapest developers can work with".
(Yes, for real. I've once witnessed this being said out loud and used to justify specific tech stack choice.)
> Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration and written in Rust. Its binary size was over 100MB ... for a terminal emulator which didn't pass vttest and couldn't even do half of the things xterm could. Meanwhile xterm takes about 12MB including all its dependencies, which are shared by many programs. The xterm binary itself is just about 850kB of these 12MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.
> If you want to make something that starts instantly you can't use electron or java.
Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
alacritty (in the arch repo) is 8MB decompressed
alacritty is also written in rust and gpu accelerated, so the other vte must just be plain bad
Edit: Just tried turning on a couple bin-size optimizations which yielded a 3.3M binary
> In practice this argument is used to justify bloated apps
Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you have a successful, proven product/app already, performance gains are a good focus).
It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.
I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even the most popular game last year, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.
I saw an HN post recently where a classic HN commentator was angry that another person was using .NET Blazor for a frontend; with the mandatory 2MB~3MB WASM module download.
He responded by saying that he wasn’t a front-end developer, and to build the fancy lightweight frontend would be extremely demanding of him, so what’s the alternative? His customers find immensely more value in the product existing at all, than by its technical prowess. Doing more things decently well is better than doing few things perfectly.
Although, look around here: the world's greatest tech stack would be shredded here because the images weren't resized to pixel-perfectly fit their frames, forcing the browser to resize the image, which is slower and wastes CPU cycles every time, when it could have been done only once server side. Oh the humanity, think about how much polar ice you've melted with your carelessness.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
Debug symbols aren't cheap. A release build with a minimal configuration (linked below) gets that down to 263kb.
https://stackoverflow.com/questions/29008127/why-are-rust-ex...
My point was that many programmers have no conception of how much functionality you can fit in a program of a few megabytes.
Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.
How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.
The executable is 175 bytes.
[1] https://pastebin.com/p7VzLYxS
(Again, the point is not that Rust is bad or bloated but that people forget that 1 megabyte is actually a lot of data.)
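To make the "175 bytes" claim concrete, here is an independent sketch in the same spirit (this is NOT the linked pastebin, just a from-scratch illustration): a Python script that hand-assembles a minimal x86-64 Linux ELF which write()s a message and exits.

```python
import struct

msg = b"Hello world\n"
base = 0x400000              # conventional load address for a non-PIE executable
code_off = 64 + 56           # ELF header (64 bytes) + one program header (56 bytes)

# Machine code for: write(1, msg, len); exit(0) -- 36 bytes total.
msg_addr = base + code_off + 36          # the message sits right after the code
code = (
    b"\xb8\x01\x00\x00\x00"                      # mov eax, 1   (sys_write)
    b"\xbf\x01\x00\x00\x00"                      # mov edi, 1   (stdout)
    + b"\x48\xbe" + struct.pack("<Q", msg_addr)  # movabs rsi, msg
    + b"\xba" + struct.pack("<I", len(msg))      # mov edx, len(msg)
    + b"\x0f\x05"                                # syscall
    + b"\xb8\x3c\x00\x00\x00"                    # mov eax, 60  (sys_exit)
    + b"\x31\xff"                                # xor edi, edi (status 0)
    + b"\x0f\x05"                                # syscall
)

filesz = code_off + len(code) + len(msg)

# ELF64 header: magic, 64-bit, little-endian, EXEC, x86-64, entry point.
ehdr = struct.pack(
    "<16sHHIQQQIHHHHHH",
    b"\x7fELF\x02\x01\x01",          # e_ident (struct pads the rest with NULs)
    2, 0x3E, 1,                      # e_type=EXEC, e_machine=x86-64, e_version
    base + code_off,                 # e_entry points at the code
    64, 0, 0,                        # e_phoff, e_shoff (none), e_flags
    64, 56, 1, 0, 0, 0,              # header sizes/counts; no section headers
)

# One PT_LOAD segment mapping the whole file read+execute.
phdr = struct.pack("<IIQQQQQQ", 1, 5, 0, base, base, filesz, filesz, 0x1000)

data = ehdr + phdr + code + msg
with open("tiny", "wb") as f:
    f.write(data)
print(len(data))   # 168 bytes
```

The entire runnable executable is 168 bytes: no libc, no startup code, no section headers, just two headers and eight instructions. That's the sense in which a megabyte really is a lot of data.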
And your point is completely wrong. It makes no sense for a language to by default optimize for the lowest possible binary size of a "hello world"-sized program. Nobody's in the business of shipping "hello world" to binary-size-sensitive customers.
Non-toy programs tend to be big and the size of their code will dwarf whatever static overhead there is, so your argument does not scale.
Even then, binary size is a low priority item for almost all use cases.
But then even if you do care about it, guess what, every low level language, Rust, C, whatever, will let you get close to the lowest size possible if you put in the effort.
So no, on no level does your argument make sense with any of the examples you've given.
> My point was that many programmers have no conception of much functionality you can fit in a program of a few megabytes.
Many of my real-world Rust backend services are in the 1-2MB range.
> Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.
> How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.
> The executable is 175 bytes.
You can also disable the standard library and a lot of Rust features and manually write the syscall assembly into a Rust program. With enough tweaking of compiler arguments you'd probably get it to be a very small binary too.
But who cares? I can transfer a 10MB file in a trivial amount of time. Storage is cheap. Bandwidth is cheap. Playing code golf for programs that don't do anything is fun as a hobby, but using it as a debate about modern software engineering is nonsensical.
No disagreement here! Just curious how big the impact of debug symbols was and wanted to share my findings.
Thanks for pointing this out.
It does seem weird to complain about the file size of a debug build not a release build.
In my experience, the people who make these arguments often don't even know their own tech stack of choice well enough to make it work halfway efficiently. They say 10ms but that assumes someone who knows the tech stack and the tradeoffs and can optimize it. In their hands it's going to be 1+ seconds and becomes such a tangled mess it can't be optimized down the line.
If you want to make something that starts instantly you can't use electron or java.
This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.
> Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
Semi-dependent.
Default Java libraries being piles upon piles of abstractions… those were, and for all I know still are, a performance hit.
But that didn't stop Mojang, amongst others. Java can be written "fast", if you ignore all the stuff the standard library is set up for and think in low-level, C-like manipulation of int arrays (int the primitive type, not Integer the class) rather than AbstractFactoryBean etc. And you keep going from there, with that attitude, because there's no "silver bullet" to software quality, no "one weird trick is all you need". Software in (almost!) any language can be fast if you focus on that and refuse to accept solutions that are merely "ok", given that we had DOOM running in real time with software rendering in the 90s on things less powerful than the microcontroller in your USB-C power supply[0] or HDMI dongle[1].
[0] http://www.righto.com/2015/11/macbook-charger-teardown-surpr...
[1] https://www.tomshardware.com/video-games/doom-runs-on-an-app...
Of course, these days you can't run applets in a browser plugin (except via a JavaScript abstraction layer :P), but a similar thing is true with the JavaScript language, though here the trick is to ignore all the de-facto "standard" JS libraries like jQuery or React and limit yourself to the basics, hence the joke-not-joke: https://vanilla-js.com
> But that didn't stop Mojang, amongst others
Stop them from... making one of the most notoriously slow and bloated video games ever? Like, just look at the amount of results for "Minecraft" "slow"
> Like, just look at the amount of results for "Minecraft" "slow"
My niece was playing it just fine on a decade-old Mac mini. I've played it a bit on a Raspberry Pi.
The sales figures suggest my niece's experience is fairly typical, and things such as you quote are more like the typical noise that accompanies all things done in public — people complaining about performance is something which every video game gets. Sometimes the performance even becomes the butt of a comic strip's joke.
If Java automatically caused low performance, neither my niece's old desktop nor my Pi would have been able to run it.
If you get me ranting on the topic: Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
> one of the most notoriously slow and bloated video game ever?
I like how you conveniently focus on that aspect and not how that didn't prevent it from being one of the biggest video game hit of all time.
Those two can be true at the same time. And that's one thing that a lot of technical people don't get. Slow is generally bad. But you cannot take it out of context. Slow can mean different things to different people, and slow can be circumvented by strategies that do not involve "making the code run faster".
The fact that it was later bought by Microsoft (and rewritten in C++ so it can run on consoles[1]) is not relevant to the argument that you can, in fact, write a successful and performant game in Java, if you know how.
[1] My source is just comments in HN.
Are we talking modded? Cause Vanilla minecraft runs on a potato, especially if the device is connecting to a server (aka doesn't have to do the server-side updates itself).
Runs fine on an essentially single threaded 7350k with a 1050ti I built 8-9 years ago.
The server runs on a single thread on a NUC, too.
Microsoft rewrote Minecraft in C++, so maybe not the best example?
> If you want to make something that starts instantly you can't use electron
Hmmm, VScode starts instantly on my M1 Mac
Slack's success suggests you're wrong about bloat being an issue. Same with almost every mobile app.
The iOS Youtube app is 300meg. You could reproduce the functionality in 1meg. The TikTok app is 597meg. Instagram 370meg. X app 385meg, Slack app 426meg, T-Life (no idea what it is but it's #3 on the app store) 600meg.
Users don't care about bloat.
"All else equal users will absolutely choose the zippiest products."
As a "user", it is not only "zippiest" that matters to me. Size matters, too. And, in some cases, the two are related. (Rust? Try compiling on an underpowered computer.^1)
"If you want to make something that starts instantly you can't use Electron or Java."
Nor Python.
1. I do this everyday with C.
> All else equal users will absolutely choose the zippiest products.
Only a small subset of users actually do this, because there are many other reasons that people choose software. There are plenty of examples of bloated software that is successful, because those pieces of software deliver value other than being quick.
Vanishingly few people are going to choose a 1mb piece of software that loads in 10ms over a 100mb piece of software that loads in 500ms, because they won't notice the difference.
Yes, there are other reasons people choose software. That's why GP said all else equal. You just ignored the most important part of his post.
Yes, hypothetically if all else is equal, and the difference isn't noticeable, then the users experience is equal. But it's a hypothetical that doesn't actually exist in the real world.
Competing software equal in every way but speed doesn't exist except for some very few contrived examples. Different pieces of software typically have different user interfaces, different features, different marketing, different functionality, etc.
That's a poor example, as users genuinely don't care about download file size or installed size, within reason. Nobody in the West is sweating a 200MB download.
Users will generally balk at 2000MB though. ie, there's a cutoff point somewhere between 200MB and 2000MB, and every engineering decision that adds to the package size gets you closer to it.
For an installed desktop app... the vast majority of folks aren't going to bat an eye at 2G.
Hell - the most exposure the average person gets to installing software is game downloads, sadly (100G+). After that it's the stuff like MSOffice (~5-10G).
---
I want to be clear, I definitely agree there are cases where "performance is the feature". That said, package size is a bad example.
Disk is SO incredibly cheap that users are being conditioned to not even consider it on mobile systems. And networks are good enough I can pull a multi-gig file down with just my phone's tethering bandwidth in minutes basically across the country.
When I want performance as a user, it's for an action I have to do multiple times repeatedly. I want the app itself to be fast, I want buttons to respond quickly, I want pages to show up without loaders, I want search to keep up with my keystrokes.
Use as much disk and ram as you can to get that level of performance. Don't optimize for computer nerd stats like package size (or the ram usage harpies...) when even semi-technical folks can't tell you the difference between kb/mb/gb, and have no idea what ram does.
Users care about performance in the same way that users buy cars. Most don't give a fuck about the numbers, they want to like the way it drives.
Your tech stack can definitely influence that, but you still have to make the right value decisions. Unless your audience is literally "software developers" like that file explorer, lay off the "software developer stats".
This comment makes no sense.
The reason why people aren't sweating 200mb is because everything has gotten to be that big. Change that number to 2 terabytes.
And guess what? In 5 years time, someone will say "Nobody in the West is sweating a 2 TB download" because it keeps increasing.
Yes, that's because you're all measuring the wrong factor for user satisfaction.
Users don't care about download size, they care about:
* will it fit on my storage device
* can I download it in a convenient amount of time
* does it run with acceptable performance
It really doesn't matter if it's a kilobyte or a petabyte.
Someone in the 80s, probably:
This comment makes no sense.
The reason why people aren't sweating 1mb is because everything has gotten to be that big. Change that number to 20mb.
And guess what? In 5 years time, someone will say "Nobody in the West is sweating a 200mb download" because it keeps increasing.
I like this take, though deadlines do force you to make some tradeoffs. That's the conclusion I've come to.
I do think people nowadays over-index on iteration/shipping speed over quality. It's an escape. And it shows, when you "ship".
Yes, as a user I definitely shudder at Electron-based apps or anything built on the JVM.
I know plenty of non-technical users who still dislike Java, because its usage was quite visible in the past (you had to install the JVM yourself) and lots of desktop apps made with it were horrible.
My father for example, as it was the tech chosen by his previous bank.
Electron is way sneakier, so people just complain about Teams or something like that.
Nowadays users aren’t expected to install the vm. Desktop apps now jlink a vm and it’s like any other application. I’ve even seen the trimmed vm size get down to 30-50mb.
Unfortunately you still get developers who don’t set the correct memory settings and then it ends up eating 25% of a users available ram.
I strongly agree with this sentiment. And I realize that we might not be the representative of the typical user, but nonetheless, I think these things definitely matter for some subset of users.
Said it better than I could. It's always a deflection from the fact that whoever is saying it doesn't know anything about their industry.
It’s a little off topic but I used to think Go binaries were too big. I did a little evaluation at some point and changed my mind. Sounds like they are still bigger than Rust.
https://joeldare.com/small-go-binaries
Had it been 18mb or 180mb, would it change anything? It takes seconds to download, seconds to install. Modern computers are fast.
It’s probably best to compare download sizes on a logarithmic scale.
18mb vs 180mb is probably the difference between an instant download and ~30 seconds. 1.8gb is gonna make someone stop and think about what they’re doing.
But 18mb vs 9mb is not significant in most cases.
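The log-scale intuition is easy to sanity-check with back-of-envelope numbers. A rough sketch, assuming a hypothetical 50 Mbps link (the speed is my assumption, not the commenter's):

```python
def download_seconds(size_mb: float, mbps: float = 50.0) -> float:
    """Rough transfer time for size_mb megabytes over an mbps megabit/s link."""
    return size_mb * 8 / mbps

# 180 MB lands around 29 s at this speed, matching the "~30 seconds" above;
# 9 MB vs 18 MB is a difference nobody will notice.
for size_mb in (9, 18, 180, 1800):
    print(f"{size_mb:>5} MB -> {download_seconds(size_mb):7.1f} s")
```

Each 10x step up the scale moves you into a qualitatively different user experience, which is why comparisons within one order of magnitude rarely matter.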
Love this reply and learned a new term from it.
> But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
It's an incredibly effective argument to shut down people pushing for the new shiny thing just because they want to try it.
Some people are gullible enough to read some vague promises on the homepage of a new programming language or library or database and they'll start pushing to rewrite major components using the new shiny thing.
Case in point: I've worked at two very successful companies (one of them reached unicorn-level valuation) that were fundamentally built using PHP. Yeah, that thing that people claim has been dead for the last 15 years. It's alive, kicking and screaming. And it works beautifully.
> If you want to make something that starts instantly you can't use electron or java.
You picked the two technologies that are the worst examples for this.
Electron: it has breathed new life into desktop GUI development, which essentially nobody was doing anymore.
Java: modern Java is crazy fast nowadays, and on a decent computer your code gets to the entrypoint (main) in less than a second. Whatever slows it down is a codebase problem, not the JVM.
Users don't care if your binary is 3 or 4 mb. They might care if the binary was 3 or 400 mb. But then I also look at our company that uses Jira and Confluence and it takes 10+ seconds to load a damn page. Sometimes the users don't have a say.
> If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice.
Most users do not care at all.
If someone is sitting down for an hour-long gaming session with their friends, it doesn't matter if Discord takes 1 second or 20 seconds to launch.
If someone is sitting down to do hours of work for the day, it doesn't matter if their JetBrains IDE or Photoshop or Solidworks launches instantly or takes 30 seconds. It's an entirely negligible amount.
What they do care about is that the app works, gives them the features they want, and gets the job done.
We shouldn't carelessly let startup times grow and binaries become bloated for no reason, but it's also not a good idea to avoid helpful libraries and productivity-enhancing frameworks to optimize for startup time and binary size. Those are two dimensions of the product that matter the least.
> All else equal users will absolutely choose the zippiest products.
"All else equal" is doing a lot of work there. In real world situations, the products with more features and functionality tend to be a little heavier and maybe a little slower.
Dealing with a couple seconds of app startup time is nothing in the grand scheme of people's work. Entirely negligible. It makes sense to prioritize features and functionality over hyper-optimizing a couple seconds out of a person's day.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb.
Okay. Comparing a debug build to a released app is a blatantly dishonest argument tactic.
I have multiple deployed Rust services with binary sizes in the 1-2MB range. I do not care at all how large a "Hello World" app is because I'm not picking Rust to write Hello World apps.
> what they really mean is that product quality doesn’t matter.
But does it matter? I think the only metric worth optimising for is latency. The other stuff is just something we do.
> In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
Yes, yes it is. But they were going to do it anyway. Even if people were to stop accepting this argument, they'll just start using another one.
Startup culture is never going to stop being startup culture and complacent corporations are never going to stop being complacent.
As the famous adage goes: If you want it done right, you gotta do it yourself.
> File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download.
File Pilot is... seemingly a fully-featured GUI file explorer in 1.8mb, complete with animations?
Dude. What.
Yes, there is a generation of programmers that doesn't believe something like File Pilot is even possible.
They'd be very confused at something like "A mind is born" then?
https://linusakesson.net/scene/a-mind-is-born/
7030 times smaller than the file pilot.
Generation? Are you for real?
I just don't really see drawing like that done by binaries that small. I've seen plenty of demoscene stuff, though that's not really the same.
I just checked one of the project pages (behind some links - here: https://filepilot.handmade.network) and the first post says C with OpenGL. Boy is it rare to see one of those nowadays.
Pretty sure only techies care about that; an average user on their 10-year-old device couldn't care less whether it takes 0.1s or 5s to start.
Nice to have, not a must.
There's been a fair bit of research on this. People don't like slow interfaces. They may not necessarily _recognise_ that that's why they don't like the interface, but even slowdowns in the 10s of ms range can make a measurable difference to user sentiment.
And yet even Amazon, eBay, and Wikipedia don’t see value in building an SPA. Chew on that.
Most regular people buy a new phone when their old one has "gotten slow". And why do phones get slow? "What Andy giveth, Bill taketh away."
In tech circles regular people are believed to be stupid and blind. They are neither. People notice when apps get slower and less usable over time. It's impossible not to.
And then they spend their own money making them faster, not linking the slowness to the software, but to some belief their hardware is getting worse over time.
> Most regular people buy a new phone when their old one has "gotten slow".
Uh...
If this statement were true (IF), it would be even a better argument for software developers to NOT optimize their apps.
The big problem is that most of the time users do not have options. Very often there are no better-performing alternatives.
Apart from when it is optional consumption, say games.