This argument always feels like a motte and bailey to me. Users don't literally care what tech is used to build a product. Of course not, why would they?
But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting. When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
> a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
This isn't true. It took me two seconds to create a new project, run `cargo build` followed by `ls -hl ./target/debug/helloworld`. That tells me it's 438K, not 3.7MB.
Also, this is a debug build, one that contains debug symbols to help with debugging. Release builds would be configured to strip them, and a release binary of hello world clocks in at 343K. And for people who want even smaller binaries, they can follow the instructions at https://github.com/johnthagen/min-sized-rust.
Older Rust versions used to include more debug symbols in the build, but they're now stripped out by default.
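For anyone who wants to reproduce the smaller numbers, here is a rough sketch of the min-sized-rust recipe linked above. The project name hello is just an example, and the final size varies by platform and toolchain, but on x86_64 Linux it typically lands in the low hundreds of kilobytes:

$ cargo new hello && cd hello
$ cat >> Cargo.toml <<'EOF'
[profile.release]
strip = true        # remove symbols and debug info
opt-level = "z"     # optimize for size instead of speed
lto = true          # link-time optimization across crates
codegen-units = 1   # trade compile time for smaller output
panic = "abort"     # drop the unwinding machinery
EOF
$ cargo build --release
$ ls -lh target/release/hello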
$ rustc --version && rustc hello.rs && ls -alh hello
rustc 1.84.1 e71f9a9a9 2025-01-27
-rwxr-xr-x 1 user user 9.1M hello
So 9.1 MB on my machine. And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor 2000x or so.
Windows 95 came on 13x 3.5" floppies, so 22MB. The Rust compiler package takes up 240MB on my machine. That means Rust is about 10x larger than a fully functional desktop OS from 30 years ago.
I have never seen it used like that. I have always seen it used like parent said: to justify awful technical choices which hurt the user.
I have written performant high quality products in weird tech stacks where performance can be a bit tricky to get: Ruby, PL/PgSQL, Perl, etc. But it was done by a team who cared a lot about technology and their tech stack. Otherwise it would not have been possible to do.
"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.
Businesses need to learn that, like it or not, code quality and architecture quality are part of product quality.
You can have a super great product that makes a ton of money right now but has such poor build quality that you become too calcified to improve it in a reasonable amount of time.
This is why startups can outcompete incumbents sometimes.
Suddenly there's a market shift, and a startup can build your entire product plus the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture have atrophied to the point that it takes longer to update them than it would to rebuild from scratch.
Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen.
> No, it means that product quality is all that matters
But it says that in such a roundabout way that non-technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.
I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.
I don't think that's broadly true. The unfortunate truth about our profession is that there is no floor to how bad code can be while yet generating billions of dollars.
I feel like that's what it should mean, that quality is all that matters. But it's often used to excuse poor quality as well. Basically if you skinner box your app hard enough, you can get away with lower quality.
> Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration and written in Rust. Its binary size was over 100MB ... for a terminal emulator which didn't pass vttest and couldn't even do half of the things xterm could. Meanwhile xterm takes about 12MB including all its dependencies, which are shared by many programs. The xterm binary size itself is just about 850kB of these 12MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.
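Those two numbers (an ~850kB binary vs ~12MB with shared dependencies) are easy to reproduce on most Linux systems, roughly like this (paths and counts will differ per distro):

$ ls -lh "$(command -v xterm)"        # the xterm binary itself
$ ldd "$(command -v xterm)" | wc -l   # shared libraries it links against, reused by many other programs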
> If you want to make something that starts instantly you can't use electron or java.
Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
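A minimal sketch of that, assuming a GraalVM JDK with native-image on the PATH and a trivial HelloWorld.java; the exact output name and timings will vary:

$ javac HelloWorld.java
$ time java HelloWorld       # bytecode on the JVM: startup usually dominates for tiny programs
$ native-image HelloWorld    # AOT-compile the class plus runtime into one native binary
$ time ./helloworld          # native executable: starts in a few milliseconds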
> In practice this argument is used to justify bloated apps
Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you have a successful, proven product/app already, performance gains are a good focus).
It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is the perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.
I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even the most popular game last year, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.
I saw an HN post recently where a classic HN commentator was angry that another person was using .NET Blazor for a frontend; with the mandatory 2MB~3MB WASM module download.
He responded by saying that he wasn’t a front-end developer, and to build the fancy lightweight frontend would be extremely demanding of him, so what’s the alternative? His customers find immensely more value in the product existing at all, than by its technical prowess. Doing more things decently well is better than doing few things perfectly.
Although, look around here - the world’s great tech stack would be shredded here because the images weren’t perfectly resized to pixel-perfect fit their frames, forcing the browser to resize the image, which is slower, and wastes CPU cycles every time, when it could have been only once server side, oh the humanity, think about how much ice you’ve melted on the polar ice caps with your carelessness.
In my experience, the people who make these arguments often don't even know their own tech stack of choice well enough to make it work halfway efficiently. They say 10ms, but that assumes someone who knows the tech stack and the tradeoffs and can optimize it. In their hands it's going to be 1+ seconds, and it becomes such a tangled mess that it can't be optimized down the line.
> If you want to make something that starts instantly you can't use electron or java.
This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.
> Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
Semi-dependent.
Default Java libraries being piles upon piles of abstractions… those were, and for all I know still are, a performance hit.
But that didn't stop Mojang, amongst others. Java can be written "fast" if you ignore all the stuff the standard library is set up for and think in low-level, C-like manipulation of int arrays (not Integer the class, int the primitive type) rather than AbstractFactoryBean etc., and keep going from there, with that attitude, because there's no "silver bullet" to software quality, no "one weird trick is all you need". Software in (almost!) any language can be fast if you focus on doing that and refuse to accept solutions that are merely "ok", when we had DOOM running in real time with software rendering in the 90s on things less powerful than the microcontroller in your USB-C power supply[0] or HDMI dongle[1].
Of course, these days you can't run applets in a browser plugin (except via a JavaScript abstraction layer :P), but a similar thing is true with the JavaScript language, though here the trick is to ignore all the de-facto "standard" JS libraries like jQuery or React and limit yourself to the basics, hence the joke-not-joke: https://vanilla-js.com
> If you want to make something that starts instantly you can't use electron
Hmmm, VScode starts instantly on my M1 Mac
Slack's success suggests you're wrong about bloat being an issue. Same with almost every mobile app.
The iOS Youtube app is 300meg. You could reproduce the functionality in 1meg. The TikTok app is 597meg. Instagram 370meg. X app 385meg, Slack app 426meg, T-Life (no idea what it is but it's #3 on the app store, 600meg)
> All else equal users will absolutely choose the zippiest products.
Only a small subset of users actually do this, because there are many other reasons that people choose software. There are plenty of examples of bloated software that is successful, because those pieces of software deliver value other than being quick.
Vanishingly few people are going to choose a 1mb piece of software that loads in 10ms over a 100mb piece of software that loads in 500ms, because they won't notice the difference.
That's a poor example, as users genuinely don't care about download file size or installed size, within reason. Nobody in the West is sweating a 200MB download.
Users will generally balk at 2000MB though. ie, there's a cutoff point somewhere between 200MB and 2000MB, and every engineering decision that adds to the package size gets you closer to it.
"All else equal users will absolutely choose the zippiest products."
As a "user", it is not only "zippiest" that matters to me. Size matters, too. And, in some cases, the two are related. (Rust? Try compiling on an underpowered computer.^1)
"If you want to make something that starts instantly you can't use Electron or Java."
It’s a little off topic but I used to think Go binaries were too big. I did a little evaluation at some point and changed my mind. Sounds like they are still bigger than Rust.
It’s probably best to compare download sizes on a logarithmic scale.
18mb vs 180mb is probably the difference between an instant download and ~30 seconds. 1.8gb is gonna make someone stop and think about what they’re doing.
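The rough arithmetic backs that up. Assuming a 50 Mbps connection (an assumption; pick your own number):

$ python3 -c '
mbps = 50.0
for mb in (18, 180, 1800):
    print(mb, "MB ->", round(mb * 8 / mbps, 1), "s")'
18 MB -> 2.9 s
180 MB -> 28.8 s
1800 MB -> 288.0 s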
I know plenty of non-technical users who still dislike Java, because its usage was quite visible in the past (you gotta install the JVM) and lots of desktop apps made with it were horrible.
My father, for example, as it was the tech chosen by his previous bank.
Electron is way sneakier, so people just complain about Teams or something like that.
I strongly agree with this sentiment. And I realize that we might not be representative of the typical user, but nonetheless, I think these things definitely matter for some subset of users.
> But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
It's an incredibly effective argument to shut down people pushing for the new shiny thing just because they want to try it.
Some people are gullible enough to read some vague promises on the homepage of a new programming language or library or database and they'll start pushing to rewrite major components using the new shiny thing.
Case in point: I've worked at two very successful companies (one of them reached unicorn-level valuation) that were fundamentally built using PHP. Yeah, that thing that people claim has been dead for the last 15 years. It's alive, kicking and screaming. And it works beautifully.
> If you want to make something that starts instantly you can't use electron or java.
You picked the two technologies that are the worst examples for this.
Electron: Electron has breathed new life into GUI development, which essentially nobody was doing anymore.
Java: modern Java is crazy fast nowadays, and on a decent computer your code gets to the entrypoint (main) in less than a second. Whatever slows it down is a codebase problem, it's not the JVM.
> If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice.
Most users do not care at all.
If someone is sitting down for an hour-long gaming session with their friends, it doesn't matter if Discord takes 1 second or 20 seconds to launch.
If someone is sitting down to do hours of work for the day, it doesn't matter if their JetBrains IDE or Photoshop or Solidworks launches instantly or takes 30 seconds. It's an entirely negligible amount.
What they do care about is that the app works, gives them the features they want, and gets the job done.
We shouldn't carelessly let startup times grow and binaries become bloated for no reason, but it's also not a good idea to avoid helpful libraries and productivity-enhancing frameworks to optimize for startup time and binary size. Those are two dimensions of the product that matter the least.
> All else equal users will absolutely choose the zippiest products.
"All else equal" is doing a lot of work there. In real world situations, the products with more features and functionality tend to be a little heavier and maybe a little slower.
Dealing with a couple seconds of app startup time is nothing in the grand scheme of people's work. Entirely negligible. It makes sense to prioritize features and functionality over hyper-optimizing a couple seconds out of a person's day.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb.
Okay. Comparing a debug build to a released app is a blatantly dishonest argument tactic.
I have multiple deployed Rust services with binary sizes in the 1-2MB range. I do not care at all how large a "Hello World" app is because I'm not picking Rust to write Hello World apps.
Users don't care if your binary is 3 or 4 mb. They might care if the binary was 3 or 400 mb. But then I also look at our company that uses Jira and Confluence and it takes 10+ seconds to load a damn page. Sometimes the users don't have a say.
There's been a fair bit of research on this. People don't like slow interfaces. They may not necessarily _recognise_ that that's why they don't like the interface, but even slowdowns in the 10s of ms range can make a measurable difference to user sentiment.
Most regular people buy a new phone when their old one has "gotten slow". And why do phones get slow? "What Andy giveth, Bill taketh away."
In tech circles regular people are believed to be stupid and blind. They are neither. People notice when apps get slower and less usable over time. It's impossible not to.
> They won’t notice those extra 10 milliseconds you save
They won't notice if this decision happens once, no. But if you make a dozen such decisions over the course of developing a product, then the user will notice. And if the user has e.g. old hardware or slow Internet, they will notice before a dozen such decisions are made.
In my career of writing software, I've found most developers are fully incapable of measuring things. They instead make completely unfounded assertions that are wrong more than 80% of the time, and when they are wrong they tend to be wrong by several orders of magnitude.
And yes, contrary to many comments here, users will notice that 10ms saved if it’s on every key stroke and mouse action. Closer to reality though is sub-millisecond savings that occurs tens of thousands of times on each user interaction that developers disregard as insignificant and users always notice. The only way to tell is to measure things.
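Agreed, and the tooling for this is cheap. A rough sketch with hyperfine (one option among many; ./app-old and ./app-new are placeholders for whatever two builds you are comparing):

$ hyperfine --warmup 3 './app-old --help' './app-new --help'   # runs each many times, reports mean ± sigma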
When I was at Google, our team kept RUM metrics for a bunch of common user actions. We had a zero regression policy and a common part of coding a new feature was running benchmarks to show that performance didn't regress. We also had a small "forbidden list" of JavaScript coding constructs that we measured to be particularly slow in at least one of Chrome/Firefox/Internet Explorer.
Outside contributors to our team absolutely hated us for it (and honestly some of the people on the team hated it too); everyone likes it when their product is fast, and nobody likes being held to the standard of keeping it that way. When you ask them to rewrite their functional coding as a series of `for` loops because the function overhead is measurably 30% slower across browsers[0], they get so mad.
[0] This was in 2010, I have no idea what the performance difference is in the Year of Our Lord 2025.
> They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.
I completely agree. It blows my mind how fifteen minutes of testing something gets replaced with a guess. The most common situation I see this in (over and over again) is with DB indexes.
The query is slow? Add a bunch of random indexes. Let's not look at the EXPLAIN and make sure the index improves the situation.
I just recently worked with a really strong engineer who kept saying we were going to need to shard our DB soon, but we're way too small of a company for that to be justified. Our DB shouldn't be working that hard (it was all CPU load), so there had to be a bad query in there. He even started drafting plans for sharding because he was adamant that it was needed. Then we checked RDS Performance Insights and saw it was one rogue query (as one should expect). It was about a 45-minute fix, and after downsizing one notch on RDS, we're sitting at about 4% most of the time on the DB.
But this is a common thing. Some engineers will _think_ there's going to be an issue, or when there is one, completely guess what it is without getting any data.
Another anecdote from a past company was them upsizing their RDS instance way more than they should need for their load because they dealt with really high connection counts. There was no way this number of connections should be going on based on request frequency. After a very small amount of digging, I found that they would open a new DB connection per object they created (this was PHP). Sometimes they'd create 20 objects in a loop. All the code was synchronous. You ended up with some very simple HTTP requests that would cause 30 DB connections to be established and then dropped.
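All of those are a few minutes of looking at data rather than guessing. A rough sketch for Postgres (the table and query are made up; on RDS the same numbers show up in Performance Insights):

$ psql mydb -c 'EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;'   # does it actually use an index?
$ psql mydb -c 'SELECT count(*), state FROM pg_stat_activity GROUP BY state;'              # how many connections are really open?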
My Plex server was down and my lazy solution was to connect directly to the NAS. I was surprised just how much I noticed the responsiveness after getting used to web players. A week ago I wouldn't have said the web player bothered me at all. Now I can't not notice.
Can you show me any longitudinal studies that show examples of a causal connection between incrementality of latency and churn? It’s easy to make such a claim and follow up with “go measure it”. That takes work. There are numerous other things a company may choose to measure instead that are stronger predictors of business impact.
There is probably some connection. Anchoring to 10ms is a bit extreme IMO because it’s indirectly implying that latency is incredibly important which isn’t universally true - each product’s metrics that are predictive of success are much more nuanced and may even have something akin to the set of LLM neurons called “polysemantic” - it may be a combination of several metrics expressed via some nontrivial function that are the best predictor.
For SaaS, if we did want to simplify things and pick just one - usage. That’s the strongest churn signal.
Takeaway: don’t just measure. Be deliberate about what you choose to measure. Measuring everything creates noise and can actually be detrimental.
Soooooo, my "totally in the works" post about how direct connection to your RDMS is the next API may not be so tongue in cheek. No rest, no graphQL, no http overhead. Just plain SQL over the wire.
Authentication? Already baked-in. Discoverability? Already in. Authorization? You get it almost for free. User throttling? Some offer it.
I find it fascinating that HN comments always assert that 10ms matters in the context of user interactions.
60Hz screens don't even update every 10ms.
What's even more amazing is that the average non-tech person probably won't even notice the difference between a 60Hz and a 120Hz screen. I've held 120Hz and 60Hz phones side by side in front of many people, scrolled on both of them, and had the other person shrug because they don't really see a difference.
The average user does not care about trivial things. As long as the app does what they want in a reasonable amount of time, it's fine. 10ms is nothing.
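For reference, the frame-interval arithmetic behind the 60Hz point:

$ python3 -c 'print(round(1000/60, 1), round(1000/120, 1))'
16.7 8.3

i.e. a 60Hz display refreshes every ~16.7ms and a 120Hz one every ~8.3ms, so a 10ms saving is smaller than a single frame on most screens.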
Multiply that by the number of users and total hours your software is used, and suddenly it's a lot of wasted Watts of energy people rarely talk about.
But they still don't care about your stack. They care that you made something slow.
Fix that however you like but don't pretend your non-technical users directly care that you used go vs java or whatever. The only time that's relevant to them is if you can use it for marketing.
> Amazon Found Every 100ms of Latency Cost them 1% in Sales
I see this quoted a lot, but Amazon has become 5x slower (guesstimate) and it doesn't seem like they are working on it as much. Sure, the home page loads "fast", ~800ms over fiber, but clicking on a product routinely takes 2-3 seconds to load.
Amazon nowadays has a near monopoly and is powered by ad money, given the low margin on selling products versus ad spend. So unless you happen to be in the same position, using them as an example nowadays isn't going to be very helpful. If they increased sales 20% at the cost of 1% less ad spend, they'd probably be at a net loss as a result.
So you're kinda falling into a fallacy here. You're taking a specific example and trying to make a general rule out of it. I also think the author of the article is doing the same thing, just in a different way.
Users don't care about the specifics of your tech stack (except when they do) but they do care about whether it solves their problem today and in the future. So they indirectly care about your tech stack. So, in the example you provided, the user cares about performance (I assume Rippling know their customer). In other examples, if your tech stack is stopping you from easily shipping new features, then your customer doesn't care about the tech debt. They do care, however, that you haven't given them any meaningful new value in 6 months but your competitor has.
I recall an internal project where a team discussed switching a Python service with Go. They wanted to benchmark the two to see if there was a performance difference. I suggested from outside that they should just see if the Python service was hitting the required performance goals. If so, why waste time benchmarking another language? It wasn't my team, so I think they went ahead with it anyway.
I think there's a balance to be struck. While users don't directly care about the specific tech, they do care about the results – speed, reliability, features. So, the stack is indirectly important. Picking the right tools (even if there are several "good enough" options) can make a difference in delivering a better user experience. It's about optimizing for the things users do notice.
They absolutely do not. In fact, relatively few do. Every single Electron app (which is a depressing number of apps) is a bloated mess. Most web pages are a bloated mess where you load the page and it isn't actually loaded, visibly loading more elements as the "loaded" page sits there.
Software sucks to use in 2025 because developers have stopped giving a shit about performance.
This is so true, and yet. . . Bad, sluggish performance is everywhere. I sometimes use my phone for online shopping, and I'm always amazed how slow ecommerce companies can make something as simple as opening a menu.
This is a mixed bag of advice. While it seems wise at the surface, and certainly works as an initial model, the reality is a bit more complex than aphorisms.
For example, what you know might not provide the cost-benefit ratio your client needs. Or the performance. If you only know Cloud Spanner, but now there is a need for a small relational table? These maxims have obvious limitations.
I do agree that the client doesn't care about the tech stack. Or that seeking a gold standard is a MacGuffin. But it goes much deeper than that. Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
A good engineer balances tradeoffs and solves problems in a satisfying way sufficing all requirements. That can be MySQL and Node. But it can also be C++ and Oracle Coherence. Shying away from a tool just because it has a reputation is just as silly as using it for a hype.
> Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
Your customer does care about how quickly you can iterate new features over time, and product stability. A stack with a complex mix of technologies is likely to be harder to maintain over the longer term.
That's also an aphorism that may or may not correspond to reality.
Not only are there companies with highly capable teams that are able to move fast using a complex mix of technologies, but there are also customers who have very little interest in new features.
This is the point of my comment: these maxims are not universal truths, and taking them as such is a mistake. They are general models of good ideas, but they are just starter models.
A company needs to attend to its own needs and solve its own problems. The way this goes might be surprisingly different from common sense.
One person writing a stack in 6 languages is different from a team of 100 using 6 languages.
The problem emerges if you have some eccentric person who likes using a niche language no one else on the team knows. Three months into development they decide they hate software engineering and move to a farm in North Carolina.
Who else is going to be able to pick up their tasks? Are you going to be able to quickly onboard someone else, or are you going to have to hire someone new with a specialty in this specific language?
This is a part of why NodeJS quickly ate the world. A lot of web studios had a bunch of front end programmers who were already really good with JavaScript. While NodeJS and frontend JS aren't 100% the same, it's not hard to learn both.
Try to get a front end dev to learn Spring in a week...
> Your customer does care about how quickly you can iterate new features over time
How true this is depends on your particular target market. There is a very large population of customers that are displeased by frequent iterations and feature additions/changes.
The author didn't say to listen to the opinion of others, hype or not. The author said to "set aside time to explore new technologies that catch your interest ... valuable for your product and your users. Finding the right balance is key to creating something truly impactful."
It means we should make our own independent, educated judgement based on the need of the product/project we are working on.
I heavily dislike GraphQL for all of the reasons. But I'll say that for a lot of developers, if you are already setting up an API gateway, you might as well batch the calls, and simplify the frontend code.
C++ is often the best answer for users, but this is about how bad the other options are, not that C++ is good. Options like Rust don't have the mature frameworks that C++ does. (rust-qt is often used as a hack instead of a pure Rust framework). There is a big difference between modern C++ and the old C++98 as well, and the more you force your code to be modern C++ the less the footguns in C++ will hit you. The C++ committee is also driving forward in eliminating the things people don't like about C++.
Users don't care about your tech stack. They care about things like battery life, how fast your program runs, and how fast your program starts - places where C++ does really well. (C, Rust... also do very well.) Remember this is about real-world benchmarks; you can find micro benchmarks where Python is just as fast as well-written C, but if you write a large application in Python it will be 30-60 times slower than the same thing written in C++.
Note however that users only care about security after it is too late. C++ can be much better than C, but since it is really easy to write C style code in C++ you need a lot more care than you would want.
If, for your application, Rust or Ada does have mature enough frameworks to work with, then I wouldn't write C++, but all too often the long history of C++ means it is the best choice. In some applications managed languages like Java work well, but in others the limits of the runtime (startup, worse battery life) make it a bad choice. Many things are scripts you won't run very much, and so Python is just fine despite how slow it is. Make the right choice, but don't call C++ a bad choice just because for you it is bad.
It's true, and of course, all models are wrong, especially as you go into deeper detail, so I can't really argue an edge case here. Indeed, C++ is rarely the best answer. But we all know of trading systems and gaming engines that rely heavily on C++ (for now, may Rust keep growing).
It would be funny if it weren’t tragic. So many of the comments here echo the nonsense of my software career: developers twisting themselves in knots to justify writing slow software.
I've not seen a compelling reason to start the performance fight in ordinary companies doing CRUD apps. Even if I was good at performance, I wouldn't give that away for free, and I'd prefer to go to companies where it's a requirement (HFT or games), which only furthers the issue of slowness being ubiquitous.
For example, I dropped a 5s paginated query doing a weird cross join to ~30ms and all I got for that is a pat on the back. It wasn't skill, but just recognizing we didn't need the cross join part.
We'd need to start firing people who write slow queries, forcing them to become good, or pay more for developers who know how to measure and deliver performance, which I also don't think is happening.
For 99% of apps slow software is compensated by fast hardware. In almost all cases, the speed of your software does not matter anymore.
Unless speed is critical, you can absolutely justify writing slow software if it's more maintainable that way.
And thus when I clicked on the link to an NPR story just now, it was 10 seconds before the page was readable on my computer.
Now my computer (Pinebook Pro) was never known as fast, but it still runs circles around the first computer I ran a browser on. (I'm not sure which computer that was, but likely the CPU was running at 25MHz; it could have been an 80486 or a SPARC CPU though - now get off my lawn you kids)
This is a fairly classic rant. Many have gone before, and many will come after.
I have found that it's best to focus on specific tools, and become good with them, but always be ready to change. ADHD-style "buzzword Bingo" means that you can impress a lot of folks at tech conferences, but may have difficulty reliably shipping.
I have found that I can learn new languages and "paradigms" fairly quickly, but becoming really good at them takes years.
That said, it's a fast-changing world, and we need to make sure that we keep up. Clutching onto old tech, like My Precioussss, is not likely to end well.
What do you think of Elixir in that regard? It seems to be evolving in parallel to current trends, but it still seems a bit too niche for my taste. I'm asking because I'm on the fence on whether I should/want to base my further server-side career on it. My main income will likely come from iOS development for at least a few more years, but some things feel off in the Apple ecosystem, and I feel the urge to divest.
I've been working in Elixir since 2015. I love the ecosystem and think it's the best choice for building a web app from a pure tech/stability/scalability/productivity perspective (I also have a decade+ of experience in Ruby on Rails, Node.js, and PHP Laravel, plus Rust to a lesser extent).
I am however having trouble on the human side of it. I've got a strong resume, but I was laid off in Nov 2024 and I'm having trouble even getting Elixir interviews (with 9+ years of production Elixir experience!). Hiring people with experience was also hard when I was the hiring manager. It is becoming less niche these days. I love it too much to leave for other ecosystems in the web sphere.
Elixir can be used for scripting tasks; config and test rigs are usually scripts. In theory you can use the platform for desktop GUIs too; one of the bespoke monitoring tools is built that way. For a few years now there have been libraries for numeric and ML computing as well.
> Look at what 37Signals is able to do with [1] 5 Product Software Engineers. their output is literally 100x of their competitors.
The linked Tweet thread says they have 16 Software Engineers and a separate ops team that he's not counting for some reason.
There are also comments further down that thread about how their "designers" also code, so there is definitely some creative wordplay happening to make the number of programmers sound as small as possible.
Basecamp (37Signals) also made headlines for losing a lot of employees in recent years. They had more engineers in the past when they were building their products.
Basecamp is also over 20 years old and, to be honest, not very feature-filled. It's okay-ish if your needs fit within their features, but there's a reason it's not used by a lot of people.
DHH revealed their requests per second rate in a Twitter argument a while ago and it was a surprisingly low number. This was in the context of him claiming that he could host it all on one or two very powerful servers, if I recall correctly.
When discussing all things Basecamp (37Signals) it's really important to remember that their loud internet presence makes them seem like they have a lot more users than they really do. They've also been refining basically the same product for two decades and had larger teams working in the past.
Just joining all the other comments to say there's a split between:
- users don't care about your tech stack
- you shouldn't care about your tech stack
I don't on-paper care what metal my car is going to be made of, I don't know enough information to have an opinion. But I reeaaally hope the person designing it has a lot of thoughts on the subject.
I find it funny that this message resurfaces on the front page once or twice a year, as it has for at least 10 years now.
Product quality is often not the main argument advanced when deciding on a tech stack, only indirectly. Barring any special technical requirements, in the beginning what matters is:
- Can we build quickly without making a massive mess?
- Will we find enough of the right people who can and want to work with this stack?
- Will this tech stack continue to serve us in the future?
Imagine it's 2014 and you're deciding between two hot new frameworks, Ember and React; this is not just a question of what is hot or shiny and new.
There's an obvious solution to "language doesn't matter". Let the opinionated people pick the stack. Then you satisfy the needs of the people who care and those who don't care.
This discussion is not about technology. It's about technical people learning that business, product and users are actually important. The best advice I can give technical people about working at startups is that you should learn everything you can about business. You can do that at a startup much easier than at a big tech company. Spend as much time as you can with your actual users, watching them use your product. It will help you communicate with the rest of the team, prioritize your technical tasks, and help you elevate your impact.
Problem is your hiring manager at a startup will still care whether you're an expert in the stack-du-jour. So technical people aren't incentivised to care about the business.
> Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that. They won’t notice those extra 10 milliseconds you saved, nor will their experience magically improve just because you’re using the latest JavaScript framework.
Users care about performance. The per-user-action latency is >10 ms (unless you're in the bootstrap phase).
> What truly makes a difference for users is your attention to the product and their needs.
False dichotomy.
> Every programming language shines in specific contexts. Every framework is born to solve certain problems. But none of these technical decisions, in isolation, will define the success of your product from a user’s perspective.
Yes so evaluate your product's context, choose your tools, frameworks, and languages accordingly.
I worked on a small team with 2 or 3 backend Elixir devs as the sole JavaScript / TypeScript front end, React Native app, micro services running in node, browser automation developer. It was easiest for me to write a backend service in JavaScript and expose an interface for them to integrate with rather than wait for them to get around to building it. The services were usually small under a couple thousand lines of code and if they wanted to, they could translate the service to Elixir since the business logic was usually hardened and solved. One service might scrape data, store it in S3 buckets, and then process it when requested storing / caching the results in a Postgres database.
Here is the important part: the automated browser agents I built were core to the company's business model. Even today nobody can accomplish in any other language than JavaScript what I was doing because it requires injecting JavaScript into third party websites with the headless browser. Even if the surface area was small, the company was 100% dependent on JavaScript.
The issue is that they are huge Elixir fanboys. In the Monday morning meeting, G. would start talking about how much JavaScript sucks and how they should start to move all the front end code to LiveView. Every couple weeks ... "we should migrate to LiveView." Dude, I'm sitting right here, I can hear you. Moreover, your job depends on at least one person writing JavaScript, as shitty a language as it might be. I don't think he understood that he was threatening my job. The fanboy Elixir conversations between the 3 of them always made me feel like a second class citizen.
I'm one of those fanboys. I've done the react, angular etc front end thing. LiveView just absolutely smokes SPAs and other JS rats-nests in terms of productivity, performance and deployment simplicity (for certain types of apps). The fact that you don't have to write 6 layers of data layer abstraction alone is worth it.
And don't get me wrong, I even like things like Nuxt and have a few products on Astro (great framework btw). Agree regarding browser automation, not many options there so your gig is safe for now. But do play with LiveView, it's pretty special.
I'm going to agree with you that Elixir / Erlang is the most productive and will back it up with some data: Elixir developers are the highest paid because they generate the most value. [0] Nonetheless, LiveView isn't a viable solution for a lot of what I do. Because of that, it is important to have developers who know and understand how to use JavaScript.
A mixture of contempt and resentment towards JavaScript makes developers worse engineers.
What will happen now that you're gone is this: one of them will encounter a trivial problem working in a language they don't understand. They could solve it by consulting some documentation, but they won't do that. Instead they will make a huge fuss and embark on a six-month project to rewrite everything in LiveView.
Like all rewrites it will be much harder than they think, come out worse than they hoped and fail to solve any customer problems.
Many comments arguing that the right stack and good "clean" code will then lead to user-appreciated performance improvements.
More often I've seen this used by developers as an excuse to yak-shave "optimizations" that deliver no (or negative) performance improvements. e.g. "Solving imaginary scaling problems ... at scale!"
Maybe not, but I do, and I hope anyone else who works in the space does as well. I'm not a big fan of this argument of cutting all "inefficient attention" out of what should be our craft. I want to take pride in my work, even if my users don't share that feeling
Indeed, hence why I always ask back, when someone proposes a rewrite, what the business value is.
Meaning: given the amount of money spent on developer salaries for the rewrite timeframe, how does that reflect on what the business is doing, and how is the business going to get that investment back?
> There are no “best” languages or frameworks. There are only technologies designed to solve specific problems, and your job is to pick the ones that fit your use case
What if multiple technologies fit our use case? There will be one that fits the use case “best”.
The user has never been the primary factor in my choice of tech stack. Just like my employer doesn't care what car I drive to get to work. It's mostly about the developers and the best tools available to them at that point in time.
No! This is great advice if you are working on a personal project but terrible advice in all other scenarios. Use the stack that solves your problem, not the stack you are simply comfortable with.
I don't agree with the article's central premise. It assumes that tech stack choices should be driven solely by what the end user cares about.
In reality, selecting a tech stack is about more than the user side of things, it’s a strategic decision where factors like cost efficiency, the ease of hiring new developers, long-term maintainability, and the likelihood that the technology will still be relevant in five years are all critical.
These considerations directly impact how quickly and effectively a team can build and scale a product, even if the end user never sees the tech stack at work.
I am working as a contractor on a product where my principal incentives revolve around meeting customer-facing milestones. I am not being paid a dime for time spent on fancy technology experiments.
It has been quite revealing working with others on the same project who are regular salaried employees. The degree to which we ensure technology is well-aligned with actual customer expectations seems to depend largely on how we are compensated.
I think the folks who lost access to funds due to an implementation choice of Synapse would disagree with this statement.
They don't care about most implementation details, but choice of 3rd parties to manage functionality and data does matter to your users when it breaks. And it will.
Sure, the language and framework you choose are less important to them, per parent post. But it's a slippery slope; don't assume that they are agnostic to _all_ your technical choices.
More importantly for open source projects with a lot of users, don't advertise a bunch of techno mumbo-jumbo on your website. That either means nothing to users, or may even put them off. Sure, have a link for developers that goes in to all that stuff so they can decide to contribute or build for source or whatever. Just keep it off the main page - it's meaningless to the general public.
This feels like one of those comments you hear in a tech stack meeting—something that doesn’t change the discussion but fills space, like a “nothing from my end.”
“So, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
'Hey guys, users don’t care about the tech stack. We need to focus on their needs.'
“Uh… right. So, users like speed, yeah? Anyway, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
Nobody is making the argument that users care about your tech stack. I've literally never heard a dev justify using a library because "users care about it". Nobody.
They do not care about your stack, but do care that the stack works. Use what you're familiar with, sure, but if that does not produce a reliable system, your user will not use it.
Sometimes I have to remind myself of this. Take for example Bring a Trailer. It is a wordpress site. I know you're rolling your eyes or groaning at the mention of wordpress. It works. It is hugely successful in the niche it is in.
This is dumb. Of course they don't care about the tech stack directly but they obviously care about the things that it affects - performance, reliability, features etc.
To top it off, it also includes a classic "none of them are perfect so they are all the same and it doesn't matter which you choose". I recently learnt the name for this: the fallacy of grey.
People are using machines dozens of times more powerful than machines from 15 years ago, but do they do things that are materially different to what they did before? Not really.
They absolutely do care, even if they cannot articulate what's wrong.
> I still often find myself in discussions where the main focus is the choice of technologies. Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.
When developers are having those discussions, are they ever doing so in relation to some hypothetical user caring? This feels like a giant misdirection strawman.
When I discuss technology and framework choices with other developers, the context is the experience for developers. And of course business considerations come into play as well: Can we find lots of talent experienced in this set of technologies? Is it going to efficiently scale in a cost effective manner? Are members of the team going to feel rewarded mastering this stack, gaining marketable skills? And so on.
There are a lot of vacancies for technical co-founders with a preference for a particular stack, which usually comes from some advisor or investor. A pretty dumb filter, given that it's often a Node/React combo. It is understandable where it comes from, but still… dumb.
Lately, I've been thinking that LLMs will lift programming to another level anyway: the level of specification in natural language, with some formal descriptions mixed in. LLMs will take care of transforming this into actual code. So not only will users not care about the programming, but neither will the developers. Switching the tech stack might become a matter of minutes.
If it simply generates code from natural language, then I am still fundamentally working with code. Aider, as an example, is useful for this, but for anything that isn't a common function/component/class it falls apart, even with flagship models.
If I actually put my "natural language code" under git, then it'll lack specificity at compile time, likely leading to large inconsistencies between versions. This is a horrible user experience - like the random changes Excel makes every few years, but every week instead.
And everyone that has migrated a somewhat large database knows it isn't doable within minutes.
I don't think one would put only the specification in Git. LLMs are not a reliable compiler.
Actual code is still the important part of a business. However, how this code is developed will drastically change (some people actually work already with Cursor etc.). Imagine: If you want a new feature, you update the spec., ask an LLM for the code and some tests, test the code personally and ship it.
I guess no one would hand over the control of committing and deployment to an AI. But for coding yes.
For me, the Rust hello-world debug build is 3.8M:
3.8M Feb 21 11:56 target/debug/helloworld
> When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
No, it means that product quality is all that matters. The users don't care how you make it work, only that it works how they want it to.
Look at every single discussion about Electron ;)
"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.
17 replies →
Businesses need to learn that, like it or not, code quality and architecture quality is a part of product quality
You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time
This is why startups can outcompete incumbents sometimes
Suddenly there's a market shift and a startup can actually build your entire product and the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture has atrophied to the point it takes longer to update it than it would to rebuild from scratch
Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen
1 reply →
> No, it means that product quality is all that matters
But it says that in such a roundabout way that non technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.
1 reply →
> product quality is all that matters
I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.
I don't think that's broadly true. The unfortunate truth about our profession is that there is no floor to how bad code can be while yet generating billions of dollars.
1 reply →
If users care so much about product quality, why is everyone using the most shitty software ever produced — such as Teams?
For 99% of users, what you describe really isn't something they know or care about.
Yes, this is exactly what the article means.
"Use whatever technologies you know well and enjoy using" != "Use the tech stack that produces the highest quality product".
alacritty (in the Arch repo) is 8MB decompressed.
alacritty is also written in Rust and GPU-accelerated, so that other 100MB terminal emulator must just be plain bad.
Edit: Just tried turning on a couple of bin-size optimizations, which yielded a 3.3M binary.
> In practice this argument is used to justify bloated apps
Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you have a successful, proven product/app already, performance gains are a good focus).
It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is the perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.
I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even the most popular game last year, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.
I saw an HN post recently where a classic HN commentator was angry that another person was using .NET Blazor for a frontend, with the mandatory 2MB~3MB WASM module download.
He responded by saying that he wasn’t a front-end developer, and to build the fancy lightweight frontend would be extremely demanding of him, so what’s the alternative? His customers find immensely more value in the product existing at all, than by its technical prowess. Doing more things decently well is better than doing few things perfectly.
Although, look around here - the world's greatest tech stack would be shredded here because the images weren't perfectly resized to pixel-perfect fit their frames, forcing the browser to resize the image, which is slower, and wastes CPU cycles every time, when it could have been done only once server side, oh the humanity, think about how much ice you've melted on the polar ice caps with your carelessness.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)
Debug symbols aren't cheap. A release build with a minimal configuration (linked below) gets that down to 263kb.
https://stackoverflow.com/questions/29008127/why-are-rust-ex...
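For reference, the usual size-trimming knobs look roughly like this in Cargo.toml (a sketch; exact savings vary by crate and Rust version):
[profile.release]
strip = true        # strip symbols from the binary
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # trade compile time for better optimization
panic = "abort"     # drop the unwinding machinery
$ cargo build --release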
My point was that many programmers have no conception of how much functionality you can fit in a program of a few megabytes.
Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.
How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.
The executable is 175 bytes.
[1] https://pastebin.com/p7VzLYxS
(Again, the point is not that Rust is bad or bloated but that people forget that 1 megabyte is actually a lot of data.)
Thanks for pointing this out.
It does seem weird to complain about the file size of a debug build not a release build.
In my experience, the people who make these arguments often don't even know their own tech stack of choice well enough to make it work halfway efficiently. They say 10ms but that assumes someone who knows the tech stack and the tradeoffs and can optimize it. In their hands it's going to be 1+ seconds and becomes such a tangled mess it can't be optimized down the line.
I like this take, though deadlines do force you to make some tradeoffs. That's the conclusion I've come to.
I do think people nowadays over-index on iteration/shipping speed over quality. It's an escape. And it shows, when you "ship".
If you want to make something that starts instantly you can't use electron or java.
This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.
> Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice. All else equal users will absolutely choose the zippiest products.
Semi-dependent.
Default Java libraries being piles upon piles of abstractions… those were, and for all I know still are, a performance hit.
But that didn't stop Mojang, amongst others. Java can be written "fast" if you ignore all the stuff the standard library is set up for and think in low-level, C-like manipulation of int arrays (not Integer the class, int the primitive type) rather than AbstractFactoryBean etc. — and you keep going from there, with that attitude, because there's no "silver bullet" to software quality, no "one weird trick is all you need". You can make software in (almost!) any language fast if you focus on doing that and refuse to accept solutions that are merely "ok", given that we had DOOM running in real time with software rendering in the 90s on things less powerful than the microcontroller in your USB-C power supply[0] or HDMI dongle[1].
[0] http://www.righto.com/2015/11/macbook-charger-teardown-surpr...
[1] https://www.tomshardware.com/video-games/doom-runs-on-an-app...
Of course, these days you can't run applets in a browser plugin (except via a JavaScript abstraction layer :P), but a similar thing is true with the JavaScript language, though here the trick is to ignore all the de-facto "standard" JS libraries like jQuery or React and limit yourself to the basics, hence the joke-not-joke: https://vanilla-js.com
> But that didn't stop Mojang, amongst others
Stop them from... making one of the most notoriously slow and bloated video games ever? Like, just look at the number of results for "Minecraft" "slow"
Microsoft rewrote Minecraft in C++, so maybe not the best example?
> If you want to make something that starts instantly you can't use electron
Hmmm, VScode starts instantly on my M1 Mac
Slack's success suggests you're wrong about bloat being an issue. Same with almost every mobile app.
The iOS YouTube app is 300meg. You could reproduce the functionality in 1meg. The TikTok app is 597meg. Instagram 370meg. X app 385meg, Slack app 426meg, T-Life (no idea what it is but it's #3 on the app store, 600meg)
Users don't care about bloat.
> All else equal users will absolutely choose the zippiest products.
Only a small subset of users actually do this, because there are many other reasons that people choose software. There are plenty of examples of bloated software that is successful, because those pieces of software deliver value other than being quick.
Vanishingly few people are going to choose a 1mb piece of software that loads in 10ms over a 100mb piece of software that loads in 500ms, because they won't notice the difference.
Yes, there are other reasons people choose software. That's why GP said all else equal. You just ignored the most important part of his post.
That's a poor example, as users genuinely don't care about download file size or installed size, within reason. Nobody in the West is sweating a 200MB download.
Users will generally balk at 2000MB though. ie, there's a cutoff point somewhere between 200MB and 2000MB, and every engineering decision that adds to the package size gets you closer to it.
This comment makes no sense.
The reason why people aren't sweating 200mb is because everything has gotten to be that big. Change that number to 2 terabytes.
And guess what? In 5 years' time, someone will say "Nobody in the West is sweating a 2 TB download" because it keeps increasing.
"All else equal users will absolutely choose the zippiest products."
As a "user", it is not only "zippiest" that matters to me. Size matters, too. And, in some cases, the two are related. (Rust? Try compiling on an underpowered computer.^1)
"If you want to make something that starts instantly you can't use Electron or Java."
Nor Python.
1. I do this every day with C.
It’s a little off topic but I used to think Go binaries were too big. I did a little evaluation at some point and changed my mind. Sounds like they are still bigger than Rust.
https://joeldare.com/small-go-binaries
Had it been 18mb or 180mb, would it change anything? It takes seconds to download, seconds to install. Modern computers are fast.
It’s probably best to compare download sizes on a logarithmic scale.
18mb vs 180mb is probably the difference between an instant download and ~30 seconds. 1.8gb is gonna make someone stop and think about what they’re doing.
But 18mb vs 9mb is not significant in most cases.
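(Rough numbers, assuming a ~50 Mbit/s connection: 18 MB is about 3 seconds, 180 MB about 30 seconds, and 1.8 GB close to 5 minutes. On gigabit fibre all of those shrink by roughly 20x, which is part of why the "acceptable" threshold keeps drifting upward.)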
Said it better than I could. It's always a deflection from the fact that whoever is saying it doesn't know anything about their industry.
Yes, as a user I definitely shudder at Electron-based apps or anything built on the JVM.
I know plenty of non-technical users who still dislike Java, because its usage was quite visible in the past (you had to install the JVM) and lots of desktop apps made with it were horrible.
My father, for example, dislikes it because it was the tech his previous bank chose.
Electron is way sneakier, so people just complain about Teams or something like that.
I strongly agree with this sentiment. And I realize that we might not be the representative of the typical user, but nonetheless, I think these things definitely matter for some subset of users.
Love this reply and learned a new term from it.
> But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
It's an incredibly effective argument to shut down people pushing for the new shiny thing just because they want to try it.
Some people are gullible enough to read some vague promises on the homepage of a new programming language or library or database and they'll start pushing to rewrite major components using the new shiny thing.
Case in point: I've worked at two very successful companies (one of them reached unicorn-level valuation) that were fundamentally built using PHP. Yeah, that thing that people claim has been dead for the last 15 years. It's alive, kicking and screaming. And it works beautifully.
> If you want to make something that starts instantly you can't use electron or java.
You picked the two technologies that are the worst examples for this.
Electron: Electron has essentially breathed new life into GUI development, which almost nobody was doing anymore.
Java: modern Java is crazy fast nowadays, and on a decent computer your code gets to the entry point (main) in less than a second. Whatever slows it down is a codebase problem, not the JVM.
> If you want to make something that starts instantly you can't use electron or java. You can't use bloated libraries. Because users do notice.
Most users do not care at all.
If someone is sitting down for an hour-long gaming session with their friends, it doesn't matter if Discord takes 1 second or 20 seconds to launch.
If someone is sitting down to do hours of work for the day, it doesn't matter if their JetBrains IDE or Photoshop or Solidworks launches instantly or takes 30 seconds. It's an entirely negligible amount.
What they do care about is that the app works, gives them the features they want, and gets the job done.
We shouldn't carelessly let startup times grow and binaries become bloated for no reason, but it's also not a good idea to avoid helpful libraries and productivity-enhancing frameworks to optimize for startup time and binary size. Those are two dimensions of the product that matter the least.
> All else equal users will absolutely choose the zippiest products.
"All else equal" is doing a lot of work there. In real world situations, the products with more features and functionality tend to be a little heavier and maybe a little slower.
Dealing with a couple seconds of app startup time is nothing in the grand scheme of people's work. Entirely negligible. It makes sense to prioritize features and functionality over hyper-optimizing a couple seconds out of a person's day.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb.
Okay. Comparing a debug build to a released app is a blatantly dishonest argument tactic.
I have multiple deployed Rust services with binary sizes in the 1-2MB range. I do not care at all how large a "Hello World" app is because I'm not picking Rust to write Hello World apps.
Users don't care if your binary is 3 or 4 mb. They might care if the binary was 3 or 400 mb. But then I also look at our company that uses Jira and Confluence and it takes 10+ seconds to load a damn page. Sometimes the users don't have a say.
> what they really mean is that product quality doesn’t matter.
But does it matter? I think the only metric worth optimising for is latency. The other stuff is just something we do.
> In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting.
Yes, yes it is. But they were going to do it anyway. Even if people were to stop accepting this argument, they'll just start using another one.
Startup culture is never going to stop being startup culture and complacent corporations are never going to stop being complacent.
As the famous adage goes: If you want it done right, you gotta do it yourself.
> File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download.
File Pilot is... seemingly a fully-featured GUI file explorer in 1.8mb, complete with animations?
Dude. What.
Yes, there is a generation of programmers that doesn't believe something like File Pilot is even possible.
Pretty sure only techies care about that; an average user on their 10-year-old device couldn't care less whether it took 0.1s or 5s to start.
Nice to have, not a must.
There's been a fair bit of research on this. People don't like slow interfaces. They may not necessarily _recognise_ that that's why they don't like the interface, but even slowdowns in the 10s of ms range can make a measurable difference to user sentiment.
Most regular people buy a new phone when their old one has "gotten slow". And why do phones get slow? "What Andy giveth, Bill taketh away."
In tech circles regular people are believed to be stupid and blind. They are neither. People notice when apps get slower and less usable over time. It's impossible not to.
The big problem is that most of the time users do not have options. Very often there are no better-performing alternatives.
Apart from when it's optional consumption, say games.
> They won’t notice those extra 10 milliseconds you save
They won't notice if this decision happens once, no. But if you make a dozen such decisions over the course of developing a product, then the user will notice. And if the user has e.g. old hardware or slow Internet, they will notice before a dozen such decisions are made.
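(A dozen 10 ms decisions is already ~120 ms, past the roughly 100 ms mark usually cited as the point where an interaction stops feeling instant.)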
In my career of writing software most developers are fully incapable of measuring things. They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.
And yes, contrary to many comments here, users will notice that 10ms saved if it’s on every key stroke and mouse action. Closer to reality though is sub-millisecond savings that occurs tens of thousands of times on each user interaction that developers disregard as insignificant and users always notice. The only way to tell is to measure things.
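(Back-of-the-envelope: 10 ms on each of, say, 20,000 keystrokes and clicks in a working day adds up to over three minutes of pure waiting, and unlike a single three-minute delay it is felt as lag on every interaction.)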
When I was at Google, our team kept RUM metrics for a bunch of common user actions. We had a zero regression policy and a common part of coding a new feature was running benchmarks to show that performance didn't regress. We also had a small "forbidden list" of JavaScript coding constructs that we measured to be particularly slow in at least one of Chrome/Firefox/Internet Explorer.
Outside contributors to our team absolutely hated us for it (and honestly some of the people on the team hated it too); everyone likes it when their product is fast, and nobody likes being held to the standard of keeping it that way. When you ask them to rewrite their functional coding as a series of `for` loops because the function overhead is measurably 30% slower across browsers[0], they get so mad.
[0] This was in 2010, I have no idea what the performance difference is in the Year of Our Lord 2025.
> They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.
I completely agree. It blows my mind how fifteen minutes of testing something gets replaced with a guess. The most common situation I see this in (over and over again) is with DB indexes.
The query is slow? Add a bunch of random indexes. Let's not look at the EXPLAIN and make sure the index improves the situation.
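The check is cheap. A minimal sketch of what I mean, with made-up table and column names (Postgres):
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
-- "Seq Scan on orders ... (actual time=...)": that full table scan is the suspect
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
-- keep the index only if the new plan actually uses it and the actual time drops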
I just recently worked with a really strong engineer who kept saying we were going to need to shard our DB soon, but we're way too small a company for that to be justified. Our DB shouldn't be working that hard (it was all CPU load); there had to be a bad query in there. He even started drafting plans for sharding because he was adamant that it was needed. Then we checked RDS Performance Insights and saw it was one rogue query (as one should expect). It was about a 45-minute fix, and after downsizing one notch on RDS, we're sitting at about 4% most of the time on the DB.
But this is a common thing. Some engineers will _think_ there's going to be an issue, or when there is one, completely guess what it is without getting any data.
Another anecdote from a past company was them upsizing their RDS instance way more than they should need for their load because they dealt with really high connection counts. There was no way this number of connections should be going on based on request frequency. After a very small amount of digging, I found that they would open a new DB connection per object they created (this was PHP). Sometimes they'd create 20 objects in a loop. All the code was synchronous. You ended up with some very simple HTTP requests that would cause 30 DB connections to be established and then dropped.
My Plex server was down and my lazy solution was to connect directly to the NAS. I was surprised just how much I noticed the responsiveness after getting used to web players. A week ago I wouldn't have said the web player bothered me at all. Now I can't not notice.
Can you show me any longitudinal studies that show examples of a causal connection between incrementality of latency and churn? It’s easy to make such a claim and follow up with “go measure it”. That takes work. There are numerous other things a company may choose to measure instead that are stronger predictors of business impact.
There is probably some connection. Anchoring to 10ms is a bit extreme IMO because it’s indirectly implying that latency is incredibly important which isn’t universally true - each product’s metrics that are predictive of success are much more nuanced and may even have something akin to the set of LLM neurons called “polysemantic” - it may be a combination of several metrics expressed via some nontrivial function that are the best predictor.
For SaaS, if we did want to simplify things and pick just one - usage. That’s the strongest churn signal.
Takeaway: don’t just measure. Be deliberate about what you choose to measure. Measuring everything creates noise and can actually be detrimental.
Soooooo, my "totally in the works" post about how direct connection to your RDBMS is the next API may not be so tongue in cheek. No REST, no GraphQL, no HTTP overhead. Just plain SQL over the wire.
Authentication? Already baked-in. Discoverability? Already in. Authorization? You get it almost for free. User throttling? Some offer it.
Caching is for weak apps.
I find it fascinating that HN comments always assert that 10ms matters in the context of user interactions.
60Hz screens don't even update every 10ms.
What's even more amazing is that the average non-tech person probably won't even notice the difference between a 60Hz and a 120Hz screen. I've held 120Hz and 60Hz phones side by side in front of many people, scrolled on both of them, and had the other person shrug because they don't really see a difference.
The average user does not care about trivial things. As long as the app does what they want in a reasonable amount of time, it's fine. 10ms is nothing.
60Hz screens update every 16.7 ms. So if you add 10 ms to your frame time you will probably miss that 16.7 ms window.
Almost everyone can see the difference between 60 and 120 fps. Most probably don't care though.
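(And at 120 Hz the budget is only 1000 / 120 ≈ 8.3 ms per frame, so 10 ms of extra work per frame blows the entire budget on its own.)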
Multiply that by the number of users and total hours your software is used, and suddenly it's a lot of wasted Watts of energy people rarely talk about.
But they still don't care about your stack. They care that you made something slow.
Fix that however you like but don't pretend your non-technical users directly care that you used go vs java or whatever. The only time that's relevant to them is if you can use it for marketing.
That's fine, but I am responding directly to the article.
> [Questions like] “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.
> extra 10 milliseconds you saved
Strawman arguments are no fun, so let's look at an actual example:
https://www.rippling.com/blog/the-garbage-collector-fights-b...
P99 of 3 SECONDS. App stalls for 2-4 SECONDS. All due to Python.
Their improved p99 is 1.5 seconds. Tons of effort and still could only get 1.5 seconds.
https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-...
> Amazon Found Every 100ms of Latency Cost them 1% in Sales
I've seen e-commerce companies with 1 second p50 latencies due to language choices. Not good for sales.
> Amazon Found Every 100ms of Latency Cost them 1% in Sales
I see this quoted, but Amazon has become 5x slower (guesstimate) and it doesn't seem like they are working on it as much. Sure, the home page loads "fast" (~800ms over fiber), but clicking on a product routinely takes 2-3 seconds to load.
Amazon nowadays has a near monopoly, powered by ad money due to the low margin on selling products versus ad spend. So unless you happen to be in the same position, using them nowadays as an example isn't going to be very helpful. If they increased sales 20% at the cost of 1% less ad spend, they'd probably be at a net loss as a result.
So you're kinda falling into a fallacy here. You're taking a specific example and trying to make a general rule out of it. I also think the author of the article is doing the same thing, just in a different way.
Users don't care about the specifics of your tech stack (except when they do) but they do care about whether it solves their problem today and in the future. So they indirectly care about your tech stack. So, in the example you provided, the user cares about performance (I assume Rippling know their customer). In other examples, if your tech stack is stopping you from easily shipping new features, then your customer doesn't care about the tech debt. They do care, however, that you haven't given them any meaningful new value in 6 months but your competitor has.
I recall an internal project where a team discussed switching a Python service with Go. They wanted to benchmark the two to see if there was a performance difference. I suggested from outside that they should just see if the Python service was hitting the required performance goals. If so, why waste time benchmarking another language? It wasn't my team, so I think they went ahead with it anyway.
I think there's a balance to be struck. While users don't directly care about the specific tech, they do care about the results – speed, reliability, features. So, the stack is indirectly important. Picking the right tools (even if there are several "good enough" options) can make a difference in delivering a better user experience. It's about optimizing for the things users do notice.
All modern tech stacks have those properties in 2025.
They absolutely do not. In fact, relatively few do. Every single Electron app (which is a depressing number of apps) is a bloated mess. Most web pages are a bloated mess where you load the page and it isn't actually loaded, visibly loading more elements as the "loaded" page sits there.
Software sucks to use in 2025 because developers have stopped giving a shit about performance.
This is so true, and yet. . . Bad, sluggish performance is everywhere. I sometimes use my phone for online shopping, and I'm always amazed how slow ecommerce companies can make something as simple as opening a menu.
Having worked on similar solutions that use Java and Python, I can't say I agree (the former obviously being much faster).
This is a mixed bag of advice. While it seems wise at the surface, and certainly works as an initial model, the reality is a bit more complex than aphorisms.
For example, what you know might not provide the cost-benefit ratio your client needs. Or the performance. What if you only know Cloud Spanner, but now there is a need for a small relational table? These maxims have obvious limitations.
I do agree that the client doesn't care about the tech stack. Or that seeking a gold standard is a MacGuffin. But it goes much deeper than that. Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
A good engineer balances tradeoffs and solves problems in a way that satisfies all requirements. That can be MySQL and Node. But it can also be C++ and Oracle Coherence. Shying away from a tool just because it has a reputation is just as silly as using it for the hype.
> Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
Your customer does care about how quickly you can iterate new features over time, and product stability. A stack with a complex mix of technologies is likely to be harder to maintain over the longer term.
That's also an aphorism that may or may not correspond to reality.
Not only are there companies with highly capable teams that are able to move fast using a complex mix of technologies, but there are also customers who have very little interest in new features.
This is the point of my comment: these maxims are not universal truths, and taking them as such is a mistake. They are general models of good ideas, but they are just starter models.
A company needs to attend to its own needs and solve its own problems. The way this goes might be surprisingly different from common sense.
How big is your team?
One person writing a stack in 6 languages is different from a team of 100 using 6 languages.
The problem emerges if you have some eccentric person who likes using a niche language no one else on the team knows. Three months into development they decide they hate software engineering and move to a farm in North Carolina.
Who else is going to be able to pick up their tasks? Are you going to be able to quickly onboard someone else, or are you going to have to hire someone new with a specialty in this specific language?
This is a part of why NodeJS quickly ate the world. A lot of web studios had a bunch of front end programmers who were already really good with JavaScript. While NodeJS and frontend JS aren't 100% the same, it's not hard to learn both.
Try to get a front end dev to learn Spring in a week...
> Your customer does care about how quickly you can iterate new features over time
How true this is depends on your particular target market. There is a very large population of customers that are displeased by frequent iterations and feature additions/changes.
The author didn't say listen to the opinion of other, hype or not. The author said "set aside time to explore new technologies that catch your interest ... valuable for your product and your users. Finding the right balance is key to creating something truly impactful.".
It means we should make our own independent, educated judgement based on the need of the product/project we are working on.
> , the reality is a bit more complex than aphorisms.
This is the entire tech blog, social media influencer, devx schtick though. Nuance doesn't sell. Saying "It depends" doesn't get clicks.
> Shying away from a tool just because it has a reputation is just as silly as using it for a hype.
Trying to explain this to a team is one of the most frustrating things ever. Most of the time people pick / reject tools because of "feels".
On a related note, I never understood the hype around GraphQL for example.
I heavily dislike GraphQL for all of the reasons. But I'll say that for a lot of developers, if you are already setting up an API gateway, you might as well batch the calls, and simplify the frontend code.
I don't buy it :) but I can see the reasoning.
I'd say nowadays C++ is rarely the best answer, especially for the users.
C++ is often the best answer for users, but this is about how bad the other options are, not about C++ being good. Options like Rust don't have the mature frameworks that C++ does (rust-qt is often used as a hack instead of a pure Rust framework). There is a big difference between modern C++ and the old C++98 as well, and the more you force your code to be modern C++ the less the footguns in C++ will hit you. The C++ committee is also driving forward in eliminating the things people don't like about C++.
Users don't care about your tech stack. They care about things like battery life, how fast your program runs, and how fast your program starts - places where C++ does really well (C, Rust... also do very well). Remember this is about real-world benchmarks: you can find micro benchmarks where Python is just as fast as well-written C, but if you write a large application in Python it will be 30-60 times slower than the same thing written in C++.
Note however that users only care about security after it is too late. C++ can be much better than C, but since it is really easy to write C style code in C++ you need a lot more care than you would want.
If for your application Rust or Ada does have mature enough frameworks to work with, then I wouldn't write C++, but all too often the long history of C++ means it is the best choice. In some applications managed languages like Java work well, but in others the limits of the runtime (startup, worse battery life) make them a bad choice. Many things are scripts you won't run very much, and so Python is just fine despite how slow it is. Make the right choice, but don't call C++ a bad choice just because for you it is bad.
For real time audio synthesis or video game engines then C++ is the industry standard.
It's true, and of course, all models are wrong, especially as you go into deeper detail, so I can't really argue an edge case here. Indeed, C++ is rarely the best answer. But we all know of trading systems and gaming engines that rely heavily on C++ (for now, may Rust keep growing).
...unless you do HFT...
It would be funny if it weren’t tragic. So many of the comments here echo the nonsense of my software career: developers twisting themselves in knots to justify writing slow software.
I've not seen a compelling reason to start the performance fight in ordinary companies doing CRUD apps. Even if I was good at performance, I wouldn't give that away for free, and I'd prefer to go to companies where it's a requirement (HFT or games), which only furthers the issue of slowness being ubiquitous.
For example, I dropped a 5s paginated query doing a weird cross join to ~30ms and all I got for that is a pat on the back. It wasn't skill, but just recognizing we didn't need the cross join part.
We'd need to start firing people who write slow queries, forcing them to become good, or pay more for developers who know how to measure and deliver performance, which I also don't think is happening.
For 99% of apps slow software is compensated by fast hardware. In almost all cases, the speed of your software does not matter anymore. Unless speed is critical, you can absolutely justify writing slow software if its more maintainable that way.
And thus when I clicked on the link to an NPR story just now, it was 10 seconds before the page was readable on my computer.
Now my computer (Pinebook Pro) was never known as fast, but it still runs circles around the first computer I ran a browser on. (I'm not sure which computer that was, but likely the CPU was running at 25 MHz; could have been an 80486 or a SPARC CPU though - now get off my lawn, you kids)
Those are things developers say to keep themselves employable.
Your users feel otherwise. If you actually care at all about the quality of the software you produce, stop rationalizing slow software.
This is a fairly classic rant. Many have gone before, and many will come after.
I have found that it's best to focus on specific tools, and become good with them, but always be ready to change. ADHD-style "buzzword Bingo" means that you can impress a lot of folks at tech conferences, but may have difficulty reliably shipping.
I have found that I can learn new languages and "paradigms," fairly quickly, but becoming really good at it, takes years.
That said, it's a fast-changing world, and we need to make sure that we keep up. Clutching onto old tech, like My Precioussss, is not likely to end well.
What do you think of Elixir in that regard? It seems to be evolving in parallel to current trends, but it still seems a bit too niche for my taste. I'm asking because I'm on the fence on whether I should/want to base my further server-side career on it. My main income will likely come from iOS development for at least a few more years, but some things feel off in the Apple ecosystem, and I feel the urge to divest.
I've been working in Elixir since 2015. I love the ecosystem and think it's the best choice for building a web app from a pure tech/stability/scalability/productivity perspective (I also have a decade+ of experience in Ruby on Rails, Node.js, and PHP Laravel, plus Rust to a lesser extent).
I am however having trouble with the human side of it. I've got a strong resume, but I was laid off in Nov 2024 and I'm having trouble even getting Elixir interviews (with 9+ years of production Elixir experience!). Hiring people with experience was also hard when I was the hiring manager. It is becoming less niche these days. I love it too much to leave for other ecosystems in the web sphere.
I couldn't even begin to speak to Elixir. Never used it.
Most of my work is client-side (native Apple app development, in Swift).
For server-side stuff, I tend to use PHP (not a popular language, hereabouts). Works great.
Elixir can be used for scripting tasks; config and test rigs are usually scripts. In theory you can use the platform for desktop GUIs too; one of the bespoke monitoring tools is built that way. For a few years now there have also been libraries for numeric and ML computing.
Users don't care - but users care how reliable your software is, users care about how quickly you can ship the features they request.
Tech stack determines software quality depending on the authors of the software of course.
But certain stacks allow devs to ship faster, fix bugs faster and accommodate user needs.
Look at what 37Signals is able to do with [1] 5 Product Software Engineers. their output is literally 100x of their competitors.
[1]: https://x.com/jorgemanru/status/1889989498986958958
> Look at what 37Signals is able to do with [1] 5 Product Software Engineers. their output is literally 100x of their competitors.
The linked Tweet thread says they have 16 Software Engineers and a separate ops team that he's not counting for some reason.
There are also comments further down that thread about how their "designers" also code, so there is definitely some creative wordplay happening to make the number of programmers sound as small as possible.
Basecamp (37Signals) also made headlines for losing a lot of employees in recent years. They had more engineers in the past when they were building their products.
Basecamp is also over 20 years old and, to be honest, not very feature filled. It's okay-ish if your needs fit within their features, but there's a reason it's not used by a lot of people.
DHH revealed their requests per second rate in a Twitter argument a while ago and it was a surprisingly low number. This was in the context of him claiming that he could host it all on one or two very powerful servers, if I recall correctly.
When discussing all things Basecamp (37Signals) it's really important to remember that their loud internet presence makes them seem like they have a lot more users than they really do. They've also been refining basically the same product for two decades and had larger teams working in the past.
Just joining all the other comments to say there's a split between:
- users don't care about your tech stack
- you shouldn't care about your tech stack
I don't on-paper care what metal my car is going to be made of, I don't know enough information to have an opinion. But I reeaaally hope the person designing it has a lot of thoughts on the subject.
I find it funny that this message has resurfaced on the front page once or twice a year for at least 10 years now. Product quality is often not the main argument advanced when deciding on a tech stack, only indirectly. Barring any special technical requirements, in the beginning what matters is:
- Can we build quickly without making a massive mess?
- Will we find enough of the right people who can and want to work with this stack?
- Will this tech stack continue to serve us in the future?
Imagine it's 2014 and you're deciding between two hot new frameworks, Ember and React; this is not just a question of what is hot or shiny and new.
There's an obvious solution to "language doesn't matter". Let the opinionated people pick the stack. Then you satisfy the needs of the people who care and those who don't care.
The opinionated people disagree.
The opinionated people I disagree with sure like saying "language doesn't matter", as long as it preserves their status quo.
This discussion is not about technology. It's about technical people learning that business, product and users are actually important. The best advice I can give technical people about working at startups is that you should learn everything you can about business. You can do that at a startup much easier than at a big tech company. Spend as much time as you can with your actual users, watching them use your product. It will help you communicate with the rest of the team, prioritize your technical tasks, and help you elevate your impact.
Problem is your hiring manager at a startup will still care whether you're an expert in the stack-du-jour. So technical people aren't incentivised to care about the business.
TFA isn't coherent.
> Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that. They won’t notice those extra 10 milliseconds you saved, nor will their experience magically improve just because you’re using the latest JavaScript framework.
Users care about performance. The per-user-action latency is >10 ms (unless you're in the bootstrap phase).
> What truly makes a difference for users is your attention to the product and their needs.
False dichotomy.
> Every programming language shines in specific contexts. Every framework is born to solve certain problems. But none of these technical decisions, in isolation, will define the success of your product from a user’s perspective.
Yes so evaluate your product's context, choose your tools, frameworks, and languages accordingly.
I worked on a small team with 2 or 3 backend Elixir devs as the sole developer for the JavaScript / TypeScript front end, the React Native app, microservices running in Node, and browser automation. It was easiest for me to write a backend service in JavaScript and expose an interface for them to integrate with rather than wait for them to get around to building it. The services were usually small, under a couple thousand lines of code, and if they wanted to, they could translate the service to Elixir since the business logic was usually hardened and solved. One service might scrape data, store it in S3 buckets, and then process it when requested, storing / caching the results in a Postgres database.
Here is the important part: the automated browser agents I built were core to the company's business model. Even today nobody can accomplish in any other language than JavaScript what I was doing because it requires injecting JavaScript into third party websites with the headless browser. Even if the surface area was small, the company was 100% dependent on JavaScript.
The issue is that they are huge Elixir fanboys. At the Monday morning meeting, G. would start talking about how much JavaScript sucks and how they should start to move all the front end code to LiveView. Every couple weeks .... "we should migrate to LiveView." Dude, I'm sitting right here, I can hear you. Moreover, your job depends on at least one person writing JavaScript, as shitty a language as it might be. I don't think he understood that he was threatening my job. The fanboy Elixir conversations between the 3 of them always made me feel like a second class citizen.
I'm one of those fanboys. I've done the react, angular etc front end thing. LiveView just absolutely smokes SPAs and other JS rats-nests in terms of productivity, performance and deployment simplicity (for certain types of apps). The fact that you don't have to write 6 layers of data layer abstraction alone is worth it.
And don't get me wrong, I even like things like Nuxt and have a few products on Astro (great framework btw). Agree regarding browser automation, not many options there so your gig is safe for now. But do play with LiveView, it's pretty special.
I'm going to agree with you that Elixir / Erlang is the most productive and will back it up with some data; elixir developers are the highest paid because they generate the most value. [0] Nonetheless, LiveView isn't a viable solution for a lot of what I do. Because of that, it is important to have developers who know and understand how to use JavaScript.
[0] https://survey.stackoverflow.co/2024/technology#4-top-paying...
A mixture of contempt and resentment towards JavaScript makes developers worse engineers.
What will happen now you're gone, is, one of them will encounter a trivial problem working in a language they don't understand. They could solve this by consulting some documentation, but they won't do that. Instead they will make a huge fuss and embark on a six month project to rewrite everything in LiveView.
Like all rewrites it will be much harder than they think, come out worse than they hoped and fail to solve any customer problems.
Many comments arguing that the right stack and good "clean" code will then lead to user-appreciated performance improvements.
More often I've seen this used by developers as an excuse to yak-shave "optimizations" that deliver no (or negative) performance improvements. e.g. "Solving imaginary scaling problems ... at scale!"
As a user I actively look for apps in a specific tech stack because I know they will be much leaner and more enjoyable to use
Maybe not, but I do, and I hope anyone else who works in the space does as well. I'm not a big fan of this argument of cutting all "inefficient attention" out of what should be our craft. I want to take pride in my work, even if my users don't share that feeling
Indeed, hence why I always ask back when someone proposes rewrites, what is the business value.
Meaning: the amount of money spent on developer salaries for the rewrite timeframe, how that reflects in what the business is doing, and how the business is going to get that investment back.
> There are no “best” languages or frameworks. There are only technologies designed to solve specific problems, and your job is to pick the ones that fit your use case
What if multiple technologies fit our use case? There will be one that fits the use case “best”.
> There will be one that fits the use case “best”.
You answered your own question: best that fits the use case, still not THE BEST
> There will be one that fits the use case “best”.
And how much time do you want to spend evaluating those technologies? Do a MVP with each?
Or maybe sell to customers / venture capitalists based on the first working MVP and move on?
The user has never been the primary factor in my choice of tech stack. Just like my employer doesn't care what car I drive to get to work. It's mostly about the developers and the best tools available to them at that point in time.
> Use what you enjoy working with.
No! This is great advice if you are working on a personal project but terrible advice in all other scenarios. Use the stack that solves your problem, not the stack you are simply comfortable with.
> Users Don’t Care About Your Tech Stack
Oh, but they really should: the less proprietary it is and the more platforms it supports — the better.
Just ask the Russians porting stuff from something like MS SQL Server or Oracle to Postgres
> There are no “best” languages or frameworks. There are only technologies designed to solve specific problems
Let's not forget that for virtually any random problem, you have plenty of technologies which solve it.
I don't agree with the article's central premise. It assumes that tech stack choices should be driven solely by what the end user cares about.
In reality, selecting a tech stack is about more than the user side of things, it’s a strategic decision where factors like cost efficiency, the ease of hiring new developers, long-term maintainability, and the likelihood that the technology will still be relevant in five years are all critical.
These considerations directly impact how quickly and effectively a team can build and scale a product, even if the end user never sees the tech stack at work.
I am working as a contractor on a product where my principal incentives revolve around meeting customer-facing milestones. I am not being paid a dime for time spent on fancy technology experiments.
It has been quite revealing working with others on the same project who are regular salaried employees. The degree to which we ensure technology is well-aligned with actual customer expectations seems to depend largely on how we are compensated.
I think the folks who lost access to funds due to an implementation choice of Synapse would disagree with this statement.
They don't care about most implementation details, but choice of 3rd parties to manage functionality and data does matter to your users when it breaks. And it will.
Sure, the language and framework you choose are less important to them, per parent post. But it's a slippery slope; don't assume that they are agnostic to _all_ your technical choices.
More importantly for open source projects with a lot of users, don't advertise a bunch of techno mumbo-jumbo on your website. That either means nothing to users, or may even put them off. Sure, have a link for developers that goes into all that stuff so they can decide to contribute or build from source or whatever. Just keep it off the main page - it's meaningless to the general public.
This feels like one of those comments you hear in a tech stack meeting—something that doesn’t change the discussion but fills space, like a “nothing from my end.”
“So, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
'Hey guys, users don’t care about the tech stack. We need to focus on their needs.'
“Uh… right. So, users like speed, yeah? Anyway, <language x> gives us 1% faster load times, but <language y> offers more customizability—”
Yes, it is right that they do not care. This is the reason Ruby on Rails taught us how to go fast in web development.
The problem is that there is no good rule to choose the RIGHT stack, and that is the challenge!
For instance, Joel Spolsky's team chose ASP.NET for Stack Overflow simply because they knew it better than, say, PHP or Java Struts 1.x.
> Users don't care about your tech stack
Just give them the fucking .exe
https://github.com/sherlock-project/sherlock/issues/2019
Throwaway article used to get ads.
Nobody is making the argument that users care about your tech stack. I've literally never heard a dev justify using a library because "users care about it". Nobody.
Spoiler alert: the real answer is, like almost everything else in this industry (and most others), "it depends."
This makes me think about an interview Lex did with Pieter Levels. It is amazing how fast he gets something to market with just PHP.
> They won’t notice those extra 10 milliseconds you saved
Depends what you're doing. In my case I'm saving microseconds on the step time of an LLM used by hundreds of millions of people.
Beauty of scale. Saving ten milliseconds a hundred times is just a second. But do it a billion times and you've shaved off ~4 months.
If you work at Google or whatever else is popular or monopolistic this week.
In most real jobs those ten milliseconds will add up to what, 5 seconds to a minute?
But your next interviewer will drill you for hours about it
If you make a game users definitely care if you are 10 ms slower per frame.
Users don't care about tech stacks. But they care about their experience and tech stacks can definitely influence that.
How could they? They don't have the required information.
Why should they? That's pretty much our responsibility.
users don't care about semi ugly UI either, as long as it's fast and legible.
They do not care about your stack, but do care that the stack works. Use what you're familiar with, sure, but if that does not produce a reliable system, your user will not use it.
At least the article has the length which the topic deserves.
TLDR: Dev: "It's written in Rust", User: "What?"
Unless, of course, the user of your code is a financial institution, or a carmaker, building planes, medical devices, ...
Sometimes I have to remind myself of this. Take for example Bring a Trailer. It is a wordpress site. I know you're rolling your eyes or groaning at the mention of wordpress. It works. It is hugely successful in the niche it is in.
This is dumb. Of course they don't care about the tech stack directly but they obviously care about the things that it affects - performance, reliability, features etc.
To top it off, it also includes a classic "none of them are perfect so they are all the same and it doesn't matter which you choose". I recently learnt the name for this: the fallacy of grey.
https://www.readthesequences.com/The-Fallacy-Of-Gray
Don't fall for it like this guy has.
but but but what about "made in rust(tm)"
You can always oxidize later.
Then users can write their own software.
If I even have to think about installing node.js or dealing with some Python rat's nest, I absolutely care.
People are using machines dozens of times more powerful than machines from 15 years ago, but do they do things that are materially different to what they did before? Not really.
They absolutely do care, even if they cannot articulate what's wrong.
> I still often find myself in discussions where the main focus is the choice of technologies. Questions like “Is this language better than that one?” or “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.
When developers are having those discussions, are they ever doing so in relation to some hypothetical user caring? This feels like a giant misdirection strawman.
When I discuss technology and framework choices with other developers, the context is the experience for developers. And of course business considerations come into play as well: Can we find lots of talent experienced in this set of technologies? Is it going to efficiently scale in a cost effective manner? Are members of the team going to feel rewarded mastering this stack, gaining marketable skills? And so on.
Stacking tech against users you don’t care about is not nice.
But investors do, and most startup founders are in a bigger need of investors than users
Sure if you want to use Fortran to build your app that's probably a no-go. But does it really matter if it's Go, JS, Java, Ruby, Python or PHP?
There’s a lot of vacancies of technical co-founders with a preference of stack, which usually comes from some advisor or investor. A pretty dumb filter, given that it’s often a Node/React combo. It is understandable where it comes from, but still… dumb.
Sometimes they do. Examples:
- Electron vs native (yes, electron/chromium bloat is a popular discussion point even amongst non-engineers)
- Mobile vs web app
- When users comment on how “clunky” the UI feels, they probably mean that a 5 year old Bootstrap implementation should be replaced with Tailwind (/s)
- When users fall in love with linear or figma, they are falling in love with the sync engine and multiplayer tech stack
Even if users don’t have the words to describe the stack, they do care when their needs overlap with the characteristics of the stack
Lately, I've been thinking that LLMs will lift programming to another level anyway: the level of specification in natural language with some formal descriptions mixed in. LLMs will take care of transforming this into actual code. So not only will users not care about programming, but neither will the developers. Switching the tech stack might become a matter of minutes.
How will that work out?
If it simply generates code from natural language then I am still fundamentally working with code. Aider, as an example, is useful for this, but for anything that isn't a common function/component/class it falls apart, even with flagship models.
If I actually put my "natural language code" under git then it'll lack specificity at compile time likely leading to large inconsistencies between versions. This is horrible user experience - like the random changes Excel makes every few years, but every week instead.
And everyone that has migrated a somewhat large database knows it isn't doable within minutes.
I don't think one would put only the specification in Git. LLMs are not a reliable compiler.
Actual code is still the important part of a business. However, how this code is developed will drastically change (some people actually work already with Cursor etc.). Imagine: If you want a new feature, you update the spec., ask an LLM for the code and some tests, test the code personally and ship it.
I guess no one would hand over the control of committing and deployment to an AI. But for coding yes.