There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high-quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean that never happens; sometimes people get excited and spend for short bursts, and young companies often try to make high-quality stuff, but eventually there is an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
This is also the exact reason why all the bright-eyed pieces claiming that some technology would increase workers' productivity and therefore allow more leisure time for the worker (20-hour workweek etc.) are either hopelessly naive or pure propaganda.
Increased productivity means that the company has a new option to either reduce costs or increase output at no additional cost, one of which it has to do to stay ahead in the rat-race of competitors. Investing the added productivity into employee leisure time would be in the best case foolish and in the worst case suicidal.
You're on the right track, but missing an important aspect.
In most cases the company making the inferior product didn't spend less. But they did spend differently. As in, they spent a lot on marketing.
You were focused on quality, and hoped for viral word of mouth marketing. Your competitors spent the same as you, but half their budget went to marketing. Since people buy what they know, they won.
Back in the day MS made Windows 95. IBM made OS/2. MS spent a billion dollars on marketing Windows 95. That's a billion back when a billion was a lot. Just for the launch.
Techies think that Quality leads to sales. It does not. Marketing leads to sales. There literally is no secret to business success other than internalizing that fact.
> What I realized is that lower costs, and therefore lower quality,
This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.
I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.
For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking about programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, e.g. why can't I do:
    let usEmployees = from x in Employees where x.Country == "US";

    func byFemale(Query<Employees> q) =>
        from x in q where x.Sex == "Female";

    let femaleUsEmployees = byFemale(usEmployees);
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
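For contrast, here is roughly how far that compositional style stretches with today's tools; a minimal sketch using Rust iterators, with the Employee struct and field names invented for illustration. The queries compose lazily, but unlike the hypothetical language above, the concrete data structure and iteration strategy are still chosen by the programmer rather than the runtime.

    struct Employee {
        country: String,
        sex: String,
    }

    // Analogue of `byFemale`: takes any employee query and narrows it further.
    fn by_female<'a, I>(q: I) -> impl Iterator<Item = &'a Employee>
    where
        I: Iterator<Item = &'a Employee>,
    {
        q.filter(|e| e.sex == "Female")
    }

    fn main() {
        let employees = vec![
            Employee { country: "US".into(), sex: "Female".into() },
            Employee { country: "DE".into(), sex: "Male".into() },
        ];

        // A lazy description of the query, not a materialized collection.
        let us_employees = employees.iter().filter(|e| e.country == "US");
        let female_us_employees = by_female(us_employees);

        println!("{}", female_us_employees.count());
    }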
You must have read that the Market for Lemons is a type of market failure or collapse. Market failure (in macroeconomics) does not yet mean collapse. It describes a failure to allocate resources in the market such that the overall welfare of the market participants decreases. With this decrease may come a reduction in trade volume. When the trade volume decreases significantly, we call it a market collapse. Usually, some segment of the market that existed ceases to exist (example in a moment).
There is a demand for inferior goods and services, and a demand for superior goods. The demand for superior goods generally increases as the buyer becomes wealthier, and the demand for inferior goods generally increases as the buyer becomes less wealthy.
In this case, wealthier buyers cannot buy the superior software previously available, even if they create demand for it. Therefore, we would say a market failure has developed, as the market could not organize resources to meet this demand. Then the volume of high-quality software sales drops dramatically. That market segment collapses, so you are describing a market collapse.
> There’s probably another name for this
You might be thinking about "regression to normal profits" or a "race to the bottom." The Market for Lemons is an adjacent scenario to both, where a collapse develops due to asymmetric information in the seller's favor. One note about macroecon — there's never just one market force or phenomenon affecting any real situation. It's always a mix of some established and obscure theories.
My wife has a perfume business. She makes really high quality extrait de parfums [1] with expensive materials and great formulations. But the market is flooded with eau de parfums -- which are far more diluted than an extrait -- using cheaper ingredients, selling for about the same price. We've had so many conversations about whether she should dilute everything like the other companies do, but you lose so much of the beauty of the fragrance when you do that. She really doesn't want to go the route of mediocrity, but that does seem to be what the market demands.
I had the same realization but with car mechanics. If you drive a beater you want to spend the least possible on maintenance. On the other hand, if the car mechanic cares about cars and their craftsmanship they want to get everything to tip-top shape at high cost. Some other mechanics are trying to scam you and get the most amount of money for the least amount of work. And most people looking for car mechanics want to pay the least amount possible, and don't quite understand if a repair should be expensive or not. This creates a downward pressure on price at the expense of quality and penalizes the mechanics that care about quality.
Exactly. People on HN get angry and confused about low software quality, compute wastefulness, etc, but what's happening is not a moral crisis: the market has simply chosen the trade-off it wants, and industry has adapted to it
If you want to be rewarded for working on quality, you have to find a niche where quality has high economic value. If you want to put effort into quality regardless, that's a very noble thing and many of us take pleasure in doing so, but we shouldn't act surprised when we aren't economically rewarded for it
I actually disagree. I think that people will pay more for higher quality software, but only if they know the software is higher quality.
It's great to say your software is higher quality, but the question I have is whether or not it is higher quality with the same or similar features, and second, whether the better quality is known to the customers.
It's the same way that I will pay hundreds of dollars for Jetbrains tools each year even though ostensibly VS Code has most of the same features, but the quality of the implementation greatly differs.
If a new company made their IDE better than jetbrains though, it'd be hard to get me to fork over money. Free trials and so on can help spread awareness.
I used to write signal processing software for land mobile radios. Those radios were used by emergency services. For the most part, our software was high quality in that it gave good quality audio and rarely had problems. If it did have a problem, it would recover quickly enough that the customer would not notice.
Our radios got a name for reliability: such as feedback from customers about skyscrapers in New York being on fire and the radios not skipping a beat during the emergency response. Word of mouth traveled in a relatively close knit community and the "quality" did win customers.
Oddly we didn't have explicit procedures to maintain that quality. The key in my mind was that we had enough time in the day to address the root cause of bugs, it was a small enough team that we knew what was going into the repository and its effect on the system, and we developed incrementally. A few years later, we got spread thinner onto more products and it didn't work so well.
I kind of see this in action when I'm comparing products on Amazon. When comparing two products on Amazon that are substantially the same, the cheaper one will have way more reviews. I guess this implies that it has captured the majority of the market.
There's an analogy with evolution. In that case, what survives might be the fittest, but it's not the fittest possible. It's the least fit that can possibly win. Anything else represents an energy expenditure that something else can avoid, and thus outcompete.
I had the exact same experience trying to build a startup. The thing that always puzzled me was Apple: they've grown into one of the most profitable companies in the world on the basis of high-quality stuff. How did they pull it off?
I'm thinking out loud, but it seems like there are some other factors at play. There's a lower threshold of quality that needs to happen (the thing needs to work), so there are at least two big factors, functionality and cost. In the extreme, all other things being equal, if two products were presented at the exact same cost but one was of superior quality, the expectation is that the better-quality item would win.
There's always the "good, fast, cheap" triangle but with Moore's law (or Wright's law), cheap things get cheaper, things iterate faster and good things get better. Maybe there's an argument that when something provides an order of magnitude quality difference at nominal price difference, that's when disruption happens?
So, if the environment remains stable, then mediocrity wins as the price of superior quality can't justify the added expense. If the environment is growing (exponentially) then, at any given snapshot, mediocrity might win but will eventually be usurped by quality when the price to produce it drops below a critical threshold.
You're laying it out like it's universal, but in my experience there are products where people will seek out the cheapest thing that's good enough, and there are also products where people know they want quality and are willing to pay more.
Take cars, for instance: if everyone wanted only the cheapest one, then Mercedes or even Volkswagen would be out of business.
Same for professional tools and products; you save more by buying quality products.
And then, even in computers and technology: Apple iPhones aren't cheap at all, MacBooks come with soldered RAM and storage at a high price, yet a big share of people are willing to buy them instead of the usual bloated Windows spyware laptop that runs well enough and is cheap.
If you’re trying to sell a product to the masses, you either need to make it cheap or a fad.
You cannot make a cheap product with high margins and get away with it. Motorola tried with the RAZR. They had about five or six good quarters from it and then within three years of initial launch were hemorrhaging over a billion dollars a year.
You have to make premium products if you want high margins. And premium means you’re going for 10% market share, not dominant market share. And if you guess wrong and a recession happens, you might be fucked.
Yes, I was in this place too when I had a consulting company. We bid on projects with quotes for high quality work and guaranteed delivery within the agreed timeframe. More often than not we got rejected in favor of some students who submitted a quote for 4x less. I sometimes asked those clients how the project went, and they'd say, well, those guys missed the deadline and asked for more money several times
> We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
These economic forces exist in math too. Almost every mathematician publishes informal proofs. These contain just enough discussion in English (or other human language) to convince a few other mathematicians in the same field that their idea is valid. But it is possible to make errors. There are other techniques: formal step-by-step proof presentations (e.g. by Leslie Lamport) or computer-checked proofs that would be more reliable. But almost no mathematician uses these.
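For readers who have never seen one, a computer-checked proof can be as small as this Lean 4 snippet; the point is that the checker, not a human referee, verifies that the supplied term really proves the stated claim.

    -- Lean only accepts this if `Nat.add_comm a b` has exactly the
    -- type `a + b = b + a`; otherwise the file is rejected.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b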
I'm a layman, but in my opinion building quality software can't really be a differentiator because anyone can build quality software given enough time and resources. You could take two car mechanics and with enough training, time, assistance from professional dev consultants, testing, rework, so and so forth, make a quality piece of software. But you'd have spent $6 million to make a quality alarm clock app.
A differentiator would be having the ability to have a higher than average quality per cost. Then maybe you're onto something.
There is an exception: luxury goods. Some are expensive, but people don't mind them being overpriced because e.g. they are social status symbols. Is there such a thing like "luxury software"? I think Apple sort of has this reputation.
I see another dynamic: "customer value" features get prioritized and eventually the product reaches a point of crushing tech debt. It results in "customer value" feature delivery velocity grinding to a halt. Obviously this is subject to other forces, but it is not infrequent for someone to come in and disrupt the incumbents at this point.
Many high-quality open-source projects suggest this is a false premise, and as a developer who writes high-quality, reliable software at much, much lower rates than most, I'd say cost should not be seen as a reliable indicator of quality.
Capitalism? Marx's core belief was that capitalists would always lean towards paying the absolute lowest price they could for labor and raw materials that would allow them to stay in production. If there's more profit in manufacturing mediocrity at scale than quality at a smaller scale, mediocrity it is.
Not all commerce is capitalistic. If a commercial venture is dedicated to quality, or maximizing value for its customers, or the wellbeing of its employees, then it's not solely driven by the goal of maximizing capital. This is easier for a private than a public company, in part because of a misplaced belief that maximizing shareholder return is the only legally valid business objective. I think it's the corporate equivalent of diabetes.
But do you think you could have started with a bug-laden mess? Or is it just the natural progression down the quality and price curve that comes with scale?
> People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality).
Sure, but what about the people who consider quality as part of their product evaluation? All else being equal everyone wants it cheaper, but all else isn't equal. When I was looking at smart lighting, I spent 3x as much on Philips Hue as I could have on Ikea bulbs: bought one Ikea bulb, tried it on next to a Hue one, and instantly returned the Ikea one. It was just that much worse. I'd happily pay similar premiums for most consumer products.
But companies keep enshittifying their products. I'm not going to pay significantly more for a product which is going to break after 16 months instead of 12 months. I'm not going to pay extra for some crappy AI cloud blockchain "feature". I'm not going to pay extra to have a gaudy "luxury" brand logo stapled all over it.
Companies are only interested in short-term shareholder value these days, which means selling absolute crap at premium prices. I want to pay extra to get a decent product, but more and more it turns out that I can't.
>There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
It's the same concept as the age old "only an engineer can build a bridge that just barely doesn't fall down" circle jerk but for a more diverse set of goods than just bridges.
Maybe you could compete by developing new and better products? Ford isn't selling the same car with lower and lower costs every year.
It's really hard to reconcile your comment with Silicon Valley, which was built by often expensive innovation, not by cutting costs. Were Apple, Meta, Alphabet, Microsoft successful because they cut costs? The AI companies?
I’d argue this exists for public companies, but there are many smaller, private businesses where there’s no doctrine of maximising shareholder value
These companies often place a greater emphasis on reputation and legacy
They're few and far between; Robert McNeel & Associates (American) is one that comes to mind (Rhino3D), as is the Dutch company Victron (power hardware)
The former especially is not known for maximising their margins, they don’t even offer a subscription-model to their customers
Victron is an interesting case, where they deliberately offer few products, and instead of releasing more, they heavily optimise and update their existing models over many years in everything from documentation to firmware and even new features. They’re a hardware company mostly so very little revenue is from subscriptions
I'm proud of you; it often takes people multiple failures before they accept that their worldview (that regulations aren't necessary and the tragedy of the commons is a myth) is wrong.
The problem with your thesis is that software isn't a physical good, so quality isn't tangible. If software does the advertised thing, it's good software. That's it.
With physical items, quality prevents deterioration over time. Or at least slows it. Improves function. That sort of thing.
Software just works or doesn't work. So you want to make something that works and iterate as quickly as possible. And yes, cost to produce it matters so you can actually bring it to market.
> the market sells as if all goods were high-quality
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
If you're being honest, compare Slack and Teams not with weechat, but with Telegram. Its desktop client (along with other clients) is written by an actually competent team that cares about performance, and it shows. They have enough money to produce a native client written in C++ that has fantastic performance and is high quality overall, but these software behemoths with budgets higher than most countries' GDP somehow never do.
In an efficient market people buy things based on a value which in the case of software, is derived from overall fitness for use. "Quality" as a raw performance metric or a bug count metric aren't relevant; the criteria is "how much money does using this product make or save me versus its competition or not using it."
In some cases there's a Market of Lemons / contract / scam / lack of market transparency issue (ie - companies selling defective software with arbitrary lock-ins and long contracts), but overall the slower or more "defective" software is often more fit for purpose than that provided by the competition. If you _must_ have a feature that only a slow piece of software provides, it's still a better deal to acquire that software than to not. Likewise, if software is "janky" and contains minor bugs that don't affect the end results it provides, it will outcompete an alternative which can't produce the same results.
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
I have worked for large corporations that have foisted awful HR, expense reporting, time tracking and insurance "portals" that were so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team tried to tell a customer that we had their project all done but it was full of as many bugs and UI nightmares as these back office platforms, I would be chastised, demoted and/or fired.
I used to work at a large company that had a lousy internal system for doing performance evals and self-reviews. The UI was shitty, it was unreliable, it was hard to use, it had security problems, it would go down on the eve of reviews being due, etc. This all stressed me out until someone in management observed, rather pointedly, that the reason for existence of this system is that we are contractually required to have such a system because the rules for government contracts mandate it, and that there was a possibility (and he emphasized the word possibility knowingly) that the managers actually are considering their personal knowledge of your performance rather than this performative documentation when they consider your promotions and comp adjustments. It was like being hit with a zen lightning bolt: this software meets its requirements exactly, and I can stop worrying about it. From that day on I only did the most cursory self-evals and minimal accomplishments, and my career progressed just fine.
You might not think about this as “quality” but it does have the quality of meeting the perverse functional requirements of the situation.
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
> the market buys bug-filled, inefficient software about as well as it buys pristine software
In fact, the realization is that the market buys support.
And that includes Google and other companies that lack much human support.
This is the key.
Support is manifested in many ways:
* There is information about it (docs, videos, blogs, ...)
* There are people who help me ('look ma, this is how you use google')
* There is support for the things I use ('OS, Browser, Formats, ...')
* And for my way of working ('Excel lets me do any app there...')
* And finally, actual people (that is the #1 thing that keeps alive even the worst ERP on earth). This also includes marketing, sales people, etc. These are signals of having support, even if it is not exactly the best. If I go to an enterprise and they only have engineers, that will be a bad signal, because, well, developers tend to be terrible at other stuff, and the other stuff is the support that matters.
If you have a good product but there is no support, it's dead.
And if you wanna fight a worse product, it's smart to reduce the need for support ('bugs, performance issues, platforms, ...') for YOUR TEAM, because you wanna reduce YOUR COSTS, but you NEED to add support in other dimensions!
The easiest for a small team is to just add humans (that is the MOST scarce source of support). After that, you need to get creative.
(Also, this means you need to communicate your advantages well, because there are people who value some kinds of support more than others; 'having the code vs. proprietary' is a good example. A lot of people prefer the proprietary thing with support over having the code, I mean.)
So you're telling me that if companies want to optimize profitability, they’d release inefficient, bug-ridden software with bad UI—forcing customers to pay for support, extended help, and bug fixes?
Suddenly, everything in this crazy world is starting to make sense.
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications not a single one of them can be swapped out with another just because it were more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
I’m surprised in your list because it contains 3 apps that I’ve replaced specifically due to performance issues (docker, iterm and notes). I don’t consider myself particularly performance sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
> But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies
I'd take this one step further, 99% of the software written isn't being done with performance in mind. Even here in HN, you'll find people that advocate for poor performance because even considering performance has become a faux pas.
That means your L4/5 and beyond engineers are fairly unlikely to have any sort of sense when it comes to performance. Businesses do not prioritize efficient software until their current hardware is incapable of running their current software (and even then, they'll prefer to buy more hardware if possible.)
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
> The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
The used car market is market for lemons because it is difficult to distinguish between a car that has been well maintained and a car close to breaking down. However, the new car market is decidedly not a market for lemons because every car sold is tested by the state, and reviewed by magazines and such. You know exactly what you are buying.
Software is always sold new. Software can increase in quality the same way cars have generally increased in quality over the decades. Creating standards that software must meet before it can be sold. Recalling software that has serious bugs in it. Punishing companies that knowingly sell shoddy software. This is not some deep insight. This is how every other industry operates.
A hallmark of well-designed and well-written software is that it is easy to replace, where bug-ridden spaghetti-bowl monoliths stick around forever because nobody wants to touch them.
Just through pure Darwinism, bad software dominates the population :)
Right now, the market buys bug-filled, inefficient software because you can always count on being able to buy hardware that is good enough to run it. The software expands to fill the processing specs of the machine it is running on - "What Andy giveth, Bill taketh away" [1]. So there is no economic incentive to produce leaner, higher-quality software that does only the core functionality and does it well.
But imagine a world where you suddenly cannot get top-of-the-line chips anymore. Maybe China invaded Taiwan and blockaded the whole island, or WW3 broke out and all the modern fabs were bombed, or the POTUS instituted 500% tariffs on all electronics. Regardless of cause, you're now reduced to salvaging microchips from key fobs and toaster ovens and pregnancy tests [2] to fulfill your computing needs. In this world, there is quite a lot of economic value to being able to write tight, resource-constrained software, because the bloated stuff simply won't run anymore.
Carmack is saying that in this scenario, we would be fine (after an initial period of adjustment), because there is enough headroom in optimizing our existing software that we can make things work on orders-of-magnitude less powerful chips.
I have that washing machine btw. I saw the AI branding and had a chuckle. I bought it anyway because it was reasonably priced (the washer was $750 at Costco).
I worked in a previous job on a product with 'AI' in the name. It was a source of amusement to many of us working there that the product didn't, and still doesn't use any AI.
A big part of why I like shopping at Costco is that they generally don't sell garbage. Their filter doesn't always match mine, but they do have a meaningful filter.
You must be referring only to security bugs because you would quickly toss Excel or Photoshop if it were filled with performance and other bugs. Security bugs are a different story because users don't feel the consequences of the problem until they get hacked and even then, they don't know how they got hacked. There are no incentives for developers to actually care.
Developers do care about performance up to a point. If the software looks to be running fine on a majority of computers why continue to spend resources to optimize further? Principle of diminishing returns.
> This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI.
The user cannot, but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
> The user cannot, but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
In the end it all hinges on the user's ability to assess the quality of the product. Otherwise, the user cannot judge whether an assistant recommends quality products, and the assistant has an incentive to suggest poorly (e.g. selling out to product producers).
These days I feel like I'd be willing to pay more for a product that explicitly disavowed AI. I mean, that's vulnerable to the same kind of marketing shenanigans, but still. :-)
That's generally what I think as well. Yes, the world could run on older hardware, but you keep making faster and adding more CPU's so, why bother making the code more efficient?
The argument that a buyer can't verify quality is simply false, especially if the cost of something is large. And for lots of things, such verification isn't that hard.
The Market for Lemons story is about a complex good that most people don't understand and that is of relatively low value. But even that paper misses many real-world solutions people have found for this.
> The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI.
Why then did people pay for ChatGPT when Google claimed it had something better? Because people quickly figured out that Google's solution wasn't better.
It's easy to share results, and it's easy to look up benchmarks.
The claim that anything calling itself AI will automatically be able to demand absurd prices is simply not true. At best, people just slap AI on everything, and it's as if everybody stands up in a theater: nobody is better off.
1. Sometimes speed = money. Being the first to market, meeting VC-set milestones for additional funding, and not running out of runway are all things cheaper than the alternatives. Software maintenance costs later don't come close to opportunity costs if a company/project fails.
2. Most of the software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like the housing stock is in bad shape in some countries (like the UK) because the average tenure is less than 10 years. There isn't a person in the property's owner history to whom an investment in long-term property maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
What does "make in the long-term" even mean? How do you make a sandwich in the long-term?
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
In my experiences, companies can afford to care about good software if they have extreme demands (e.g. military, finance) or amortize over very long timeframes (e.g. privately owned). It's rare for consumer products to fall into either of these categories.
> And one of them is the cheapest software you could make.
I actually disagree a bit. Sloppy software is cheap when you're a startup but it's quite expensive when you're big. You have all the costs of transmission and instances you need to account for. If airlines are going to cut an olive from the salad why wouldn't we pay programmers to optimize? This stuff compounds too.
We currently operate in a world where new features are pushed that don't interest consumers. While people can't tell the difference between slop and not at purchase, they sure can between updates. People constantly complain about stuff getting slower. But they also do get excited when things get faster.
Imo it's in part because we turned engineers into MBAs. Whenever I ask why we can't solve a problem, some engineer always responds "well, it's not that valuable". The bug fix is valuable to the user, but they always clarify they mean money. Let's be honest, all those values are made up. It's not the job of the engineer to figure out how much profit a bug fix will result in; it's their job to fix bugs.
Famously Coke doesn't advertise to make you aware of Coke. They advertise to associate good feelings. Similarly, car companies advertise to get their cars associated with class. Which is why sometimes they will advertise to people who have no chance of buying the car. What I'm saying is that brand matters. The problem right now is that all major brands have decided brand doesn't matter or brand decisions are always set in stone. Maybe they're right, how often do people switch? But maybe they're wrong, switching seems to just have the same features but a new UI that you got to learn from scratch (yes, even Apple devices aren't intuitive)
The thing is, to continue your food analogy: countries have set down legal rules preventing the sale of food that actively harms the consumer (expired, known poisonous, addition of addictive substances such as opiates, etc.).
In software, the regulations of the pre-GDPR era could be boiled down to 'lol lmao', and even now I see GDPR violations daily.
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. slack, jira), it's not really a lack of the industry's engineering capability to speed things up that was the core problem, it is just an instance the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it and dog-slow is just one of many the dimensions in which they're terrible.
The major slowdown of modern applications is network calls. Spend 50-500 ms a pop for a few kilobytes of data. Many modern applications will spin up half a dozen blocking network calls casually.
Even worse, our bottom-most abstraction layers pretend that we are running on a single-core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.
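A minimal sketch of the arithmetic behind the first point, in Rust, with the calls simulated by sleeps: six casual blocking calls at ~200 ms each cost over a second in sequence, versus roughly the latency of one call if issued concurrently.

    use std::thread;
    use std::time::{Duration, Instant};

    // Stand-in for a real network call that takes ~200 ms.
    fn fake_network_call() {
        thread::sleep(Duration::from_millis(200));
    }

    fn main() {
        let t = Instant::now();
        for _ in 0..6 {
            fake_network_call(); // one after another: ~1200 ms total
        }
        println!("sequential: {:?}", t.elapsed());

        let t = Instant::now();
        let handles: Vec<_> = (0..6).map(|_| thread::spawn(fake_network_call)).collect();
        for h in handles {
            h.join().unwrap(); // all calls overlap: ~200 ms total
        }
        println!("concurrent: {:?}", t.elapsed());
    }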
I made a vendor run their buggy and slow software on a Sparc 20 against their strenuous complaints to just let them have an Ultra, but when they eventually did optimize their software to run efficiently (on the 20) it helped set the company up for success in the wider market. Optimization should be treated as competitive advantage, perhaps in some cases one of the most important.
> Optimization should be treated as competitive advantage
That's just so true!
The right optimizations at the right moment can have a huge boost for both the product and the company.
However the old tenet regarding premature optimization has been cargo-culted and expanded to encompass any optimization, and the higher-ups would rather have ICs churn out new features instead, shifting the cost of the bloat to the customer by insisting on more and bigger machines.
It's good for the economy, surely, but it's bad for both the users and the eventual maintainers of the piles of crap that end up getting produced.
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either - normally, if you're doing some loops over a large array of pixels, say, to perform some operation to them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loops, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
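As a concrete (and hedged) sketch of that in Rust: taking the sub-slice up front performs the one range check, and the per-pixel loop that follows has nothing left to check.

    // Halve the brightness of a region of a grayscale image buffer.
    fn darken(pixels: &mut [u8], start: usize, end: usize) {
        let region = &mut pixels[start..end]; // the single range check happens here
        for px in region.iter_mut() {
            *px /= 2; // no bounds check inside the hot loop
        }
    }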
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
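To make the two replies above concrete, a small Rust sketch: in the indexed form the compiler can usually prove the index is in range and hoist or drop the check, and in the iterator form there is no index to check at all.

    fn sum_indexed(data: &[u32]) -> u32 {
        let mut total = 0;
        for i in 0..data.len() {
            // i < data.len() is known, so the per-access check is
            // typically hoisted or eliminated by the optimizer.
            total += data[i];
        }
        total
    }

    fn sum_iter(data: &[u32]) -> u32 {
        data.iter().sum() // no indexing at all, so nothing to bounds-check
    }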
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
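A back-of-the-envelope version of that estimate; every number below is an assumption picked to be roughly representative, not a measurement.

    fn main() {
        let mips_1980s: f64 = 1.0; // ~1 MIPS class machine from the early 80s
        let clock_mhz = 4000.0;    // ~4 GHz modern clock
        let ipc = 4.0;             // instructions retired per cycle per core
        let simd_lanes = 8.0;      // 8 x 32-bit lanes (AVX2-class)
        let cores = 16.0;

        let modern_mips = clock_mhz * ipc * simd_lanes * cores;
        println!("rough speedup: {:.0}x", modern_mips / mips_1980s); // ~2,000,000x
    }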
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that are constantly allocating memory for every small operation and pointer-chasing every variable because the type is dynamic are part of the problem; then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
The cost of bounds checking, by itself, is low. The cost of using safe languages generally can be vastly higher.
Garbage collected languages often consume several times as much memory. They aren't immediately freeing memory no longer being used, and generally require more allocations in the first place.
I recently watched a video that can be summarised quite simply as: "Computers today aren't that much faster than the computers of 20 years ago, unless you specifically code for them".
It's a little bit ham-fisted, as the author also disregarded decades of compiler optimisations, and it's not apples to apples as he's comparing desktop-class hardware with what is essentially laptop hardware; but it's also interesting to see that a lot of the performance gains really weren't that great actually: he observes a doubling of performance in 15 years! Truth be told most people use laptops now, and truth be told 20 years ago most people used desktops, so it's not totally unfair.
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However these days the majority of the bad attacks are social and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well, there is every reason to believe that; there is unlikely to be evidence going back that far).
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus risk of negative consequences by that same factor of X.
Security through compartmentalization approach actually works. Compare the number of CVEs of your favorite OS with those for Qubes OS: https://www.qubes-os.org/security/qsb/
Don't forget the law of large numbers. A 5% performance hit on one system is one thing; 5% across almost all of the current computing landscape is still a pretty huge value.
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
The first reply is essentially right. This isn't what happened at all, just because C is still prevalent. All the inefficiency is in everything else down the stack, not in C.
I think a 1 GHz CPU from around 2001 should be a performance benchmark that every piece of basic, non-high-performance software should execute acceptably on.
This has kind of been a disappointment to me with AI when I've tried it. LLMs should be able to port things. They should be able to rewrite things with the same interface. They should be able to translate from inefficient languages to more efficient ones.
They should even be able to optimize existing code bases automatically, or at least diagnose or point out poor algorithms, cache optimization, etc.
Heck, I remember PowerBuilder in the mid 90s running pretty well on 200 MHz CPUs, and that was basically interpreted stuff. It's just amazing how slow stuff is. Do rounded corners and CSS really consume that much CPU power?
My limited experience was trying to take the Unix sed source code and have AI port it to a JVM language, and it could do the most basic operations but utterly failed at even the intermediate sed capabilities. And then optimize? Nope.
Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.
I've pondered this myself without digging into the specifics. The phrase "sufficiently smart compiler" sticks in my head.
Shower thoughts include: are there languages whose features, beyond their popularity and representation in training corpora, help us get from natural language to efficient code?
I was recently playing around with a digital audio workstation (DAW) software package called Reaper that honestly surprised me with its feature set, portability (Linux, macOS, Windows), snappiness etc. The whole download was ~12 megabytes. It felt like a total throwback to the 1990s in a good way.
It feels like AI should be able to help us get back to small snappy software, and in so doing maybe "pay its own way" with respect to CPU and energy requirements. Spending compute cycles to optimize software deployed millions of times seems intuitively like a good bargain.
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
>Personally I think the 1000Xers kinda ruined things for the rest of us.
Reminds me of when NodeJS came out and bridged client- and server-side coding. And apparently their repos can be a bit of a security nightmare nowadays, so the minimalist languages with a limited codebase do have their pros.
That's about a 168x difference. That was from before Moore's law started petering out.
For only a 5x speed difference you need to go back to the 4th or 5th generation Intel Core processors from about 10 years ago.
It is important to note that the speed figure above is computed by adding all of the cores together and that single core performance has not increased nearly as much. A lot of that difference is simply from comparing a single core processor with one that has 20 cores. Single core performance is only about 8 times faster than that ancient Pentium 4.
As the other guy said, top of the line CPUs today are roughly ~100x faster than 20 years ago. A single core is ~10x faster (in terms of instructions per second) and we have ~10x the number of cores.
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.
1000x referred to the hardware capability, and that's not a rarity; it is here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
So I've worked for Google (and Facebook) and it really drives the point home of just how cheap hardware is and how not worth it optimizing code is most of the time.
More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.
Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.
Anyway, all of these things can be turned into software engineer hours, often called "mili-SWEs" meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people or hire fewer people but get more hardware within their current budgets.
I don't remember the exact number of CPU cores that amounted to a single SWE, but IIRC it was in the thousands. So if you spend 1 SWE year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss.
Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.
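To make that trade-off concrete, here is a rough back-of-the-envelope sketch of the calculus described above. The conversion rate and the savings figure are made-up placeholders, not Google's actual numbers:

```python
# Back-of-the-envelope sketch of the optimization-vs-hardware trade-off above.
# All numbers are hypothetical placeholders, not real internal figures.

CORES_PER_SWE_YEAR = 5000      # assumed conversion: 1 SWE-year ~ 5000 CPU cores
swe_years_spent = 1.0          # engineering time invested in the optimization
cores_saved = 800              # steady-state cores the optimization frees up

# Express the hardware savings in the same currency as the salary cost.
savings_in_swe_years = cores_saved / CORES_PER_SWE_YEAR

if savings_in_swe_years > swe_years_spent:
    print(f"Worth it: saves {savings_in_swe_years:.2f} SWE-years for {swe_years_spent} spent")
else:
    print(f"Net loss: saves only {savings_in_swe_years:.2f} SWE-years for {swe_years_spent} spent")
```

With these made-up numbers the optimization loses money, which is exactly the point: unless the project is huge, the cores saved rarely add up to a SWE-year.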
The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to, or still do, data entry for their jobs, you'll know that the mouse is pretty inefficient. The old text-based terminals from 30-40+ years ago had some incredibly efficient interfaces at a tiny fraction of the resource usage.
I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.
I don't know how to solve that problem or even if it will ever be "solved".
I worked there too and you're talking about performance in terms of optimal usage of CPU on a per-project basis.
Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.
The key part here is "machine utilization", and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects, but even early on in my time at Google, Borg had already adopted shared machine usage and there was a whole system of resource quotas implemented via cgroups.
Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the health checks via UDP for Stubby and given that every single Google product extensively uses Stubby then even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.
I wouldn't even put latency in the same category as "performance optimization", because often you decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest one to reply. That could be doubling or tripling the effort.
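For anyone unfamiliar with the pattern, here is a minimal sketch of that kind of hedged request in Python's asyncio; the backend call is a stand-in, not any real Stubby/gRPC API:

```python
import asyncio
import random

# Hypothetical backend call; in reality this would be an RPC to a replica.
async def backend_call(replica: str) -> str:
    await asyncio.sleep(random.uniform(0.01, 0.2))  # simulated variable latency
    return f"response from {replica}"

async def hedged_call(replicas: list[str]) -> str:
    # Fire the same request at several replicas and take whichever answers first,
    # trading extra resource usage for lower tail latency.
    tasks = [asyncio.create_task(backend_call(r)) for r in replicas]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # the duplicates are pure overhead once we have a winner
    return next(iter(done)).result()

print(asyncio.run(hedged_call(["replica-a", "replica-b", "replica-c"])))
```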
Except you’re self selecting for a company that has high engineering costs, big fat margins to accommodate expenses like additional hardware, and lots of projects for engineers to work on.
The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
The problem is that almost no one is doing it, because the way we make these decisions has nothing to do with the economic calculus behind them; most people just do "what Google does", which explains a lot of the dysfunction.
I think the parent's point is that if Google with millions of servers can't make performance optimization worthwhile, then it is very unlikely that a smaller company can. If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
> The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
That's debatable. Performance optimization almost always leads to a complexity increase. Doubled performance can easily cause quadrupled complexity. Then one has to consider whether the maintenance burden is worth the extra performance.
> I don't know how to solve that problem or even if it will ever be "solved".
It will not be “solved” because it’s a non-problem.
You can run a thought experiment imagining an alternative universe where human resource were directed towards optimization, and that alternative universe would look nothing like ours. One extra engineer working on optimization means one less engineer working on features. For what exactly? To save some CPU cycles? Don’t make me laugh.
> I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands.
I think this probably holds true for outfits like Google because 1) on their scale "a core" is much cheaper than average, and 2) their salaries are much higher than average. But for your average business, even large businesses? A lot less so.
I think this is a classic "Facebook/Google/Netflix/etc. are in a class of their own and almost none of their practices will work for you"-type thing.
Maybe not to the same extent, but an AWS EC2 m5.large VM with 2 cores and 8 GB RAM costs ~$500/year (1 year reserved). Even if your engineers are being paid $50k/year, that's the same as 100 VMs or 200 cores + 800 GB RAM.
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
We have self driving cars, amazing advancement in computer graphics, dead reckoning of camera position from visual input...
In the meantime, hardware has had to go wide on threads as single core performance has not improved. You could argue that's been a software gain and a hardware failure.
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again, "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomenon is innovative, and all of these objectively are.
There has been a lot of innovation, but it is focused on some niche, so if you are not in that niche you don't see it and wouldn't care if you did. Most of the major things you need have already been invented - I recall word processors as a kid, so they for sure date back to the 1970s - we still need word processors and there is a lot of polish that can be added, but all the innovation is in niche things that the majority of us wouldn't have a use for even if we knew about them.
Of course innovation is always in bits and spurts.
I think it's a bad argument though. If we had to stop with the features for a little while and created some breathing room, the features would come roaring back. There'd be a downturn, sure, but not a continuous one.
A subtext here may be his current AI work. In OP, Carmack is arguing, essentially, that 'software is slow because good smart devs are expensive and we don't want to pay for them to optimize code and systems end-to-end as there are bigger fish to fry'. So, an implication here is that if good smart devs suddenly got very cheap, then you might see a lot of software suddenly get very fast, as everyone might choose to purchase them and spend them on optimization. And why might good smart devs become suddenly available for cheap?
I heartily agree. It would be nice if we could extend the lifetime of hardware 5, 10 years past its, "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
Point 1 is why growth/debt is not a good economic model in the long run. We should have a care & maintenance focused economy and center our macro scale efforts on the overall good of the human race, not perceived wealth of the few.
If we focused on upkeep of older vehicles, re-use of older computers, etc. our landfills would be smaller proportional to 'growth'.
I'm sure there's some game theory construction of the above that shows that it's objectively an inferior strategy to be a conservationist though.
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
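As a rough illustration of why a single thread suffices, here is a minimal price-time-priority matching sketch. Real engines add order types, cancellation, persistence and recovery, but the hot loop stays roughly this small, a few heap operations per order, all on one thread:

```python
import heapq
import itertools

_seq = itertools.count()   # arrival order gives time priority
bids, asks = [], []        # max-heap (negated price) and min-heap of (price_key, seq, qty)

def submit(side: str, price: float, qty: int) -> None:
    book, opposite = (bids, asks) if side == "buy" else (asks, bids)
    sign = -1 if side == "buy" else 1
    # Match against the best opposite orders while the prices cross.
    while qty and opposite:
        best_price, best_seq, best_qty = opposite[0]
        resting_price = -best_price if side == "sell" else best_price
        crosses = price >= resting_price if side == "buy" else price <= resting_price
        if not crosses:
            break
        traded = min(qty, best_qty)
        qty -= traded
        if traded == best_qty:
            heapq.heappop(opposite)
        else:
            opposite[0] = (best_price, best_seq, best_qty - traded)  # key unchanged
        print(f"trade {traded} @ {resting_price}")
    if qty:  # any remainder rests in the book
        heapq.heappush(book, (sign * price, next(_seq), qty))

submit("buy", 10.0, 5)
submit("sell", 9.5, 3)   # crosses: prints "trade 3 @ 10.0"
```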
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
One of the things I think about sometimes, a specific example rather than a rebuttal to Carmack.
The Electron Application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into a MS Teams meeting without installing.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
> So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
Wine doesn't even run Office, there's no way it'd run whatever native video stack Teams would use. Linux has Teams purely because Teams decided to go with web as their main technology.
Even the Electron version of Teams on Linux has a reduced feature set because there's no Office to integrate with.
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
Is there any realistic way to shift the payment of hard-to-trace costs like environmental clean-up, negative mental or physical health, and wasted time back to the companies and products/software that cause them?
It's the kind of economics that shifts the financial cost into accumulating waste and technical debt, which are paid for by someone else. It's basically stealing. There are --of course-- many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that salary to their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem has been externalized to customers in search of ill-gotten profits.
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
Would you spend 100 years writing the perfect editor, optimizing every single function, continuously optimizing? When would it ever be complete? No, you wouldn't. Do you use Python or Java or C? Obviously that could be optimized further if you wrote it in assembly. Practice what you preach, otherwise you'd be stealing.
Not really stealing. You could of course build software that is more optimized and has the same features, but at a higher cost. Would most buyers pay twice the price for a web app that loads in 1 second instead of 2? Probably not.
> Do you have someone spend extra time optimising your software or do you have them produce more functionality
Depends. In general, I'd rather have devs optimize the software rather than adding new features just for the sake of change.
I don't use most of the new features in macOS, Windows, or Android. I mostly want an efficient environment to run my apps and security improvements. I'm not that happy about many of the improvements in macOS (eg the settings app).
Same with design software. I don't use most of the new features introduced by Adobe. I'd be happy using Illustrator or Photoshop from 10 years ago. I want less bloat, not more.
I also do audio and music production. Here I do want new features because the workflow is still being improved but definitely not at the cost of efficiency.
Regarding code editors I'm happy with VSCode in terms of features. I don't need anything else. I do want better LSPs but these are not part of the editor core. I wish VSCode was faster and consumed less memory though.
Efficiency is critical to my everyday life. For example, before I get up from my desk to grab a snack from the kitchen, I'll bring any trash/dishes with me to double the trip's benefits. I do this kind of thing often.
Optimizing software has a similar appeal. But when the problem is "spend hours of expensive engineering time optimizing the thing" vs "throw some more cheap RAM at it," the cheaper option will prevail. Sometimes, the problem is big enough that it's worth the optimization.
The market will decide which option is worth pursuing. If we get to a point where we've reached diminishing returns on throwing hardware at a problem, we'll optimize the software. Moore's Law may be slowing down, but evidently we haven't reached that point yet.
Ultimately it's a demand problem. If consumer demands more performant software, they would pay a premium for it. However, the opposite is more true. They would prefer an even less performant version if it came with a cheaper price tag.
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.
Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is because it's the most likely path to be able to _keep_ adding features to it as to not be the less feature rich offering. People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming into using the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but definitely not users.
Every time I offer alternatives to slow hardware, people find a missing feature that makes them stick to what they're currently using. Other times the features are there but the buttons for them are in another place, and people don't want to learn something new. And that's for free software; with paid software things become even worse, because suddenly the hours they spend waiting on loading times count for nothing compared to a one-time fee.
Complaining about slow software happens all the time, but when given the choice between features and performance, features win every time. Same with workflow familiarity; you can have the slowest, most broken, hacked together spreadsheet-as-a-software-replacement mess, but people will stick to it and complain how bad it is unless you force them to use a faster alternative that looks different.
You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
If people could, no one would ever upgrade anything anymore. Look at how hard MS has to work to force anyone to upgrade. I have never heard of anyone who wanted a new version of Windows, Office, Slack, Zoom, etc.
This is also why everything (like Photoshop) is being forced into the cloud. The vast majority of people don't want the new features that are being offered. Including buyers at businesses. So the answer to keep revenue up is to force people to buy regardless of what features are being offered or not.
> You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
I think this is more a consumer perspective than a B2B one. I'm thinking about the business case, i.e. businesses purchase software (or have bespoke software developed), then pay for fixes/features/improvements. There is often direct communication between the buyer and the developer (whether it's off-the-shelf, in-house or made to spec). I'm in this business and the dialog is very short: "Great work adding feature A. We want feature B too now. And oh, the users say the software is also a bit slow, can you make it go faster?" Me: "Do you want feature B or faster first?" Them (always): "Oh, feature B. That saves us man-weeks every month." Then that goes on for features C, D, E, ...Z.
In this case, I don't know how frustrated the users are, because the customer is not the user - it's the users' managers.
In the consumer space, the user is usually the buyer. That's one huge difference. You can choose the software that frustrates you the least, perhaps the leanest one, and instead have to do a few manual steps (e.g. choose vscode over vs, which means less bloated software but also many fewer features).
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So the “features vs performance” decision is also a signal about where the developers’ priorities lie: adding new users or keeping old ones happy. So it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for tabs that aren't currently active is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient, but that we simply push them to the breaking point with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people, and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool, but it also dated in a matter of a few years. One moment we were running Doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II. I actually had the Riva 128 as my first GPU, one of the first products Nvidia shipped, running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
> The hardware is dirt cheap. Programmers aren't cheap.
That may be fine if you can actually improve the user experience by throwing hardware at the problem. But in many (most?) situations, you can't.
Most of the user-facing software is still single-threaded (and will likely remain so for a long time). The difference in single-threaded performance between CPUs in wide usage is maybe 5x (and less than 2x for desktop), while the difference between well optimized and poorly optimized software can be orders of magnitude easily (milliseconds vs seconds).
And if you are bottlenecked by network latency, then the CPU might not even matter.
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with a Pentium II and a 3dfx graphics card. And yet it demands so much more - not a lot by today's standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
It is made in Lua using the love2d framework. That helped the developers, but it comes at a cost in minimum requirements (even if those aren't much for a game released in 2024).
One way to think about it is: If we were coding all our games in C with no engine, they would run faster, but we would have far fewer games. Fewer games means fewer hits. Odds are Balatro wouldn't have been made, because those developer hours would've been allocated to some other game which wasn't as good.
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops for the past five years (roughly since Nehalem) have plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more, but I'm convinced that's still true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've kept up with updates (can that hardware even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux), then there's a solid chance that the amount of bloat, and especially the move to online-only spreadsheets, would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asymmetric DSL (for $300/mo, just because it's a "business", when I could get the same speed for $80/mo at home) or Comcast cable at 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely isn't. But worse: if the internet goes down, the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
My daily drivers at home are an i3-540 and an Athlon II X4. Every time something breaks down, I find it much cheaper to just buy a new part than to buy a whole new kit with motherboard/CPU/RAM.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
I'd be willing to bet that even a brand new iPhone has a surprising number of reasonably old pieces of hardware for Bluetooth, wifi, gyroscope, accelerometer, etc. Not everything in your phone changes as fast as the CPU.
If we're talking numbers, there are many, many more embedded systems than general purpose computers. And these are mostly built on ancient process nodes compared to the cutting edge we have today; the shiny octa-cores in our phones are supported by a myriad of ancillary chips that are definitely not cutting edge.
Modern planes do not, and many older planes have been retrofitted, in whole or in part, with more modern computers.
Some of the specific embedded systems (like the sensors that feed back into the main avionics systems) may still be using older CPUs if you squint, but it's more likely a modern version of those older designs.
IBM PowerPC 750X apparently, which was the CPU the Power Mac G3 used back in the day. Since it's going into space it'll be one of the fancy radiation-hardened versions which probably still costs more than your car though, and they run four of them in lockstep to guard against errors.
Sorry, don't want to go back to a time where I could only edit ASCII in a single font.
Do I like bloat? No. Do I like more software rather than less? Yes! Unity and Unreal are less efficient than custom engines but there are 100x more titles because that tradeoff of efficiency of the CPU vs efficiency of creation.
The same is true for web-based apps (both online and off). Software ships 10x faster as a web page than as a native app for Windows/Mac/Linux/Android/iOS. For most things, that's all I need. Even for native-like apps, I use photopea.com over Photoshop/GIMP/Krita/Affinity etc. because it's available everywhere, no matter which machine I use or whose machine it is. Is it less efficient running in JS in the browser? Probably. Do I care? No.
VSCode, now the most popular editor in the world (IIRC), is web-tech. This has so many benefits. For one, it's been integrated into hundreds of websites, so this editor I use is available in more places. It's using tech more people know, so there are more extensions that do more things. Also, arguably because of JS's speed issues, it encouraged the creation of the Language Server Protocol. Before this, every editor rolled its own language support. The LSP is arguably way more bloat than doing it directly in the editor. I don't care. It's a great idea and way more flexible: any language can write one LSP and then all editors get support for that language.
Is there or could we make an iPhone-like that runs 100x slower than conventional phones but uses much less energy, so it powers itself on solar? It would be good for the environment and useful in survival situations.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
We have much hardware on the secondary market (resale) that's only 2-3x slower than pristine new primary market devices. It is cheap, it is reuse, and it helps people save in a hyper-consumerist society. The common complaint is that it doesn't run bloated software anymore. And I don't think we can make non-bloated software for a variety of reasons.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as recommended/intended way to play the game). This is due to hyper-bloating of software, and severe downskilling of game programmers in the industry to cut costs. The players don't often see all this, and they think the latest game is truly the greatest, and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived on PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades), and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
It's an example of Scott Alexander's Moloch where it's unclear who could end this race to the bottom. Maybe a culture shift could, we should perhaps become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
One only needs to look at Horizon: Zero Dawn to note that the truth of this is deeply uneven across the games industry. World streaming architectures are incredible technical achievements. So are moddable engines. There are plenty of technical limits being pushed by devs, it's just not done at all levels.
> Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Producing games doesn't cost anything on a per-unit basis. That's not at all the reason for low quality.
Games could cost $1000 per copy and big game studios (who have investors to worry about) would still release buggy slow games, because they are still going to be under pressure to get the game done by Christmas.
> Or could we make a phone that runs 100x slower but is much cheaper? I
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU vs a 6502 - there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If Intel fired all their R&D engineers who were not focused on reducing manufacturing costs, they could greatly reduce the price of their CPUs - until AMD released a next generation that is much better. (This is more or less what Henry Ford did with the Model T - he reduced costs every year until his competition, by adding features, got enough better that he couldn't sell his cars.)
Yes, it's possible and very simple. Lower the frequency (which dramatically lowers power usage), use fewer cores, fewer threads, etc. The problem is, we don't know what we'll need. What if a great new app comes out (think LLMs)? You'll be complaining your phone is too slow to run it.
Exactly. Yes, I understand the meaning behind it, but the line gets drummed into developers everywhere, the subtleties and real meaning are lost, and every optimisation- or efficiency-related question on Stack Overflow is met with cries of "You're doing it wrong! Don't ever think about optimising unless you're certain you have a problem!" This habit of pushing it to extremes inevitably leads to devs not even thinking about making their software efficient. Especially when they develop on high-end hardware and don't test on anything slower.
Perhaps a classic case where a guideline, intended to help, ends up causing ill effects by being religiously stuck to at all times, instead of fully understanding its meaning and when to use it.
A simple example comes to mind, of a time I was talking to a junior developer who thought nothing of putting his SQL query inside a loop. He argued it didn't matter because he couldn't see how it would make any difference in that (admittedly simple) case, to run many queries instead of one. To me, it betrays a manner of thinking. It would never have occurred to me to write it the slower way, because the faster way is no more difficult or time-consuming to write. But no, they'll just point to the mantra of "premature optimisation" and keep doing it the slow way, including all the cases where it unequivocally does make a difference.
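For what it's worth, here is a toy version of that query-in-a-loop pattern, using SQLite and a hypothetical "users" table; the batched version costs nothing extra to write:

```python
import sqlite3

# Hypothetical data set for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",), ("carol",)])
wanted_ids = [1, 2, 3]

# The slow way: one round trip per id. Harmless at N=3, painful at N=10,000.
names_slow = [
    conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
    for i in wanted_ids
]

# The fast way is no harder to write: a single query for the whole batch.
placeholders = ",".join("?" for _ in wanted_ids)
names_fast = [
    row[0]
    for row in conn.execute(
        f"SELECT name FROM users WHERE id IN ({placeholders})", wanted_ids
    )
]

assert set(names_slow) == set(names_fast)  # same result, one query instead of N
```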
Unfortunately, unlike with, say, Moore's Law, there is a distinct lack of any scientific investigation or rigorous analysis into the phenomenon called "Wirth's Law", despite many anecdotal examples. Reasoning as if it were literally true leads to absurd conclusions that do not correspond with any reality I can observe, so I am tempted to say that as a broad phenomenon it is obviously false and that anyone who suggests otherwise is being disingenuous. For it to be true there would have to have been no progress whatsoever in computing-enabled technologies, yet the real manifestations of the increase in computing resources, and the exploitation thereof, permeate, alter and invade almost every aspect of society and of our personal day-to-day lives at a constantly increasing rate.
Call it the X-Windows factor --- software gets more capable/more complex and there's a constant leap-frogging (new hardware release, stuff runs faster, software is written to take advantage of new hardware, things run more slowly, software is optimized, things run more quickly).
The most striking example of this was Mac OS X Public Beta --- which made my 400MHz PowerPC G3 run at about the same speed as my 25 MHz NeXT Cube running the quite similar OPENSTEP 4.2 (just OS X added Java and Carbon and so forth) --- but each iteration got quicker until by 10.6.8, it was about perfect.
.NET has made great strides on this front in recent years. Newer versions optimize CPU and RAM usage of lots of fundamentals, and introduced new constructs to reduce allocations and CPU for new code. One might argue they were only able to because they were so bad before, but it's worth looking into if you haven't in a while.
Really no notes on this. Carmack hit both sides of the coin:
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
The idea of a hand-me-down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach, and then the 1nm process is basically the end of the journey. I expect 'audiophile grade' PCs in the 2030s, and then PCs become works of art, furniture, investments, etc. because they have nowhere to go.
The increasing longevity of computers has been impressing me for about 10 years.
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4k 360 degree video (about which: fine). It's a M1 Macbook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
Developers over 50ish (like me) grew up at a time when CPU performance and memory constraints affected every application. So you had to always be smart about doing things efficiently with both CPU and memory.
Younger developers have machines that are so fast they can be lazy with all algorithms and do everything 'brute force'. Like searching thru an array every time when a hashmap would've been 10x faster. Or using all kinds of "list.find().filter().any().every()" chaining nonsense, when it's often smarter to do ONE loop, and inside that loop do a bunch of different things.
So younger devs only optimize once they NOTICE the code running slow. That means they're ALWAYS right on the edge of overloading the CPU, just thru bad coding. In other words, their inefficiencies will always expand to fit available memory, and available clock cycles.
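A tiny illustration of the array-vs-hashmap point above; the workload is made up and the exact timings will vary by machine, but the asymmetry is real:

```python
import time

# Hypothetical lookup workload: check 10,000 ids against 100,000 records.
records = list(range(100_000))
lookups = list(range(0, 100_000, 10))

start = time.perf_counter()
hits_list = sum(1 for x in lookups if x in records)    # linear scan per lookup
list_time = time.perf_counter() - start

record_set = set(records)                              # build the "hashmap" once
start = time.perf_counter()
hits_set = sum(1 for x in lookups if x in record_set)  # O(1) average per lookup
set_time = time.perf_counter() - start

assert hits_list == hits_set
print(f"list scan: {list_time:.3f}s  set lookup: {set_time:.5f}s")
```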
I generally believe that markets are somewhat efficient.
But somehow, we've ended up with the current state of Windows as the OS that most people use to do their job.
Something went terribly wrong. Maybe the market is just too dumb, maybe it's all the market distortions that have to do with IP, maybe it's the monopolistic practices of Microsoft. I don't know, but in my head, no sane civilization would think that Windows 10/11 is a good OS that everyone should use to optimize our economy.
I'm not talking only about performance, but about the general crappiness of the experience of using it.
Often, this is presented as a tradeoff between the cost of development and the cost of hardware. However, there is a third leg of that stool: the cost of end-user experience.
When you have a system which is sluggish to use because you skimped on development, it is often the case that you cannot make it much faster no matter how expensive the hardware is that you throw at it. Either there is a single-threaded critical path, so you hit the limit of what one CPU can do (and adding more does not help), or you hit the laws of physics, such as with network latency, which is ultimately bound by the speed of light.
And even when the situation could be improved by throwing more hardware at it, this is often done only to the extent to make the user experience "acceptable", but not "great".
In either case, the user experience suffers and each individual user is less productive. And since there are (usually) orders of magnitude more users than developers, the total damage done can be much greater than the increased cost of performance-focused development. But the cost of development is "concentrated" while the cost of user experience is "distributed", so it's more difficult to measure or incentivize for.
The cost of poor user experience is a real cost, is larger than most people seem to think and is non-linear. This was observed in the experiments done by IBM, Google, Amazon and others decades ago. For example, take a look at:
He and Richard P. Kelisky, Director of Computing Systems for IBM's Research Division, wrote about their observations in 1979, "...each second of system response degradation leads to a similar degradation added to the user's time for the following [command]. This phenomenon seems to be related to an individual's attention span. The traditional model of a person thinking after each system response appears to be inaccurate. Instead, people seem to have a sequence of actions in mind, contained in a short-term mental memory buffer. Increases in SRT [system response time] seem to disrupt the thought processes, and this may result in having to rethink the sequence of actions to be continued."
Perfect parallel to the madness that is AI. With even modest sustainability incentives, the industry wouldn't have pulverized a trillion dollars training models nobody uses, just to dominate the weekly attention fight and the fundraising game.
Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.
Imagine software engineering was like real engineering, where the engineers had licensing and faced fines or even prison for negligence. How much of the modern worlds software would be tolerated?
Very, very little.
If engineers handled the Citicorp center the same way software engineers did, the fix would have been to update the documentation in Confluence to not expose the building to winds and then later on shrug when it collapsed.
So Metal and DirectX 12 capable GPUs are already hard requirements for their respective platforms, and I anticipate that Wayland compositors will start depending on Vulkan eventually.
That will make pre-Broadwell laptops generally obsolete and there aren't many AMD laptops of that era that are worth saving.
Servers don't need a graphical environment, but the inefficiencies of older hardware are much more acute in this context compared to personal computing, where the full capabilities of the hardware are rarely exploited.
Click the link and contemplate while X loads. First, the black background. Next it spends a while and you're ready to celebrate! Nope, it was loading the loading spinner. Then the pieces of the page start to appear. A few more seconds pass while the page is redrawn with the right fonts; only then can you actually scroll the page.
Having had some time to question your sanity for clicking, you're grateful to finally see what you came to see. So you dwell 10x as long, staring at a loaded page and contemplating the tweet. You dwell longer to scroll and look at the replies.
How long were you willing to wait for data you REALLY care about? 10-30 seconds; if it's important enough you'll wait even longer.
Software is as fast as it needs to be to be useful to humans. Computer speed doesn't matter.
If the computer goes too fast it may even be suspected of trickery.
The priority should be safety, not speed. I prefer an e.g. slower browser or OS that isn't ridden with exploits and attack vectors.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
The goal isn't optimized code, it is utility/value prop. The question then is how do we get the best utility/value given the resources we have. This question often leads to people believing optimization is the right path since it would use fewer resources and therefore the value prop would be higher. I believe they are both right and wrong. For me, almost universally, good optimization ends up simplifying things as it speeds things up. This 'secondary' benefit, to me, is actually the primary benefit. So when considering optimizations I'd argue that performance gains are a potential proxy for simplicity gains in many cases so putting a little more effort into that is almost always worth it. Just make sure you actually are simplifying though.
You're just replacing one favorite solution with another. Would users want simplicity at the cost of performance? Would they pay more for it? I don't think so.
You're right that the crux of it is that the only thing that matters is pure user value and that it comes in many forms. We're here because development cost and feature set provide the most obvious value.
100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom to do so which just contributes to the cycle of needing better hardware.
Software development has been commoditized and is directed by MBAs and others who don't see it as a craft. The need for fast project execution trumps the craft of programming, hence the code is bug-riddled and slow.
There are some niche areas (vintage computing, PICO-8, Arduino...) where people can still practise the craft, but that's just a hobby now. When this topic comes up I always think about Tarkovsky's Andrei Rublev movie, the artist's struggle.
If the tooling had kept up. We went from RADs that built you fully native GUIs to abandoning ship and letting Electron take over. Anyone else have 40 web browsers installed and they are each some Chromium hack?
You always optimise FOR something at the expense of something else.
And that can, and frequently should, be lean resource consumption, but it can come at a price.
Which might be one or more of:
Accessibility.
Full internationalisation.
Integration paradigms (thinking about how modern web apps bring UI and data elements in from third parties).
Readability/maintainability.
Displays that can actually represent text correctly at any size without relying on font hinting hacks.
All sorts of subtle points around UX.
Economic/business model stuff (megabytes of cookie BS on every web site, looking at you right now.)
Etc.
I work on a laptop from 2014. An i7 4xxx with 32 GB RAM and 3 TB SSD. It's OK for Rails and for Django, Vue, Slack, Firefox and Chrome. Browsers and interpreters got faster. Luckily there was pressure to optimize especially in browsers.
The world will seek out software optimization only after hardware reaches its physical limits.
We're still in Startup Land, where it's more important to be first than it is to be good. From that point onward, you have to make a HUGE leap, and your first-to-market competitor needs to make some horrendous screwups, in order for you to overtake them.
The other problem is that some people still believe that the masses will pay more for quality. Sometimes, good enough is good enough. Tidal didn't replace iTunes or Spotify, and Pono didn't exactly crack the market for iPods.
He mentions the rate of innovation would slow down, which I agree with. But I think that even a 5% slower innovation rate would delay the optimizations we can make, or even our ability to figure out what needs optimizing through centuries of computer usage, and in the end we'd be less efficient because we'd be slower at finding efficiencies. A low adoption rate of new efficiencies is worse than a high adoption rate of old efficiencies, is I guess how to phrase it.
If Cadence for example releases every feature 5 years later because they spend more time optimizing them, it's software after all, how much will that delay semiconductor innovations?
Carmack is right to some extent, although I think it’s also worth mentioning that people replace their computers for reasons other than performance, especially smartphones. Improvements in other components, damage, marketing, and status are other reasons.
It’s not that uncommon for people to replace their phone after two years, and as someone who’s typically bought phones that are good but not top-of-the-line, I’m skeptical all of those people’s phones are getting bogged down by slow software.
This is Carmack's favorite observation over the last decade+. It stems from what made him successful at id. The world's changed since then. Home computers are rarely compute-bound, the code we write is orders of magnitude more complex, and compilers have gotten better. Any wins would come at the cost of a massive investment in engineering time or degraded user experience.
Minimalism is excellent. As others have mentioned, using languages that are more memory safe (assuming the language is implemented in such a way) may be worth the additional complexity cost.
But surely, with burgeoning AI use, efficiency savings are being gobbled up by its brute-force nature.
Maybe shared model training and the likes of Hugging Face can keep different groups from reinventing the same AI wheel, burning more resources than a cursory search of existing resources would.
We have customers with thousands of machines that are still using spinning, mechanical 5400 RPM drives. The machines are unbelievably slow and only get slower with every single update, it's nuts.
Tell me about it. Web development has only become fun again at my place since upgrading from Intel Mac to M4 Mac.
Just throw in a Slack chat, a VS Code editor in Electron, a Next.js stack, 1-2 Docker containers and one browser, and you need top-notch hardware to run it all fluidly (Apple Silicon is amazing, though). I'm not doing anything fancy.
Chat, an editor in a browser and Docker don't seem like the most efficient combination when all put together.
I think optimizations only occur when the users need them. That is why there are so many tricks for game engine optimization and compiling speed optimization. And that is why MSFT could optimize the hell out of VSCode.
People simply do not care about the rest. So there will be as little money spent on optimization as possible.
Sadly software optimization doesn't offer enough cost savings for most companies to address consumer frustration. However, for large AI workloads, even small CPU improvements yield significant financial benefits, making optimization highly worthwhile.
I already run on older hardware, and most people could if they chose to - I haven't bought a new computer since 2005. Perhaps the OS can adopt a "serverless" model where heavy computational tasks are offloaded, as long as there is sufficient bandwidth.
I'm already moving in this direction in my personal life. It's partly nostalgia, but it's partly practical. It's just that work requires working with people who only use what HR and IT foist on them, so I need a separate machine for that.
Well, it is a point. But also remember the horrors of the monoliths he made. Like in Quake (1? 2? 3?), where you have hacks like "if the level name contains XYZ, then do this magic". I think the conclusion might be wrong.
Feels like half of this thread didn't read or ignored his last line: "Innovative new products would get much rarer without super cheap and scalable compute, of course."
This is the story of life in a nutshell. It's extremely far from optimized, and that is the natural way of all that it spawns. It almost seems inelegant to attempt to "correct" it.
It could also run on much less current hardware if efficiency was a priority. Then comes the AI bandwagon and everyone is buying loads of new equipment to keep up with the Joneses.
This is a double-edged-sword problem, but I think what people are glossing over with the compute power topic is power efficiency. One thing I struggle with when home-labbing old gaming equipment is the power efficiency of new hardware. Hardly a valid comparison, but I can choose to recycle my Ryzen 1700X with a 2080 Ti into a media server that will probably consume a few hundred watts, or I can get an M1 that sips power. The double-edged-sword part is that the Ryzen system becomes considerably more power efficient running Proxmox or Ubuntu Server vs a Windows client. We as a society choose the niche we want to leverage, and it swings like economics: strapped for cash, build more efficient code; no limits, buy the horsepower to meet the needs.
Carmack is a very smart guy and I agree with the sentiment behind his post, but he's a software guy. Unfortunately for all of us hardware has bugs, sometimes bugs so bad that you need to drop 30-40% of your performance to mitigate them - see Spectre, Meltdown and friends.
I don't want the crap Intel has been producing for the last 20 years, I want the ARM, RISC-V and AMD CPUs from 5 years in the future. I don't want a GPU by Nvidia that comes with buggy drivers and opaque firmware updates, I want the open source GPU that someone is bound to make in the next decade. I'm happy 10 Gb switches are becoming a thing in the home; I don't want the 100 Mb hubs from the early 2000s.
I'm going to be pretty blunt. Carmack gets worshiped when he shouldn't be. He has several bad takes in terms of software. Further, he's frankly behind the times when it comes to the current state of the software ecosystem.
I get it, he's legendary for the work he did at id software. But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
He seems to have a view of the state of software that is frozen in the past. Interpreted stuff is slow, networks are slow, databases are slow. As if everyone were still working with Pentium 1s and 2 MB of RAM.
None of these are what he thinks they are. CPUs are wicked fast. Interpreted languages are now within a single-digit multiple of natively compiled languages. RAM is cheap and plentiful. Databases and networks are insanely fast.
Good on him for sharing his takes, but really, he shouldn't be considered a "thought leader". I've noticed his takes have been outdated for over a decade.
I'm sure he's a nice guy, but I believe he's fallen into a trap that many older devs do. He's overestimating what the costs of things are because his mental model of computing is dated.
Let me specify that what I'm calling interpreted (and I'm sure carmack agrees) is languages with a VM and JIT.
The JVM and Javascript both fall into this category.
The proof is in the pudding. [1]
The JS version that ran in 8.54 seconds [2] did not use any sort of fancy escape hatches to get there. It's effectively the naive solution.
But if you look at the winning C version, you'll note that it went all out pulling every single SIMD trick in the book to win [3]. And with all that, the JS version is still only ~4x slower (single digit multiplier).
And if you look at the C++ version, which is a near-direct translation [4] and isn't using all the SIMD tricks in the book, it ran in 5.15 seconds, bringing the multiple down to 1.7x.
Perhaps you weren't thinking of these JIT languages as being interpreted. That's fair. But if you did, you need to adjust your mental model of what's slow. JITs have come a VERY long way in the last 20 years.
I will say that languages like python remain slow. That wasn't what I was thinking of when I said "interpreted". It's definitely more than fair to call it an interpreted language.
A simple do-nothing for loop in JavaScript via my browser's web console will run at hundreds of MHz. Single-threaded, implicitly working in floating-point (JavaScript being what it is) and on 2014 hardware (3GHz CPU).
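For anyone who wants to reproduce that sort of back-of-the-envelope number, here is a rough sketch of the measurement in TypeScript (paste-able into a browser console as plain JavaScript); the exact figure is an assumption and depends entirely on the machine and on how the JIT treats the loop:

    const iterations = 100_000_000;
    let sink = 0; // keep a live result so the loop can't be optimized away entirely
    const start = performance.now();
    for (let i = 0; i < iterations; i++) {
      sink += i;
    }
    const seconds = (performance.now() - start) / 1000;
    console.log(sink, `${(iterations / seconds / 1e6).toFixed(0)} million iterations/second`);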
> But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
Why isn't it?
> Interpreted stuff is slow
Well, it is. You can immediately tell the difference between most C/C++/Rust/... programs and Python/Ruby/... ones. Whether because they're inherently faster (nature) or because they foster an environment where performance matters (nurture) doesn't matter; the end result (the adult) is what matters.
> networks are slow
Networks are fast(er), but they're still slow for most stuff. Gmail is super nice, but it's slower than almost any desktop email program that doesn't have legacy baggage stretching back 2-3 decades.
I didn't say it isn't good. I'm saying that Carmack wasn't convinced of the utility until much later, after the entire industry had adopted it. "5 years" is wrong (time flies); it was 2011 when he made statements about how it's actually a good thing.
> Well, it is.
Not something I'm disputing. I'm disputing the degree of slowness, particularly in languages with good JITs such as Java and Javascript. There's an overestimation on how much the language matters.
> Gmail is super nice, but it's slower than almost desktop email program
Now that's a weird comparison.
A huge portion of what makes gmail slow is that it's a gigantic and at this point somewhat dated javascript application.
Look, not here to defend JS as the UX standard of modern app dev, I don't love it.
What I'm talking about with slow networks is mainly backend servers talking to one another. Because of the speed of light, there's always going to be some major delay moving data out of the datacenter to a consumer device. Within the datacenter, however, things will be pretty dang nippy.
As long as sufficient numbers of wealthy people are able to wield their money as a force to shape society, this will always be the outcome.
Unfortunately, in our current society, a rich group of people with a very restricted intellect, abnormal psychology, perverse views on human interaction and a paranoid delusion that keeps normal human love and compassion beyond their grasp has been able to shape society to their dreadful imagination.
Hopefully humanity can make it through these times, despite these hateful aberrations doing their best to wield their economic power to destroy humans as a concept.
I mean, if you put win 95 on a period appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era
I do have Windows 2000 installed with IIS (and some office stuff) in an ESXi VM, for fun and nostalgia. It serves some static html pages within my local network. The host machine is some kind of i7 machine that is about 7-10 years old.
That machine is SOOOOOO FAST. I love it. To be honest, the tasks that I was doing back in the day are identical to the ones I do today.
True for large corporations. But for individuals, the ability to put what was previously an entire stack into a script that doesn't call out to the internet will be a big win.
How many people are going to write and maintain shell scripts with 10+ curls? If we are being honest this is the main reason people use python.
Let's keep the CPU efficiency golf to Zachtronics games, please.
I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure CPU cycles of their code, so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because all of your jobs are inefficiently using local disk and wasting cycles waiting in queue for reads/writes. Or your DB replication slows down to the point that you have to choose between breaking the mirror and watching the business stop making money.
And older hardware consumes more power. That's the main driving factor behind server hardware upgrades, because you can fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
There's servers and there's all of the rest of consumer hardware.
I need to buy a new phone every few years simply because the manufacturer refuses to update it. Or they add progressively more computationally expensive effects that make my old hardware crawl. Or the software I use only supports the two most recent versions of macOS. Or Microsoft decides that your brand new CPU is no good for Win 11 because it's lacking a TPM. Or god help you if you try to open our poorly optimized Electron app on your 5-year-old computer.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary.
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
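As a hedged illustration of what "wasteful in the number of requests" usually looks like, here is a sketch of the classic fix, collapsing N round trips into one; the endpoint and response shapes are invented for the example (TypeScript):

    // Before: N sequential round trips; total latency grows linearly with N.
    async function loadUsersOneByOne(ids: string[]): Promise<unknown[]> {
      const users: unknown[] = [];
      for (const id of ids) {
        const res = await fetch(`https://api.example.com/users/${id}`);
        users.push(await res.json());
      }
      return users;
    }

    // After: one round trip carrying the whole batch; the server can answer with a single query.
    async function loadUsersBatched(ids: string[]): Promise<unknown[]> {
      const res = await fetch(`https://api.example.com/users?ids=${ids.join(",")}`);
      return res.json();
    }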
I/O _can_ be optimized. I know someone who had this as their fulltime job at Meta. Outside of that nobody is investing in it though.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking on your desktop scale. Our jobs are all memory hogs and I/O bound messes. Across all of the hardware we're buying we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because it's 10% faster when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention/utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.
Back in the day MS made Windows 95. IBM made OS/2. MS spent a billion $ on marketing Windows 95. That's a billion back when a billion was a lot. Just for the launch.
Techies think that Quality leads to sales. It does not. Marketing leads to sales. There literally is no secret to business success other than internalizing that fact.
> What I realized is that lower costs, and therefore lower quality,
This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.
I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.
For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, eg. why can't I do:
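(As a purely hypothetical stand-in, with every name below invented, here is roughly the flavor of declarative, composable query being asked for, written in TypeScript over plain arrays so that it actually runs today:)

    type Customer = { id: number; name: string };
    type Order = { id: number; customerId: number; total: number };

    // Join + filter + projection expressed as a pipeline; the index used for the
    // join (a Map here) is an implementation detail the caller never touches.
    function bigSpenders(customers: Customer[], orders: Order[], minTotal: number) {
      const spentBy = new Map<number, number>();
      for (const o of orders) {
        spentBy.set(o.customerId, (spentBy.get(o.customerId) ?? 0) + o.total);
      }
      return customers
        .filter(c => (spentBy.get(c.id) ?? 0) >= minTotal)
        .map(c => ({ name: c.name, spent: spentBy.get(c.id) ?? 0 }));
    }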
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
> I don’t think this leads to market collapse
You must have read that the Market for Lemons is a type of market failure or collapse. Market failure (in macroeconomics) does not yet mean collapse. It describes a failure to allocate resources in the market such that the overall welfare of the market participants decreases. With this decrease may come a reduction in trade volume. When the trade volume decreases significantly, we call it a market collapse. Usually, some segment of the market that existed ceases to exist (example in a moment).
There is a demand for inferior goods and services, and a demand for superior goods. The demand for superior goods generally increases as the buyer becomes wealthier, and the demand for inferior goods generally increases as the buyer becomes less wealthy.
In this case, wealthier buyers cannot buy the relevant superior software that was previously available, even if they create demand for it. Therefore, we would say a market failure has developed, as the market could not organize resources to meet this demand. Then the volume of high-quality software sales drops dramatically. That market segment collapses, so you are describing a market collapse.
> There’s probably another name for this
You might be thinking about "regression to normal profits" or a "race to the bottom." The Market for Lemons is an adjacent scenario to both, where a collapse develops due to asymmetric information in the seller's favor. One note about macroecon — there's never just one market force or phenomenon affecting any real situation. It's always a mix of some established and obscure theories.
My wife has a perfume business. She makes really high quality extraits de parfum [1] with expensive materials and great formulations. But the market is flooded with eaux de parfum -- which are far more diluted than an extrait -- using cheaper ingredients, selling for about the same price. We've had so many conversations about whether she should dilute everything like the other companies do, but you lose so much of the beauty of the fragrance when you do that. She really doesn't want to go the route of mediocrity, but that does seem to be what the market demands.
[1] https://studiotanais.com/
I had the same realization but with car mechanics. If you drive a beater you want to spend the least possible on maintenance. On the other hand, if the car mechanic cares about cars and their craftsmanship they want to get everything to tip-top shape at high cost. Some other mechanics are trying to scam you and get the most amount of money for the least amount of work. And most people looking for car mechanics want to pay the least amount possible, and don't quite understand if a repair should be expensive or not. This creates a downward pressure on price at the expense of quality and penalizes the mechanics that care about quality.
Exactly. People on HN get angry and confused about low software quality, compute wastefulness, etc, but what's happening is not a moral crisis: the market has simply chosen the trade-off it wants, and industry has adapted to it
If you want to be rewarded for working on quality, you have to find a niche where quality has high economic value. If you want to put effort into quality regardless, that's a very noble thing and many of us take pleasure in doing so, but we shouldn't act surprised when we aren't economically rewarded for it
I actually disagree. I think that people will pay more for higher quality software, but only if they know the software is higher quality.
It's great to say your software is higher quality, but the question I have is whether or not it is higher quality with the same or similar features, and second, whether the better quality is known to the customers.
It's the same way that I will pay hundreds of dollars for Jetbrains tools each year even though ostensibly VS Code has most of the same features, but the quality of the implementation greatly differs.
If a new company made their IDE better than jetbrains though, it'd be hard to get me to fork over money. Free trials and so on can help spread awareness.
It can depend on the application/niche.
I used to write signal processing software for land mobile radios. Those radios were used by emergency services. For the most part, our software was high quality in that it gave good quality audio and rarely had problems. If it did have a problem, it would recover quickly enough that the customer would not notice.
Our radios got a name for reliability: such as feedback from customers about skyscrapers in New York being on fire and the radios not skipping a beat during the emergency response. Word of mouth traveled in a relatively close knit community and the "quality" did win customers.
Oddly we didn't have explicit procedures to maintain that quality. The key in my mind was that we had enough time in the day to address the root cause of bugs, it was a small enough team that we knew what was going into the repository and its effect on the system, and we developed incrementally. A few years later, we got spread thinner onto more products and it didn't work so well.
I kind of see this in action when I'm comparing products on Amazon. When two products are substantially the same, the cheaper one will have way more reviews. I guess this implies that it has captured the majority of the market.
There's an analogy with evolution. In that case, what survives might be the fittest, but it's not the fittest possible. It's the least fit that can possibly win. Anything else represents an energy expenditure that something else can avoid, and thus outcompete.
I had the exact same experience trying to build a startup. The thing that always puzzled me was Apple: they've grown into one of the most profitable companies in the world on the basis of high-quality stuff. How did they pull it off?
This is a really succinct analysis, thanks.
I'm thinking out loud, but it seems like there are some other factors at play. There's a lower threshold of quality that needs to be met (the thing needs to work), so there are at least two big factors: functionality and cost. In the extreme, all other things being equal, if two products were presented at the exact same cost but one was of superior quality, the expectation is that the better quality item would win.
There's always the "good, fast, cheap" triangle but with Moore's law (or Wright's law), cheap things get cheaper, things iterate faster and good things get better. Maybe there's an argument that when something provides an order of magnitude quality difference at nominal price difference, that's when disruption happens?
So, if the environment remains stable, then mediocrity wins as the price of superior quality can't justify the added expense. If the environment is growing (exponentially) then, at any given snapshot, mediocrity might win but will eventually be usurped by quality when the price to produce it drops below a critical threshold.
You're laying it out like it's universal, but in my experience there are products where people will seek out the cheapest thing that is good enough, and there are other products where people know they want quality and are willing to pay more.
Take cars, for instance: if all people wanted the cheapest one, then Mercedes or even Volkswagen would be out of business.
Same for professional tools and products: you save more by buying quality products.
And then, even in computers and technology: Apple iPhones aren't cheap at all, MacBooks come with soldered RAM and storage at a high price, yet a big share of people are willing to buy those instead of the usual bloated, spyware-laden Windows laptop that runs well enough and is cheap.
If you’re trying to sell a product to the masses, you either need to make it cheap or a fad.
You cannot make a cheap product with high margins and get away with it. Motorola tried with the RAZR. They had about five or six good quarters from it and then within three years of initial launch were hemorrhaging over a billion dollars a year.
You have to make premium products if you want high margins. And premium means you’re going for 10% market share, not dominant market share. And if you guess wrong and a recession happens, you might be fucked.
Yes, I was in this place too when I had a consulting company. We bid on projects with quotes for high quality work and guaranteed delivery within the agreed timeframe. More often than not we got rejected in favor of some students who submitted a quote for 4x less. I sometimes asked those clients how the project went, and they'd say, well, those guys missed the deadline and asked for more money several times
> We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
Relevant apocrypha: https://www.youtube.com/watch?v=UFcb-XF1RPQ
These economic forces exist in math too. Almost every mathematician publishes informal proofs. These contain just enough discussion in English (or another human language) to convince a few other mathematicians in the same field that their idea is valid. But it is possible to make errors. There are other techniques that would be more reliable: formal step-by-step proof presentations (e.g. by Leslie Lamport) or computer-checked proofs. But almost no mathematician uses these.
I feel your realization and still hope my startup will have a competitive edge through quality.
In this case quality also means code quality, which in my belief should lead to faster feature development.
I'm a layman, but in my opinion building quality software can't really be a differentiator, because anyone can build quality software given enough time and resources. You could take two car mechanics and, with enough training, time, assistance from professional dev consultants, testing, rework, and so on and so forth, have them make a quality piece of software. But you'd have spent $6 million to make a quality alarm clock app.
A differentiator would be having the ability to have a higher than average quality per cost. Then maybe you're onto something.
It depends on who is paying versus using the product. If the buyer is the user, they tend value quality more so than otherwise.
Do you drive the cheapest car, eat the cheapest food, wear the cheapest clothes, etc.?
> People want cheap
There is an exception: luxury goods. Some are expensive, but people don't mind them being overpriced because e.g. they are social status symbols. Is there such a thing as "luxury software"? I think Apple sort of has this reputation.
I see another dynamic: "customer value" features get prioritized until the product eventually reaches a point of crushing tech debt, and delivery velocity for those same "customer value" features grinds to a halt. Obviously this is subject to other forces, but it is not infrequent for someone to come in and disrupt the incumbents at this point.
> lower costs, and therefore lower quality,
Many high-quality open-source designs suggest this is a false premise, and as a developer who writes high-quality, reliable software at much lower rates than most, I'd say cost should not be seen as a reliable indicator of quality.
"Quality is free"[1], luxury isn't.
Also, one should not confuse the quality of the final product and the quality of the process.
[1] https://archive.org/details/qualityisfree00cros
> There’s probably another name for this
Capitalism? Marx's core belief was that capitalists would always lean towards paying the absolute lowest price they could for labor and raw materials that would allow them to stay in production. If there's more profit in manufacturing mediocrity at scale than quality at a smaller scale, mediocrity it is.
Not all commerce is capitalistic. If a commercial venture is dedicated to quality, or maximizing value for its customers, or the wellbeing of its employees, then it's not solely driven by the goal of maximizing capital. This is easier for a private than a public company, in part because of a misplaced belief that maximizing shareholder return is the only legally valid business objective. I think it's the corporate equivalent of diabetes.
But do you think you could have started with a bug-laden mess? Or is it just the natural progression down the quality and price curve that comes with scale?
> People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality).
Sure, but what about the people who consider quality as part of their product evaluation? All else being equal everyone wants it cheaper, but all else isn't equal. When I was looking at smart lighting, I spent 3x as much on Philips Hue as I could have on Ikea bulbs: bought one Ikea bulb, tried it on next to a Hue one, and instantly returned the Ikea one. It was just that much worse. I'd happily pay similar premiums for most consumer products.
But companies keep enshittifying their products. I'm not going to pay significantly more for a product which is going to break after 16 months instead of 12 months. I'm not going to pay extra for some crappy AI cloud blockchain "feature". I'm not going to pay extra to have a gaudy "luxury" brand logo stapled all over it.
Companies are only interested in short-term shareholder value these days, which means selling absolute crap at premium prices. I want to pay extra to get a decent product, but more and more it turns out that I can't.
> There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
It's the same concept as the age old "only an engineer can build a bridge that just barely doesn't fall down" circle jerk but for a more diverse set of goods than just bridges.
Maybe you could compete by developing new and better products? Ford isn't selling the same car with lower and lower costs every year.
It's really hard to reconcile your comment with Silicon Valley, which was built by often expensive innovation, not by cutting costs. Were Apple, Meta, Alphabet, Microsoft successful because they cut costs? The AI companies?
I’d argue this exists for public companies, but there are many smaller, private businesses where there’s no doctrine of maximising shareholder value
These companies often place a greater emphasis on reputation and legacy. They are few and far between, but Robert McNeel & Associates (American) is one that comes to mind (Rhino3D), as is the Dutch company Victron (power hardware).
The former especially is not known for maximising their margins; they don't even offer a subscription model to their customers.
Victron is an interesting case, where they deliberately offer few products, and instead of releasing more, they heavily optimise and update their existing models over many years in everything from documentation to firmware and even new features. They’re a hardware company mostly so very little revenue is from subscriptions
I'm proud of you; it often takes people multiple failures before they accept that their worldview (that regulations aren't necessary and the tragedy of the commons is a myth) is wrong.
> There’s probably another name for this
Race to the bottom
The problem with your thesis is that software isn't a physical good, so quality isn't tangible. If software does the advertised thing, it's good software. That's it.
With physical items, quality prevents deterioration over time. Or at least slows it. Improves function. That sort of thing.
Software just works or doesn't work. So you want to make something that works and iterate as quickly as possible. And yes, cost to produce it matters so you can actually bring it to market.
> the market sells as if all goods were high-quality
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
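To make the "complexity to patch over it" point concrete, here is a minimal sketch (hypothetical names, TypeScript) of a TTL cache bolted onto a slow call: it buys speed, but imports staleness, invalidation and memory-growth concerns that would not exist if the underlying call were fast.

    type Entry<T> = { value: T; expiresAt: number };

    function withTtlCache<T>(fn: (key: string) => Promise<T>, ttlMs: number) {
      const cache = new Map<string, Entry<T>>();
      return async (key: string): Promise<T> => {
        const hit = cache.get(key);
        if (hit && hit.expiresAt > Date.now()) return hit.value;  // may serve slightly stale data
        const value = await fn(key);                              // the slow underlying call
        cache.set(key, { value, expiresAt: Date.now() + ttlMs }); // grows unboundedly unless evicted
        return value;
      };
    }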
If you're being honest, compare Slack and Teams not with weechat, but with Telegram. Its desktop client (along with other clients) is written by an actually competent team that cares about performance, and it shows. They have enough money to produce a native client written in C++ that has fantastic performance and is high quality overall, but these software behemoths with budgets higher than most countries' GDP somehow never do.
This; "quality" is such an unclear term here.
In an efficient market people buy things based on a value which in the case of software, is derived from overall fitness for use. "Quality" as a raw performance metric or a bug count metric aren't relevant; the criteria is "how much money does using this product make or save me versus its competition or not using it."
In some cases there's a Market of Lemons / contract / scam / lack of market transparency issue (ie - companies selling defective software with arbitrary lock-ins and long contracts), but overall the slower or more "defective" software is often more fit for purpose than that provided by the competition. If you _must_ have a feature that only a slow piece of software provides, it's still a better deal to acquire that software than to not. Likewise, if software is "janky" and contains minor bugs that don't affect the end results it provides, it will outcompete an alternative which can't produce the same results.
That's true. I meant it in a broader sense. Quality = {speed, function, lack of bugs, ergonomics, ... }.
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
I do think there is an information problem in many cases.
It is easy to get information about features. It is hard to get information about reliability or security.
The result is worsened because vendors compete on features, so they all make the same trade-off of more features for lower quality.
I have worked for large corporations that have foisted awful HR, expense reporting, time tracking and insurance "portals" that were so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team tried to tell a customer that we had their project all done but it was full of as many bugs and UI nightmares as these back office platforms, I would be chastised, demoted and/or fired.
I used to work at a large company that had a lousy internal system for doing performance evals and self-reviews. The UI was shitty, it was unreliable, it was hard to use, it had security problems, it would go down on the eve of reviews being due, etc. This all stressed me out until someone in management observed, rather pointedly, that the reason for existence of this system is that we are contractually required to have such a system because the rules for government contracts mandate it, and that there was a possibility (and he emphasized the word possibility knowingly) that the managers actually are considering their personal knowledge of your performance rather than this performative documentation when they consider your promotions and comp adjustments. It was like being hit with a zen lightning bolt: this software meets its requirements exactly, and I can stop worrying about it. From that day on I only did the most cursory self-evals and minimal accomplishments, and my career progressed just fine.
You might not think about this as “quality” but it does have the quality of meeting the perverse functional requirements of the situation.
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
If they think it is unimportant talk as if it is. It could be more polished. Do we want to impress them or just satisfy their needs?
The job it’s paid to do is satisfy regulation requirements.
Across three jobs, I have now seen three different HR systems from the same supplier which were all differently terrible.
> the market buys bug-filled, inefficient software about as well as it buys pristine software
In fact, the realization is that the market buys support.
And that includes Google and other companies that lack much in the way of human support.
This is the key.
Support is manifested in many ways:
* There is information about it (docs, videos, blogs, ...)
* There are people who help me ('look ma, this is how you use Google')
* There is support for the thing I use ('OS, Browser, Formats, ...')
* And for my way of working ('Excel lets me build any app right there...')
* And finally, actual people (that is the #1 thing that keeps alive even the worst ERP on earth). This also includes marketing, sales people, etc. These are signals of having support, even if it is not exactly the best. If I go to an enterprise and they only have engineers, that will be a bad signal, because, well, developers tend to be terrible at the other stuff, and the other stuff is the support that matters.
If you have a good product but there is no support, it's dead.
And if you wanna fight a worse product, it's smart to reduce the need for support ('bugs, performance issues, platforms, ...') for YOUR TEAM, because you wanna reduce YOUR COSTS, but you NEED to add support in other dimensions!
The easiest option for a small team is to just add humans (that is the MOST scarce source of support). After that, you need to be creative.
(Also, this means you need to communicate your advantages well, because there are people who value some kinds of support more than others; 'having the code vs. proprietary' is a good example. A lot of people prefer the proprietary option with support over having the code, I mean.)
So you're telling me that if companies want to optimize profitability, they’d release inefficient, bug-ridden software with bad UI—forcing customers to pay for support, extended help, and bug fixes?
Suddenly, everything in this crazy world is starting to make sense.
This really focuses on the single metric that can be used throughout the lifetime of a product … a really good point that keeps unfolding.
Starting an OSS product? Write good docs. Got a few enterprise people interested? A "customer success" person is the most important marketing you can do …
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications, not a single one of them can be swapped out for another just because the alternative is more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
I’m surprised by your list, because it contains 3 apps that I’ve replaced specifically due to performance issues (Docker, iTerm and Notes). I don’t consider myself particularly performance sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
> But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies
I'd take this one step further: 99% of the software written isn't being done with performance in mind. Even here on HN, you'll find people who advocate for poor performance, because even considering performance has become a faux pas.
That means your L4/5-and-beyond engineers are fairly unlikely to have any sort of sense when it comes to performance. Businesses do not prioritize efficient software until their current hardware is incapable of running their current software (and even then, they'll prefer to buy more hardware if possible).
User tolerance has changed as well, because of the web 2.0 "perpetual beta" and SaaS replacing other distribution models.
Also, Microsoft has by now educated several generations to accept that software fails and crashes.
Because "all software is the same", customers may not appreciate good software when they're used to live with bad software.
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
> The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
IME, the problem is that FOSS consumer facing software is just about the worst in UX and design.
Technically correct, since you know it's bad because it's FOSS.
At least when talking about software that has any real world use case, and not development for developments sake.
The used car market is market for lemons because it is difficult to distinguish between a car that has been well maintained and a car close to breaking down. However, the new car market is decidedly not a market for lemons because every car sold is tested by the state, and reviewed by magazines and such. You know exactly what you are buying.
Software is always sold new. Software can increase in quality the same way cars have generally increased in quality over the decades. Creating standards that software must meet before it can be sold. Recalling software that has serious bugs in it. Punishing companies that knowingly sell shoddy software. This is not some deep insight. This is how every other industry operates.
A hallmark of well-designed and well-written software is that it is easy to replace, where bug-ridden spaghetti-bowl monoliths stick around forever because nobody wants to touch them.
Just through pure Darwinism, bad software dominates the population :)
That's sorta the premise of the tweet, though.
Right now, the market buys bug-filled, inefficient software because you can always count on being able to buy hardware that is good enough to run it. The software expands to fill the processing specs of the machine it is running on - "What Andy giveth, Bill taketh away" [1]. So there is no economic incentive to produce leaner, higher-quality software that does only the core functionality and does it well.
But imagine a world where you suddenly cannot get top-of-the-line chips anymore. Maybe China invaded Taiwan and blockaded the whole island, or WW3 broke out and all the modern fabs were bombed, or the POTUS instituted 500% tariffs on all electronics. Regardless of cause, you're now reduced to salvaging microchips from key fobs and toaster ovens and pregnancy tests [2] to fulfill your computing needs. In this world, there is quite a lot of economic value to being able to write tight, resource-constrained software, because the bloated stuff simply won't run anymore.
Carmack is saying that in this scenario, we would be fine (after an initial period of adjustment), because there is enough headroom in optimizing our existing software that we can make things work on orders-of-magnitude less powerful chips.
[1] https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
[2] https://www.popularmechanics.com/science/a33957256/this-prog...
I have that washing machine btw. I saw the AI branding and had a chuckle. I bought it anyway because it was reasonably priced (the washer was $750 at Costco).
In my case I bought it because LG makes appliances that fit under the counter if you don't have much space.
The AI BS bothered me, but the price was good and the machine works fine.
I worked in a previous job on a product with 'AI' in the name. It was a source of amusement to many of us working there that the product didn't, and still doesn't use any AI.
> Smart Laundry with LG's AI Washing Machines: Efficient Spin Cycles & Beyond
Finally, the perfect example of AI-washing.
Reminds me of a "Washing Machine Trategy" by Stanisław Lem. A short story that may be a perfect parabole of the today's AI bubble.
> The AI label itself commands a price premium.
In the minds of some CEOs and VCs, maybe.
As for consumers, the AI label is increasingly off-putting.[0]
[0] https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...
A big part of why I like shopping at Costco is that they generally don't sell garbage. Their filter doesn't always match mine, but they do have a meaningful filter.
You must be referring only to security bugs because you would quickly toss Excel or Photoshop if it were filled with performance and other bugs. Security bugs are a different story because users don't feel the consequences of the problem until they get hacked and even then, they don't know how they got hacked. There are no incentives for developers to actually care.
Developers do care about performance up to a point. If the software looks to be running fine on a majority of computers why continue to spend resources to optimize further? Principle of diminishing returns.
I wouldn't be so sure. People will rename genes to work around Excel bugs.
> This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI.
The user cannot, but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
> The user cannot, but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
In the end it all hinges on the user's ability to assess the quality of the product. Otherwise, the user cannot judge whether an assistant recommends quality products, and the assistant has an incentive to suggest poorly (e.g. sell out to product producers).
> The AI label itself commands a price premium.
These days I feel like I'd be willing to pay more for a product that explicitly disavowed AI. I mean, that's vulnerable to the same kind of marketing shenanigans, but still. :-)
Ha! You're totally right.
That's generally what I think as well. Yes, the world could run on older hardware, but we keep making faster CPUs and adding more of them, so why bother making the code more efficient?
The argument that a buyer can't verify quality is simply false. Especially if the cost of something is large. And for lots of things, such verification isn't that hard.
The Market for Lemons story is about a complex thing that most people don't understand and that is of relatively low value. But even that paper misses many real-world solutions people have found for this.
> The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI.
Why then did people pay for ChatGPT when Google claimed it had something better? Because people quickly figured out that Google's solution wasn't better.
It's easy to share results, it's easy to look up benchmarks.
The claim that anything calling itself AI will automatically be able to demand some absurd price is simply not true. At best people just slap AI on everything, and it's as if everybody stands up in a theater: nobody is better off.
Bad software is not cheaper to make (or maintain) in the long-term.
There are many exceptions.
1. Sometimes speed = money. Being first to market, meeting VC-set milestones for additional funding, and not running out of runway are all cheaper than the alternatives. Software maintenance costs later don't come close to the opportunity costs if a company/project fails.
2. Most software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like the housing stock is in bad shape in some countries (like the UK) because the average tenure is less than 10 years. There isn't a person in the property's owner history to whom an investment in long-term property maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
What does "make in the long-term" even mean? How do you make a sandwich in the long-term?
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
"In the long run, we are all dead." -- Keynes
In my experiences, companies can afford to care about good software if they have extreme demands (e.g. military, finance) or amortize over very long timeframes (e.g. privately owned). It's rare for consumer products to fall into either of these categories.
That’s true - but finding good engineers who know how to do it is more expensive, at least in expenditures.
Maybe not, but that still leaves the question of who ends up bearing the actual costs of the bad software.
Therefore: brands as guardians of quality.
I actually disagree a bit. Sloppy software is cheap when you're a startup, but it's quite expensive when you're big. You have all the costs of transmission and instances you need to account for. If airlines are going to cut an olive from the salad, why wouldn't we pay programmers to optimize? This stuff compounds too.
We currently operate in a world where new features get pushed that don't interest consumers. While they can't tell the difference between slop and not at purchase time, they sure can between updates. People constantly complain about stuff getting slower. But they also do get excited when things get faster.
Imo it's in part because we turned engineers into MBAs. Whenever I ask why we can't solve a problem, some engineer always responds "well, it's not that valuable". The bug fix is valuable to the user, but they always clarify they mean money. Let's be honest, all those values are made up. It's not the job of the engineer to figure out how much profit a bug fix will result in; it's their job to fix bugs.
Famously, Coke doesn't advertise to make you aware of Coke. They advertise to associate good feelings. Similarly, car companies advertise to get their cars associated with class, which is why they will sometimes advertise to people who have no chance of buying the car. What I'm saying is that brand matters. The problem right now is that all major brands have decided brand doesn't matter, or that brand decisions are set in stone. Maybe they're right; how often do people switch? But maybe they're wrong: switching seems to just get you the same features with a new UI that you have to learn from scratch (yes, even Apple devices aren't intuitive).
The thing is, to continue your food analogy: countries have set down legal rules preventing the sale of food that actively harms the consumer (expired, known poisonous, laced with addictive substances such as opiates, etc.).
In software, the regulations of the pre-GDPR era can be boiled down to "lol, lmao". And even now I see GDPR violations daily.
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. Slack, Jira), it's not really a lack of the industry's engineering capability to speed things up that is the core problem; it's just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it, and dog-slow is just one of the many dimensions in which they're terrible.
The major slowdown of modern applications is network calls: 50-500 ms a pop for a few kilobytes of data. Many modern applications will casually spin up half a dozen blocking network calls.
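To make that concrete, here's a minimal Rust sketch (assuming the tokio and futures crates; the endpoint names and the flat 100 ms round trip are made up): six calls issued back-to-back cost roughly the sum of their latencies, while the same calls overlapped cost roughly the slowest one.

    use std::time::{Duration, Instant};

    // Stand-in for a network round trip; the 100 ms figure is an assumption.
    async fn fake_rpc(name: &'static str) -> String {
        tokio::time::sleep(Duration::from_millis(100)).await;
        format!("{name}: ok")
    }

    #[tokio::main]
    async fn main() {
        let endpoints = ["user", "feed", "ads", "flags", "billing", "notifications"];

        // One blocking call after another: roughly 6 x 100 ms.
        let t = Instant::now();
        for name in endpoints {
            fake_rpc(name).await;
        }
        println!("sequential: {:?}", t.elapsed());

        // The same calls overlapped: roughly 100 ms total.
        let t = Instant::now();
        let results = futures::future::join_all(endpoints.map(fake_rpc)).await;
        println!("concurrent: {:?} ({} responses)", t.elapsed(), results.len());
    }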
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.
It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?
Call it a power saving initiative and get environmentally-minded folks involved.
> on countless layers of abstractions
Even worse, our bottom-most abstraction layers pretend that we are running on a single-core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.
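For what it's worth, you can see the getenv example right in the standard library today: if I remember right, the 2024 edition made mutating the process environment an unsafe operation, precisely because the underlying libc getenv/setenv assume nobody else touches the environment concurrently. A tiny sketch (MY_TEST_VAR is obviously made up):

    fn main() {
        // Reading the environment is a safe call.
        let path = std::env::var("PATH").unwrap_or_default();
        println!("PATH is {} bytes long", path.len());

        // Writing it requires promising that no other thread is reading or
        // writing the environment at the same time (unsafe as of edition 2024).
        unsafe { std::env::set_var("MY_TEST_VAR", "1") };
    }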
And text that is not a pixely or blurry mess. And Unicode.
I made a vendor run their buggy and slow software on a Sparc 20 against their strenuous complaints to just let them have an Ultra, but when they eventually did optimize their software to run efficiently (on the 20) it helped set the company up for success in the wider market. Optimization should be treated as competitive advantage, perhaps in some cases one of the most important.
> Optimization should be treated as competitive advantage
That's just so true!
The right optimizations at the right moment can have a huge boost for both the product and the company.
However the old tenet regarding premature optimization has been cargo-culted and expanded to encompass any optimization, and the higher-ups would rather have ICs churn out new features instead, shifting the cost of the bloat to the customer by insisting on more and bigger machines.
It's good for the economy, surely, but it's bad for both the users and the eventual maintainers of the piles of crap that end up getting produced.
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either - normally, if you're looping over a large array of pixels, say, to perform some operation on them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loop, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
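To put the hoisting point in code, a hypothetical Rust brighten routine (not from any real benchmark): the loop bound is the slice length itself, so the compiler can typically prove the index is in range and hoist or drop the per-access check.

    // The loop bound is `pixels.len()`, so `i < pixels.len()` is provable and
    // the per-access bounds check can usually be hoisted or eliminated.
    fn brighten(pixels: &mut [u8], delta: u8) {
        for i in 0..pixels.len() {
            pixels[i] = pixels[i].saturating_add(delta);
        }
    }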
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
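A sketch of the iterator version of that same kind of per-pixel loop (again hypothetical): there is no index at all, so there is no per-element check to pay for.

    // Iterating the slice directly: no index, hence no per-element bounds check.
    fn brighten(pixels: &mut [u8], delta: u8) {
        for p in pixels.iter_mut() {
            *p = p.saturating_add(delta);
        }
    }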
Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
Clock speeds are 2000x higher than the 80s.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that constantly allocate memory for every small operation and pointer-chase every variable because the type is dynamic are part of the problem; then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
The cost of bounds checking, by itself, is low. The cost of using safe languages generally can be vastly higher.
Garbage collected languages often consume several times as much memory. They aren't immediately freeing memory no longer being used, and generally require more allocations in the first place.
Maybe since 1980.
I recently watched a video that can be summarised quite simply as: "Computers today aren't that much faster than the computers of 20 years ago, unless you specifically code for them".
https://www.youtube.com/watch?v=m7PVZixO35c
It's a little bit ham-fisted, as the author was also disregarding decades of compiler optimisations, and it's not apples to apples as he's comparing desktop-class hardware with what is essentially laptop hardware; but it's also interesting to see that a lot of the performance gains really weren't that great. He observes a doubling of performance in 15 years! Truth be told, most people use laptops now, and 20 years ago most people used desktops, so it's not totally unfair.
Maybe we've bought a lot into marketing.
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However, these days the majority of the bad attacks are social, and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well, there is every reason to believe that - there is unlikely to be evidence going back that far).
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus reduce the risk of negative consequences by that same factor.
> And we do not know of a way to solve security.
Security through compartmentalization approach actually works. Compare the number of CVEs of your favorite OS with those for Qubes OS: https://www.qubes-os.org/security/qsb/
It's more like 100,000X.
Just the clockspeed increased 1000X, from 4 MHz to 4 GHz.
But then you have 10x more cores, 10x more powerful instructions (AVX), 10x more execution units per core.
Don't forget the law of large numbers. A 5% performance hit on one system is one thing; 5% across almost all of the current computing landscape is a pretty huge value.
It's about 5%.
Cost of cyberattacks globally[1]: O($trillions)
Cost of average data breach[2][3]: ~$4 million
Cost of lost developer productivity: unknown
We're really bad at measuring the secondary effects of our short-sightedness.
[1] https://iotsecurityfoundation.org/time-to-fix-our-digital-fo...
[2] https://www.internetsociety.org/resources/doc/2023/how-to-ta...
[3] https://www.ibm.com/reports/data-breach
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
I don't think it's that deep. We are just stuck with browsers now, for better and worse. Everything else trails.
We're stuck with browsers now until the primary touch with the internet is assistants / agent UIs / chat consoles.
That could end up being Electron (VS Code), though that would be a bit sad.
The first reply is essentially right. That isn't what happened at all, if only because C is still prevalent. All the inefficiency is in everything else in the stack, not in C.
Most programming languages have array bounds checking now.
Most programming languages are written in C, which doesn't.
Fairly sure that was OP's point.
I think a year-2001, 1 GHz CPU should be the performance benchmark that every piece of basic, non-high-performance software should execute acceptably on.
This has kind of been a disappointment to me of AI when I've tried it. LLMs should be able to port things. They should be able to rewrite things with the same interface. They should be able to translate from inefficient languages to more efficient ones.
It should even be able to optimize existing code bases automatically, or at least diagnose or point out poor algorithms, cache optimization, etc.
Heck, I remember PowerBuilder in the mid '90s running pretty well on 200 MHz CPUs, and that was largely interpreted stuff. It's just amazing how slow stuff is. Do rounded corners and CSS really consume that much CPU power?
My limited experience was trying to take the Unix sed source code and have AI port it to a JVM language; it could do the most basic operations but utterly failed at even the intermediate sed capabilities. And then optimize? Nope.
Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.
I've pondered this myself without digging into the specifics. The phrase "sufficiently smart compiler" sticks in my head.
Shower thoughts include: are there languages whose features, beyond their popularity and representation in training corpora, help us get from natural language to efficient code?
I was recently playing around with a digital audio workstation (DAW) software package called Reaper that honestly surprised me with its feature set, portability (Linux, macOS, Windows), snappiness etc. The whole download was ~12 megabytes. It felt like a total throwback to the 1990s in a good way.
It feels like AI should be able to help us get back to small snappy software, and in so doing maybe "pay its own way" with respect to CPU and energy requirements. Spending compute cycles to optimize software deployed millions of times seems intuitively like a good bargain.
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
You only need to bounds check once before a for loop starts, not every iteration.
>Personally I think the 1000Xers kinda ruined things for the rest of us.
Reminds me of when Node.js came out and bridged client- and server-side coding. And apparently its package repos can be a bit of a security nightmare nowadays - so minimalist languages with a limited codebase do have their pros.
Since 1980, maybe. But since 2005 it has increased maybe 5x, and even that's generous. And that's half of the time that has passed - two whole decades.
https://youtu.be/m7PVZixO35c?si=px2QKP9-80hDV8Ui
2005 was Pentium 4 era.
For comparison: https://www.cpubenchmark.net/compare/1075vs5852/Intel-Pentiu...
That's about a 168x difference. That was from before Moore's law started petering out.
For only a 5x speed difference you need to go back to the 4th or 5th generation Intel Core processors from about 10 years ago.
It is important to note that the speed figure above is computed by adding all of the cores together and that single core performance has not increased nearly as much. A lot of that difference is simply from comparing a single core processor with one that has 20 cores. Single core performance is only about 8 times faster than that ancient Pentium 4.
As the other guy said, top of the line CPUs today are roughly ~100x faster than 20 years ago. A single core is ~10x faster (in terms of instructions per second) and we have ~10x the number of cores.
The problem is 1000xers are a rarity.
The software desktop users have to put up with is slow.
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.
The 1000x referred to the hardware capability, and that capability is not a rarity - it is here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
> the 1000Xers kinda ruined things for the rest of us
Robert Barton (of Burroughs 5000 fame) once referred to these people as “high priests of a low cult.”
The realization that a string could hold a gigabyte of text might have killed off null terminated strings, and saved us all a f@ckton of grief
And this is JavaScript. And you. are. going. to. LOVE IT!
So I've worked for Google (and Facebook) and it really drives the point home of just how cheap hardware is and how not worth it optimizing code is most of the time.
More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.
Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.
Anyway, all of these things can be turned into software engineer hours, often called "mili-SWEs" meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people or hire fewer people but get more hardware within their current budgets.
I don't remember the exact number of CPU cores that amounted to a single SWE, but IIRC it was in the thousands. So if you spend 1 SWE-year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss.
Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.
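The break-even arithmetic is simple enough to sketch (the 5000-cores-per-SWE-year rate is just the rough figure recalled above, and the savings number is hypothetical):

    fn main() {
        // Rough conversion rate recalled above; not an official figure.
        let cores_per_swe_year = 5_000.0_f64;

        let swe_years_spent = 1.0; // effort put into the optimization work
        let cores_saved = 1_200.0; // hypothetical saving it achieved

        let break_even = swe_years_spent * cores_per_swe_year;
        if cores_saved >= break_even {
            println!("worth it: saved {cores_saved} cores vs break-even {break_even}");
        } else {
            println!("net loss: saved {cores_saved} cores vs break-even {break_even}");
        }
    }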
The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to or still do data entry for their jobs, you'll know that the mouse is pretty inefficient. The old text-based terminals from 30-40+ years ago had some incredibly efficient interfaces at a tiny fraction of the resource usage.
I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.
I don't know how to solve that problem or even if it will ever be "solved".
I worked there too and you're talking about performance in terms of optimal usage of CPU on a per-project basis.
Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.
The key part here is "machine utilization", and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects, but even early in my time at Google, Borg had already adopted shared machine usage, and there was a whole system of resource quotas implemented via cgroups.
Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the health checks via UDP for Stubby and given that every single Google product extensively uses Stubby then even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.
I wouldn't even put latency in the same category as "performance optimization", because you often decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That could be doubling or tripling the effort.
Except you're self-selecting for a company that has high engineering costs, big fat margins to accommodate expenses like additional hardware, and lots of projects for engineers to work on.
The evaluation needs to happen in the margins: even if it saves pennies on the dollar per year, it's better to have those engineers doing that than to have them idling.
The problem is that almost no one is doing it, because the way we make these decisions has nothing to do with the economic calculus behind them; most people just do "what Google does", which explains a lot of the dysfunction.
I think the parent's point is that if Google with millions of servers can't make performance optimization worthwhile, then it is very unlikely that a smaller company can. If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
> The evaluation needs to happen in the margins, even if it saves pennies/year on the dollar, it’s best to have those engineers doing that than have them idling.
That's debatable. Performance optimization almost always leads to a complexity increase. Doubled performance can easily cause quadrupled complexity. Then one has to consider whether the maintenance burden is worth the extra performance.
It will not be “solved” because it’s a non-problem.
You can run a thought experiment imagining an alternative universe where human resource were directed towards optimization, and that alternative universe would look nothing like ours. One extra engineer working on optimization means one less engineer working on features. For what exactly? To save some CPU cycles? Don’t make me laugh.
> I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands.
I think this probably holds true for outfits like Google because 1) on their scale "a core" is much cheaper than average, and 2) their salaries are much higher than average. But for your average business, even large businesses? A lot less so.
I think this is a classic "Facebook/Google/Netflix/etc. are in a class of their own and almost none of their practices will work for you"-type thing.
Maybe not to the same extent, but an AWS EC2 m5.large VM with 2 cores and 8 GB RAM costs ~$500/year (1 year reserved). Even if your engineers are being paid $50k/year, that's the same as 100 VMs or 200 cores + 800 GB RAM.
Google doesn't come up with better compression and binary serialization formats just for fun--it improves their bottom line.
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
It's related to a thread from yesterday, I'm guessing you haven't seen it:
https://news.ycombinator.com/item?id=43967208 https://threadreaderapp.com/thread/1922015999118680495.html
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
We have self driving cars, amazing advancement in computer graphics, dead reckoning of camera position from visual input...
In the meantime, hardware has had to go wide on threads as single core performance has not improved. You could argue that's been a software gain and a hardware failure.
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or Chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again, "is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomenon is innovative, and all of these objectively are.
There has been a lot of innovation - but it is focused on some niche, so if you are not in that niche you don't see it and wouldn't care if you did. Most of the major things you need have already been invented - I recall word processors as a kid, so they for sure date back to the 1970s. We still need word processors, and there is a lot of polish that can be added, but all the innovation is in niche things that the majority of us wouldn't have a use for even if we knew about them.
Of course innovation is always in bits and spurts.
I think it's a bad argument though. If we had to stop with the features for a little while and created some breathing room, the features would come roaring back. There'd be a downturn, sure, but not a continuous one.
A subtext here may be his current AI work. In OP, Carmack is arguing, essentially, that 'software is slow because good smart devs are expensive and we don't want to pay for them to optimize code and systems end-to-end as there are bigger fish to fry'. So, an implication here is that if good smart devs suddenly got very cheap, then you might see a lot of software suddenly get very fast, as everyone might choose to purchase them and spend them on optimization. And why might good smart devs become suddenly available for cheap?
This is exactly the point. People ignore that "bloat" is not (just) "waste", it is developer productivity increase motivated by economics.
The ability to hire and have people be productive in a less complicated language expands the market for workers and lowers cost.
I heartily agree. It would be nice if we could extend the lifetime of hardware 5 or 10 years past its "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
Point 1 is why growth/debt is not a good economic model in the long run. We should have a care & maintenance focused economy and center our macro scale efforts on the overall good of the human race, not perceived wealth of the few.
If we focused on upkeep of older vehicles, re-use of older computers, etc. our landfills would be smaller proportional to 'growth'.
I'm sure there's some game theory construction of the above that shows that it's objectively an inferior strategy to be a conservationist though.
I sometimes wonder how the game theorist would argue with physics.
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
It's an inherently serial problem and regulations require it to be that way. Users who submit first want their orders to be the one that crosses.
I think it is the temporal aspect of order matching - for exchanges it is an inherently serial process.
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
The order matching engine is mostly about updating an in-memory order book representation.
It is rarely the case that high volume transaction processing facilities also need to deal with deeply complex transactions.
I can't think of many domains of business wherein each transaction is so compute intensive that waiting for I/O doesn't typically dominate.
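To illustrate what "updating an in-memory order book" means, here's a toy price-time-priority sketch in Rust (integer prices and quantities, limit orders only, all names made up; a real matching engine is far more careful than this):

    use std::collections::{BTreeMap, VecDeque};

    struct Order {
        id: u64,
        qty: u64,
    }

    #[derive(Default)]
    struct Book {
        // price -> FIFO queue of resting orders at that price (time priority)
        bids: BTreeMap<u64, VecDeque<Order>>,
        asks: BTreeMap<u64, VecDeque<Order>>,
    }

    impl Book {
        /// Match an incoming buy limit order against resting asks, best price
        /// first; whatever is left over rests on the bid side.
        fn submit_buy(&mut self, id: u64, limit: u64, mut qty: u64) {
            while qty > 0 {
                // Best ask is the lowest resting price.
                let Some((&price, queue)) = self.asks.iter_mut().next() else { break };
                if price > limit {
                    break;
                }
                while qty > 0 {
                    let Some(maker) = queue.front_mut() else { break };
                    let fill = qty.min(maker.qty);
                    println!("trade: buy #{id} x {fill} @ {price} vs sell #{}", maker.id);
                    qty -= fill;
                    maker.qty -= fill;
                    if maker.qty == 0 {
                        queue.pop_front();
                    }
                }
                if queue.is_empty() {
                    self.asks.remove(&price);
                }
            }
            if qty > 0 {
                self.bids.entry(limit).or_default().push_back(Order { id, qty });
            }
        }
    }

    fn main() {
        let mut book = Book::default();
        book.asks.entry(100).or_default().push_back(Order { id: 1, qty: 5 });
        book.asks.entry(101).or_default().push_back(Order { id: 2, qty: 5 });
        book.submit_buy(3, 101, 8); // fills 5 @ 100 from #1, then 3 @ 101 from #2
    }

The single-threaded point falls out of the structure: every fill depends on the exact state left by the previous order, so there's nothing to parallelize inside the matching step itself.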
One of the things I think about sometimes, a specific example rather than a rebuttal to Carmack.
The Electron application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into an MS Teams meeting without installing anything.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
> So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
Wine doesn't even run Office, there's no way it'd run whatever native video stack Teams would use. Linux has Teams purely because Teams decided to go with web as their main technology.
Even the Electron version of Teams on Linux has a reduced feature set because there's no Office to integrate with.
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
I think you’re right in that it’s an economics problem, but you’re wrong about which one.
For me this is a clear case of negative externalities inflicted by software companies against the population at large.
Most software companies don’t care about optimization because they’re not paying the real costs on that energy, lost time or additional e-waste.
Is there any realistic way to shift the payment of hard-to-trace costs like environmental clean-up, negative mental or physical health, and wasted time back to the companies and products/software that cause them?
It's the kind of economics that shifts the financial debt to accumulating waste, and technical debt, which is paid for by someone else. It's basically stealing. There are --of course-- many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
> It's basically stealing.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that cost onto their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem have been externalized to customers in search of ill-gotten profits.
> It's basically stealing
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
Would you spend 100 years writing the perfect editor, optimizing every single function, continuously optimizing - and when would it ever be complete? No, you wouldn't. Do you use Python or Java or C? Obviously that could be optimized further if you wrote it in assembly. Practice what you preach; otherwise you'd be stealing too.
Not really stealing. You could of course build software that is more optimized and has the same features, but at a higher cost. Would most buyers pay twice the price for a web app that loads in 1 second instead of 2? Probably not.
> Do you have someone spend extra time optimising your software or do you have them produce more functionality
Depends. In general, I'd rather have devs optimize the software rather than adding new features just for the sake of change.
I don't use most of the new features in macOS, Windows, or Android. I mostly want an efficient environment to run my apps and security improvements. I'm not that happy about many of the improvements in macOS (eg the settings app).
Same with design software. I don't use most of the new features introduced by Adobe. I'd be happy using Illustrator or Photoshop from 10 years ago. I want less bloat, not more.
I also do audio and music production. Here I do want new features because the workflow is still being improved but definitely not at the cost of efficiency.
Regarding code editors I'm happy with VSCode in terms of features. I don't need anything else. I do want better LSPs but these are not part of the editor core. I wish VSCode was faster and consumed less memory though.
Efficiency is critical to my everyday life. For example, before I get up from my desk to grab a snack from the kitchen, I'll bring any trash/dishes with me to double the trip's benefits. I do this kind of thing often.
Optimizing software has a similar appeal. But when the problem is "spend hours of expensive engineering time optimizing the thing" vs "throw some more cheap RAM at it," the cheaper option will prevail. Sometimes, the problem is big enough that it's worth the optimization.
The market will decide which option is worth pursuing. If we get to a point where we've reached diminishing returns on throwing hardware at a problem, we'll optimize the software. Moore's Law may be slowing down, but evidently we haven't reached that point yet.
Ultimately it's a demand problem. If consumers demanded more performant software, they would pay a premium for it. However, the opposite is more true: they would prefer an even less performant version if it came with a cheaper price tag.
You have just explained how enshitification works.
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.
Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is that it's the most likely path to being able to _keep_ adding features, so as not to end up as the less feature-rich offering.
People will describe the fast and elegant solution with great reviews, praising how good it feels to use, which might lead people to think that it's an important aspect. But in the end, they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow, frustrating, buggy mess if it has the critical feature they need.
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming into using the next Windows version. If users were left to decide for themselves, many would never have upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but definitely not users.
Every time I offer alternatives to slow hardware, people find a missing feature that makes them stick to what they're currently using. Other times the features are there but the buttons for it are in another place and people don't want to learn something new. And that's for free software, with paid software things become even worse because suddenly the hours they spend on loading times is worthless compared to a one-time fee.
Complaining about slow software happens all the time, but when given the choice between features and performance, features win every time. Same with workflow familiarity; you can have the slowest, most broken, hacked together spreadsheet-as-a-software-replacement mess, but people will stick to it and complain how bad it is unless you force them to use a faster alternative that looks different.
Every software you use has more bloat than useful features? Probably not. And what's useless to one user might be useful to another.
No way.
You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
If people could, no one would ever upgrade anything anymore. Look at how hard MS has to work to force anyone to upgrade. I have never heard of anyone who wanted a new version of Windows, Office, Slack, Zoom, etc.
This is also why everything (like Photoshop) is being forced into the cloud. The vast majority of people don't want the new features that are being offered. Including buyers at businesses. So the answer to keep revenue up is to force people to buy regardless of what features are being offered or not.
> You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
I think this is more a consumer perspective than a B2B one. I'm thinking about the business case, i.e. businesses purchase software (or have bespoke software developed). Then they pay for fixes/features/improvements. There is often direct communication between the buyer and the developer (whether it's off-the-shelf, in-house, or made to spec). I'm in this business and the dialog is very short: "Great work adding feature A. We want feature B too now. And oh, the users say the software is also a bit slow, can you make it go faster?" Me: "Do you want feature B or faster first?" Them (always): "Oh, feature B. That saves us man-weeks every month." Then that goes on for feature C, D, E, ... Z.
In this case, I don't know how frustrated the users are, because the customer is not the user - it's the users' managers.
In the consumer space, the user is usually the buyer. That's one huge difference. You can choose the software that frustrates you the least, perhaps the leanest one, and instead have to do a few manual steps (e.g. choose vscode over vs, which means less bloated software but also many fewer features).
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So, the "features vs performance" decision is also a signal about where the developers' priorities lie: adding new users or keeping old ones happy. So, it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
Unfortunately, bloated software passes the costs to the customer and it's hard to evaluate the loss.
Except your browser taking 180% of available RAM, maybe.
By the way, the world could also have some bug free software, if anyone could afford to pay for it.
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for tabs that aren't currently open is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient, but that we simply push them to the breaking point with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool but it also dated in a matter of a few years. One moment we were running doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II pro. I actually had the Riva 128 as my first GPU, which was one of the first products that Nvidia shipped running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
> Time is the one thing that isn't cheap here.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
> The hardware is dirt cheap.
Maybe to you.
Meanwhile plenty of people are living paycheck-to-paycheck and literally cannot afford a phone, let alone a new phone and computer every few years.
> The hardware is dirt cheap.
It's not, because you multiply that 100% extra CPU time by all of an application's users and only then you come to the real extra cost.
And if you want to pick on "application", think of the widely used libraries and how much any non optimization costs when they get into everything...
Your whole reply is focused at business level but not everybody can afford 32GB of RAM just to have a smooth experience on a web browser.
> The hardware is dirt cheap. Programmers aren't cheap.
That may be fine if you can actually improve the user experience by throwing hardware at the problem. But in many (most?) situations, you can't.
Most of the user-facing software is still single-threaded (and will likely remain so for a long time). The difference in single-threaded performance between CPUs in wide usage is maybe 5x (and less than 2x for desktop), while the difference between well optimized and poorly optimized software can be orders of magnitude easily (milliseconds vs seconds).
And if you are bottlenecked by network latency, then the CPU might not even matter.
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with a Pentium II and a 3dfx graphics card. And yet it demands so much more - not a lot by today's standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
It is made in Lua using LÖVE (love2d). That helped the developers, but it comes at a cost in minimum requirements (even if they aren't much for a game released in 2024).
One way to think about it is: If we were coding all our games in C with no engine, they would run faster, but we would have far fewer games. Fewer games means fewer hits. Odds are Balatro wouldn't have been made, because those developer hours would've been allocated to some other game which wasn't as good.
The game has been ported to the Switch, and it does run slow when you do big combos. You can feel it visually, to the point that it's a bit annoying.
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops from the previous five years (roughly since Nehalem) had plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500 GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more. I am still convinced that's true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can it even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux), then there's a solid chance that the amount of bloat, and especially the move to online-only spreadsheets, would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16 Mbit asymmetric DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable at 120 Mbit for $500/mo. 120 Mbit is barely enough to get by with an online-only spreadsheet, and 16 Mbit definitely isn't. But worse: if the internet goes down, the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
[0]: https://news.ycombinator.com/item?id=43971960
The world DOES run on older hardware.
How new do you think the CPU in your bank ATM or car's ECU is?
Well, I know the CPU in my laptop is already over 10 years old and still works well enough for everything I do.
My daily drivers at home are an i3-540 and and Athlon II X4. Every time something breaks down, I find it much cheaper to just buy a new part than to buy a whole new kit with motherboard/CPU/RAM.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
Some of it does.
The chips in everyone's pockets do a lot of compute and are relatively new, though.
I'd be willing to bet that even a brand new iPhone has a surprising number of reasonably old pieces of hardware for Bluetooth, wifi, gyroscope, accelerometer, etc. Not everything in your phone changes as fast as the CPU.
There's a decent chance something in the room you're in right now is running an 8051 core.
Sure, if you think the world consists of cash transactions and whatever a car needs to think about.
If we're talking numbers, there are many, many more embedded systems than general-purpose computers. And these are mostly built on ancient process nodes compared to the cutting edge we have today; the shiny octa-cores in our phones are supported by a myriad of ancillary chips that are definitely not cutting edge.
Doom can run on Apple's Lightning to HDMI adapter.
A USB charger is more powerful than the Apollo Guidance Computer: https://web.archive.org/web/20240101203337/https://forresthe...
Powerplants and planes still run on 80s hardware.
Modern planes do not, and many older planes have been retrofitted, in whole or in part, with more modern computers.
Some of the specific embedded systems (like the sensors that feed back into the main avionics systems) may still be using older CPUs if you squint, but it's more likely a modern version of those older designs.
Related: I wonder what CPU Artemis/Orion is using.
IBM PowerPC 750X apparently, which was the CPU the Power Mac G3 used back in the day. Since it's going into space it'll be one of the fancy radiation-hardened versions which probably still costs more than your car though, and they run four of them in lockstep to guard against errors.
https://www.eetimes.com/comparing-tech-used-for-apollo-artem...
Sorry, I don't want to go back to a time when I could only edit ASCII in a single font.
Do I like bloat? No. Do I like more software rather than less? Yes! Unity and Unreal are less efficient than custom engines, but there are 100x more titles because of that tradeoff: efficiency of the CPU vs. efficiency of creation.
The same is true for web-based apps (both online and offline). Software ships 10x faster as a web page than as a native app for Windows/Mac/Linux/Android/iOS. For most things, that's all I need. Even for native-like apps, I use photopea.com over Photoshop/GIMP/Krita/Affinity etc. because it's available everywhere, no matter which machine I use or whose machine it is. Is it less efficient running in JS in the browser? Probably. Do I care? No.
VSCode, now the most popular editor in the world (IIRC), is web-tech. This has so many benefits. For one, it's been integrated into hundreds of websites, so the editor I use is available in more places. It uses tech more people know, so there are more extensions that do more things. Also, arguably because of JS's speed issues, it encouraged the creation of the Language Server Protocol. Before this, every editor rolled its own language support. The LSP is arguably way more bloat than doing it directly in the editor. I don't care. It's a great idea and way more flexible: any language can write one LSP server and then all editors get support for that language.
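For the unfamiliar: under the hood LSP is just JSON-RPC messages framed with a Content-Length header, usually over the server's stdin/stdout. A rough Python sketch of what a single request looks like on the wire (the file URI and position are made up for illustration):

    import json

    # A minimal "textDocument/hover" request; the URI and position are illustrative only.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "textDocument/hover",
        "params": {
            "textDocument": {"uri": "file:///tmp/example.py"},
            "position": {"line": 10, "character": 4},
        },
    }

    body = json.dumps(request)
    # LSP frames every message as a Content-Length header, a blank line, then the JSON body.
    print(f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n{body}", end="")

Any editor that can speak that framing gets hover, completion, go-to-definition, etc. from any server that implements the same methods.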
Is there, or could we make, an iPhone-like device that runs 100x slower than conventional phones but uses much less energy, so it powers itself on solar? It would be good for the environment and useful in survival situations.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
There's plenty of hardware on the secondary market (resale) that's only 2-3x slower than pristine new primary-market devices. It's cheap, it's reuse, and it helps people save in a hyper-consumerist society. The common complaint is that it no longer runs today's bloated software. And I don't think we can make non-bloated software, for a variety of reasons.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as recommended/intended way to play the game). This is due to hyper-bloating of software, and severe downskilling of game programmers in the industry to cut costs. The players don't often see all this, and they think the latest game is truly the greatest, and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived on PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades), and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
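Back-of-the-envelope on that tradeoff, treating the figures above as rough assumptions (a sketch, not an industry analysis):

    # All numbers are assumptions taken from the comment above.
    upgrade_cost_net = 400       # approx. net cost of a one-generation GPU upgrade
    extra_price_per_game = 30    # the hypothetical price bump gamers reject
    break_even = upgrade_cost_net / extra_price_per_game
    print(f"The $30/game bump only overtakes the GPU upgrade after ~{break_even:.0f} games")

So unless someone buys well over a dozen full-priced games per upgrade cycle, the hardware is the bigger bill, under these assumptions.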
It's an example of Scott Alexander's Moloch where it's unclear who could end this race to the bottom. Maybe a culture shift could, we should perhaps become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
One only needs to look at Horizon: Zero Dawn to note that the truth of this is deeply uneven across the games industry. World streaming architectures are incredible technical achievements. So are moddable engines. There are plenty of technical limits being pushed by devs, it's just not done at all levels.
> Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Producing games doesn't cost anything on a per-unit basis. That's not at all the reason for low quality.
Games could cost $1000 per copy and big game studios (who have investors to worry about) would still release buggy slow games, because they are still going to be under pressure to get the game done by Christmas.
> Or could we make a phone that runs 100x slower but is much cheaper?
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU than a 6502; there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If Intel fired all their R&D engineers who were not focused on reducing manufacturing costs, they could greatly reduce the price of their CPUs - until AMD released a next generation that was much better. (This is more or less what Henry Ford did with the Model T: he reduced costs every year until his competition, by adding features, got enough better that he couldn't sell his cars.)
I absolutely agree with what you are saying.
Yes, it's possible and very simple: lower the frequency (which dramatically lowers power usage), use fewer cores, fewer threads, etc. The problem is, we don't know what we'll need. What if a great new app comes out (think LLMs)? You'll be complaining your phone is too slow to run it.
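A minimal sketch of why lowering the clock buys so much: dynamic power scales roughly with C·V²·f, and a lower frequency usually allows a lower voltage too. The constants below are purely illustrative, not any real chip:

    # Dynamic CPU power ~ capacitance * voltage^2 * frequency (illustrative numbers only).
    def dynamic_power(capacitance, voltage, freq_hz):
        return capacitance * voltage ** 2 * freq_hz

    baseline = dynamic_power(1e-9, 1.0, 3e9)   # "full speed"
    throttled = dynamic_power(1e-9, 0.7, 1e9)  # lower clock lets you drop the voltage too
    print(f"throttled power is ~{throttled / baseline:.0%} of baseline")  # roughly 16%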
Meanwhile on every programmer's 101 forum: "Space is cheap! Premature optimization is the root of all evil! Dev time > runtime!"
Exactly. Yes, I understand the meaning behind it, but the line gets drummed into developers everywhere, the subtleties and real meaning are lost, and every optimisation- or efficiency-related question on Stack Overflow is met with cries of "You're doing it wrong! Don't ever think about optimising unless you're certain you have a problem!" Pushing it to such extremes inevitably leads to devs not even thinking about making their software efficient, especially when they develop on high-end hardware and don't test on anything slower.
Perhaps a classic case of a guideline, intended to help, causing harm when it's religiously applied at all times instead of being fully understood.
A simple example comes to mind, of a time I was talking to a junior developer who thought nothing of putting his SQL query inside a loop. He argued it didn't matter, because he couldn't see how running many queries instead of one would make any difference in that (admittedly simple) case. To me, it betrays a manner of thinking: it would never have occurred to me to write it the slower way, because the faster way is no more difficult or time-consuming to write. But no, they'll just point to the mantra of "premature optimisation" and keep doing it the slow way, including in all the cases where it unequivocally does make a difference.
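A minimal sketch of that exact pattern, using SQLite purely for illustration - obviously not the junior dev's actual code, just the shape of it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                     [(i, f"user{i}") for i in range(1000)])
    wanted_ids = list(range(500))

    # The "query inside a loop" version: one round trip per id.
    names_slow = [conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
                  for i in wanted_ids]

    # The single-query version: same result, one round trip.
    placeholders = ",".join("?" * len(wanted_ids))
    rows = conn.execute(f"SELECT id, name FROM users WHERE id IN ({placeholders})",
                        wanted_ids).fetchall()
    names_fast = [name for _id, name in sorted(rows)]

    assert names_slow == names_fast

Against a local in-memory database the difference is small; against a real database over a network, the loop version pays a round trip per row and falls off a cliff.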
HN: Yeah! We should be go back to writing optimized code that fully uses the hardware capabilities!
Also HN: Check this new AI tool that consumes 1000x more energy to do the exact same thing we could already do, but worse and with no reproducibility
Google the goomba fallacy
Wirth's Law:
>software is getting slower more rapidly than hardware is becoming faster.
https://en.wikipedia.org/wiki/Wirth%27s_law
Unfortunately, unlike with, say, Moore's Law, there is a distinct lack of scientific investigation or rigorous analysis of the phenomenon called "Wirth's Law", despite many anecdotal examples. Reasoning as if it were literally true leads to absurd conclusions that do not correspond with any reality I can observe, so I am tempted to say that, as a broad phenomenon, it is obviously false, and that anyone who suggests otherwise is being disingenuous. For it to be true there would have to have been no progress whatsoever in computing-enabled technologies, yet the real manifestations of the increase in computing resources, and the exploitation thereof, permeate, alter and invade almost every aspect of society and of our personal day-to-day lives at a constantly increasing rate.
Call it the X-Windows factor --- software gets more capable/more complex and there's a constant leap-frogging (new hardware release, stuff runs faster, software is written to take advantage of new hardware, things run more slowly, software is optimized, things run more quickly).
The most striking example of this was Mac OS X Public Beta --- which made my 400MHz PowerPC G3 run at about the same speed as my 25 MHz NeXT Cube running the quite similar OPENSTEP 4.2 (just OS X added Java and Carbon and so forth) --- but each iteration got quicker until by 10.6.8, it was about perfect.
Related: https://duskos.org/
Oh man, that's lovely. Awesome project!
.NET has made great strides on this front in recent years. Newer versions optimize the CPU and RAM usage of lots of fundamentals, and have introduced new constructs to reduce allocations and CPU time in new code. One might argue they were only able to because things started out so bad, but it's worth looking into if you haven't in a while.
Really no notes on this. Carmack hit both sides of the coin:
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
The idea of a hand-me-down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach, and then the 1nm process is basically the end of the journey. I expect 'audiophile-grade' PCs in the 2030s, and then PCs become works of art, furniture, investments, etc., because they have nowhere left to go.
https://en.wikipedia.org/wiki/2_nm_process
https://en.wikipedia.org/wiki/International_Roadmap_for_Devi...
The increasing longevity of computers has been impressing me for about 10 years.
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4k 360 degree video (about which: fine). It's a M1 Macbook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
This always saddens me. We could have things instant, simple, and compute & storage would be 100x more abundant in practical terms than it is today.
It's not even a trade off a lot of the time, simpler architectures perform better but are also vastly easier and cheaper to maintain.
We just lack expertise I think, and pass on cargo cult "best practices" much of the time.
Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
Edit: 1. Assuming open source doesn't persevere
My phone isn't getting slower, but rather the OS running on it becomes less efficient with every update. Shameful.
Developers over 50ish (like me) grew up at a time when CPU performance and memory constraints affected every application. So you had to always be smart about doing things efficiently with both CPU and memory.
Younger developers have machines that are so fast they can be lazy with algorithms and do everything 'brute force': searching through an array every time when a hashmap would've been 10x faster, or using all kinds of "list.find().filter().any().every()" chaining nonsense when it's often smarter to do ONE loop and do several different things inside it (a quick sketch of the difference is below).
So younger devs only optimize once they NOTICE the code running slow. That means they're ALWAYS right on the edge of overloading the CPU, just thru bad coding. In other words, their inefficiencies will always expand to fit available memory, and available clock cycles.
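A rough sketch of the kind of thing being described (sizes and names are made up; the exact ratio depends on the data):

    import time

    records = [{"id": i, "name": f"user{i}"} for i in range(10_000)]
    lookups = list(range(0, 10_000, 10))

    # Brute force: scan the whole list for every lookup - O(n) per query.
    t0 = time.perf_counter()
    found_scan = [next(r for r in records if r["id"] == i) for i in lookups]
    t_scan = time.perf_counter() - t0

    # Build a hashmap once, then do O(1) lookups.
    t0 = time.perf_counter()
    by_id = {r["id"]: r for r in records}
    found_map = [by_id[i] for i in lookups]
    t_map = time.perf_counter() - t0

    assert found_scan == found_map
    print(f"scan: {t_scan:.4f}s, hashmap: {t_map:.4f}s")  # the gap grows with data size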
I'm not much into retro computing, but it amazes me what people are pulling out of dated hardware.
Doom on the Amiga, for example (many consider Doom the main factor in the Amiga's demise). Thirty years of optimization and it finally arrived.
I generally believe that markets are somewhat efficient.
But somehow, we've ended up with the current state of Windows as the OS that most people use to do their job.
Something went terribly wrong. Maybe the market is just too dumb, maybe it's all the market distortions that have to do with IP, maybe it's the monopolistic practices of Microsoft. I don't know, but in my head, no sane civilization would think that Windows 10/11 is a good OS that everyone should use to optimize our economy.
I'm not talking only about performance, but about the general crappiness of the experience of using it.
I wonder if anyone has calculated the additional planet heating generated by crappy e.g. JS apps or useless animations
Often, this is presented as a tradeoff between the cost of development and the cost of hardware. However, there is a third leg of that stool: the cost of end-user experience.
When you have a system that is sluggish to use because you skimped on development, it is often the case that you cannot make it much faster no matter how expensive the hardware you throw at it. Either there is a single-threaded critical path, so you hit the limit of what one CPU can do (and adding more does not help), or you hit the laws of physics, such as network latency, which is ultimately bound by the speed of light.
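The first ceiling is just Amdahl's law. A quick sketch, assuming (for illustration only) that 30% of the work sits on that single-threaded critical path:

    # Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / n_workers)
    # The 30% serial fraction below is an assumption for illustration.
    def amdahl_speedup(serial_fraction, n_workers):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

    for n in (2, 4, 16, 1024):
        print(f"{n:5d} workers -> {amdahl_speedup(0.30, n):.2f}x speedup")
    # Tops out near 1/0.30 ~ 3.3x, no matter how much hardware you throw at it.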
And even when the situation could be improved by throwing more hardware at it, this is often done only to the extent to make the user experience "acceptable", but not "great".
In either case, the user experience suffers and each individual user is less productive. And since there are (usually) orders of magnitude more users than developers, the total damage done can be much greater than the increased cost of performance-focused development. But the cost of development is "concentrated" while the cost of user experience is "distributed", so it's more difficult to measure or incentivize for.
The cost of poor user experience is a real cost, is larger than most people seem to think and is non-linear. This was observed in the experiments done by IBM, Google, Amazon and others decades ago. For example, take a look at:
The Economic Value of Rapid Response Time https://jlelliotton.blogspot.com/p/the-economic-value-of-rap...
He and Richard P. Kelisky, Director of Computing Systems for IBM's Research Division, wrote about their observations in 1979, "...each second of system response degradation leads to a similar degradation added to the user's time for the following [command]. This phenomenon seems to be related to an individual's attention span. The traditional model of a person thinking after each system response appears to be inaccurate. Instead, people seem to have a sequence of actions in mind, contained in a short-term mental memory buffer. Increases in SRT [system response time] seem to disrupt the thought processes, and this may result in having to rethink the sequence of actions to be continued."
Z+6 months: Start porting everything to Collapse OS
https://collapseos.org/
Perfect parallel to the madness that is AI. With even modest sustainability incentives, the industry wouldn't have pulverized a trillion dollars training models nobody uses, just to dominate the weekly attention fight and fundraising game.
Evidence: DeepSeek
Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.
Imagine software engineering were like real engineering, where engineers had licensing and faced fines or even prison for negligence. How much of the modern world's software would be tolerated?
Very, very little.
If engineers had handled the Citicorp Center the way software engineers handle things, the fix would have been to update the documentation in Confluence to say "do not expose the building to winds" and then shrug when it collapsed.
"If this country built bridges the way it builds [information] systems, we'd be a nation run by ferryboats." --Tim Bryce
So Metal and DirectX 12 capable GPUs are already hard requirements for their respective platforms, and I anticipate that Wayland compositors will start depending on Vulkan eventually.
That will make pre-Broadwell laptops generally obsolete and there aren't many AMD laptops of that era that are worth saving.
Servers don't need a graphical environment, but the inefficiencies of older hardware are much more acute in this context compared to personal computing, where the full capabilities of the hardware are rarely exploited.
Consider UX:
Click the link and contemplate while X loads. First, the black background. Next it spends a while and you're ready to celebrate! Nope, it was loading the loading spinner. Then the pieces of the page start to appear. A few more seconds pass while the page is redrawn with the right fonts; only then can you actually scroll the page.
Having had some time to question your sanity for clicking, you're grateful to finally see what you came to see. So you dwell 10x as long, staring at a loaded page and contemplating the tweet. You dwell longer to scroll and look at the replies.
How long were you willing to wait for data you REALLY care about? 10-30 seconds; if it's important enough you'll wait even longer.
Software is as fast as it needs to be to be useful to humans. Computer speed doesn't matter.
If the computer goes too fast it may even be suspected of trickery.
The priority should be safety, not speed. I prefer an e.g. slower browser or OS that isn't ridden with exploits and attack vectors.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
Unfortunately currently the priority is neither.
The goal isn't optimized code, it is utility/value prop. The question then is how do we get the best utility/value given the resources we have. This question often leads to people believing optimization is the right path since it would use fewer resources and therefore the value prop would be higher. I believe they are both right and wrong. For me, almost universally, good optimization ends up simplifying things as it speeds things up. This 'secondary' benefit, to me, is actually the primary benefit. So when considering optimizations I'd argue that performance gains are a potential proxy for simplicity gains in many cases so putting a little more effort into that is almost always worth it. Just make sure you actually are simplifying though.
You're just replacing one favorite solution with another. Would users want simplicity at the cost of performance? Would they pay more for it? I don't think so.
You're right that the crux of it is that the only thing that matters is pure user value and that it comes in many forms. We're here because development cost and feature set provide the most obvious value.
100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom to do so which just contributes to the cycle of needing better hardware.
Software development has been commoditized and is directed by MBAs and others who don't see it as a craft. The need for fast project execution outranks the craft of programming; hence, the code is bug-riddled and slow. There are some niche areas (vintage, PICO-8, Arduino...) where people can still practise the craft, but that's just a hobby now. When this topic comes up I always think about Tarkovsky's Andrei Rublev, the artist's struggle.
If only the tooling had kept up. We went from RAD tools that built you fully native GUIs to abandoning ship and letting Electron take over. Anyone else have 40 web browsers installed, each some Chromium hack?
"Optimise" is never a neutral word.
You always optimise FOR something, at the expense of something else.
And that can, and frequently should, be lean resource consumption - but it can come at a price.
Which might be one or more of: Accessibility. Full internationalisation. Integration paradigms (thinking about how modern web apps bring UI and data elements in from third parties). Readability/maintainability. Displays that can actually represent text correctly at any size without relying on font hinting hacks. All sorts of subtle points around UX. Economic/business model stuff (megabytes of cookie BS on every web site, looking at you right now.) Etc.
I work on a laptop from 2014. An i7 4xxx with 32 GB RAM and 3 TB SSD. It's OK for Rails and for Django, Vue, Slack, Firefox and Chrome. Browsers and interpreters got faster. Luckily there was pressure to optimize especially in browsers.
Code bloat: https://en.wikipedia.org/wiki/Code_bloat
Software bloat > Causes: https://en.wikipedia.org/wiki/Software_bloat#Causes
Program optimization > Automated and manual optimization: https://en.wikipedia.org/wiki/Program_optimization#Automated...
The world will seek out software optimization only after hardware reaches its physical limits.
We're still in Startup Land, where it's more important to be first than it is to be good. From that point onward, you have to make a HUGE leap, and your first-to-market competitor needs to make some horrendous screwups, in order for you to overtake them.
The other problem is that some people still believe that the masses will pay more for quality. Sometimes, good enough is good enough. Tidal didn't replace iTunes or Spotify, and Pono didn't exactly crack the market for iPods.
He mentions the rate of innovation would slow down, which I agree with. But I think that even a 5% slower innovation rate would delay the optimizations we can make - or even our figuring out what needs optimizing - over centuries of computer usage, and in the end we'd be less efficient because we'd be slower at finding efficiencies. A low adoption rate of new efficiencies is worse than a high adoption rate of old efficiencies, is I guess how to phrase it.
If Cadence, for example, releases every feature 5 years later because they spend more time optimizing it - it's software, after all - how much will that delay semiconductor innovation?
I've installed macOS Sequoia on 2015 iMacs with 8 GB of RAM and it runs great. More than great, actually.
Linux on 10-15 year old laptops runs well too; if you beef up the RAM and SSD, it's actually really good.
So for everyday stuff we can, and do, run on older hardware.
258M! A quarter of a gigabyte for a HelloWorld example. Sheesh.
Carmack is right to some extent, although I think it’s also worth mentioning that people replace their computers for reasons other than performance, especially smartphones. Improvements in other components, damage, marketing, and status are other reasons.
It’s not that uncommon for people to replace their phone after two years, and as someone who’s typically bought phones that are good but not top-of-the-line, I’m skeptical all of those people’s phones are getting bogged down by slow software.
This is Carmack's favorite observation over the last decade+. It stems from what made him successful at id. The world's changed since then. Home computers are rarely compute-bound, the code we write is orders of magnitude more complex, and compilers have gotten better. Any wins would come at the cost of a massive investment in engineering time or degraded user experience.
Minimalism is excellent. As others have mentioned, using languages that are more memory-safe (assuming the language is written in such a way) may be worth the additional complexity cost.
But surely, with burgeoning AI use, efficiency savings are being gobbled up by the brute-force nature of it.
Maybe shared model training and the likes of Hugging Face can keep different groups from reinventing the same AI wheel, each spending more resources than a cursory search of existing work would have cost.
At the last DEF CON (32), the badge could run full Doom on a puny Pico 2 microcontroller [1].
[1] Running Doom on the Raspberry Pi Pico 2: A Def Con 32 Badge Hack:
https://shop.sb-components.co.uk/blogs/posts/running-doom-
We have customers with thousands of machines that are still using spinning mechanical 5400 RPM drives. The machines are unbelievably slow and only get slower with every single update; it's nuts.
Tell me about it. Web development has only become fun again at my place since upgrading from Intel Mac to M4 Mac.
Just throw in Slack, a VSCode editor in Electron, a Next.js stack, 1-2 Docker containers, and a browser, and you need top-notch hardware to run it fluidly (Apple Silicon is amazing, though). And I'm doing nothing fancy.
Chat, an editor in a browser, and Docker don't seem like the most efficient combination when put all together.
Well obviously. And there would be no wars if everybody made peace a priority.
It's obvious for both cases where the real priorities of humanity lie.
8bit/16bit demo scene can do it, but that's true dedication.
I think optimizations only occur when the users need them. That is why there are so many tricks for game engine optimization and compiling speed optimization. And that is why MSFT could optimize the hell out of VSCode.
People simply do not care about the rest. So there will be as little money spent on optimization as possible.
When it's free, it doesn't need to be performant unless the free competition is performant.
Reminded me of this interesting thought experiment
https://x.com/lauriewired/status/1922015999118680495
Sadly software optimization doesn't offer enough cost savings for most companies to address consumer frustration. However, for large AI workloads, even small CPU improvements yield significant financial benefits, making optimization highly worthwhile.
I already run on older hardware, and most people could if they chose to - I haven't bought a new computer since 2005. Perhaps the OS can adopt a "serverless" model where heavy computational tasks are offloaded, as long as there is sufficient bandwidth.
I'm already moving in this direction in my personal life. It's partly nostalgia, but it's partly practical. It's just that work requires working with people who only use what HR and IT foist on them, so I need a separate machine for that.
We are squandering bandwidth similarly and that hasn’t increased as much as processing power.
Well, yes, I mean, the world could run on less of all sorts of things, if efficient use of those things were a priority. It's not, though.
Well, it is a point. But also remember the horrors of the monoliths he made. Like in Quake (1/2/3?), where you have hacks like "if the level name contains XYZ, then do this magic". I think the conclusion might be wrong.
Feels like half of this thread didn't read or ignored his last line: "Innovative new products would get much rarer without super cheap and scalable compute, of course."
Works without Javascript:
https://nitter.poast.org/ID_AA_Carmack/status/19221007713925...
This is the story of life in a nutshell. It's extremely far from optimized, and that is the natural way of all that it spawns. It almost seems inelegant to attempt to "correct" it.
1. Consumers are attracted to pretty UIs and lots of features, which pretty much drives inefficiency.
2. The consumers that have the money to buy software/pay for subscriptions have the newer hardware.
It could also run on much less current hardware if efficiency were a priority. Then comes the AI bandwagon, and everyone is buying loads of new equipment to keep up with the Joneses.
The world runs on the maximization of ( - entropy / $) and that's definitely not the same thing as minimizing compute power or bug count.
Where lack of performance costs money, optimization is quite invested in. See PyTorch (Inductor CUDA graphs), Triton, FlashAttention, Jax, etc.
How much of the extra power has gone to graphics?
Most of it?
The world could run on older hardware if rapid development did not also make money.
Rapid development is creating a race towards faster hardware.
Yes, but it is not a priority. GTM is the priority. Make money machine go brrr.
Probably, but we'd be in a pretty terrible security place without modern hardware based cryptographic operations.
isn't this Bane's rule?:
https://news.ycombinator.com/item?id=8902739
Wow. That's a great comment. Been on a similar path myself lately. Thanks for posting.
This is a double-edged sword, but I think what people are glossing over in the compute-power topic is power efficiency. One thing I struggle with when homelabbing old gaming equipment is the power efficiency of newer hardware. Hardly a fair comparison, but I can choose to recycle my Ryzen 1700X with a 2080 Ti as a media server that will probably consume a few hundred watts, or I can get an M1 that sips power. The double-edged part is that the Ryzen system becomes considerably more power-efficient running Proxmox or Ubuntu Server vs. a Windows client. As a society we choose the niche we want to leverage, and it swings like economics: strapped for cash, we build more efficient code; with no limits, we buy the horsepower to meet the need.
Carmack is a very smart guy and I agree with the sentiment behind his post, but he's a software guy. Unfortunately for all of us hardware has bugs, sometimes bugs so bad that you need to drop 30-40% of your performance to mitigate them - see Spectre, Meltdown and friends.
I don't want the crap Intel has been producing for the last 20 years; I want the ARM, RISC-V and AMD CPUs from 5 years in the future. I don't want a GPU by Nvidia that comes with buggy drivers and opaque firmware updates; I want the open-source GPU that someone is bound to make in the next decade. I'm happy 10 Gb switches are becoming a thing in the home; I don't want the 100 Mb hubs from the early 2000s.
My professor back in the day told me that "software is eating hardware": no matter how advanced hardware gets, software will utilize that advancement.
I'm going to be pretty blunt. Carmack gets worshiped when he shouldn't be. He has several bad takes in terms of software. Further, he's frankly behind the times when it comes to the current state of the software ecosystem.
I get it, he's legendary for the work he did at id software. But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
He seems to have a view of the state of software that's stuck in the past: interpreted stuff is slow, networks are slow, databases are slow, as if everyone were still working with Pentium 1s and 2 MB of RAM.
None of these are what he thinks they are. CPUs are wickedly fast. Interpreted languages are now within a single-digit multiple of natively compiled languages. RAM is cheap and plentiful. Databases and networks are insanely fast.
Good on him for sharing his takes, but really, he shouldn't be considered a "thought leader". I've noticed his takes have been outdated for over a decade.
I'm sure he's a nice guy, but I believe he's fallen into a trap that many older devs do. He's overestimating what the costs of things are because his mental model of computing is dated.
> Interpreted languages are now within a single digit multiple of natively compiled languages.
You have to be either clueless or delusional if you really believe that.
Let me specify that what I'm calling interpreted (and I'm sure Carmack agrees) is languages with a VM and JIT.
The JVM and Javascript both fall into this category.
The proof is in the pudding. [1]
The JS version that ran in 8.54 seconds [2] did not use any sort of fancy escape hatches to get there. It's effectively the naive solution.
But if you look at the winning C version, you'll note that it went all out, pulling every single SIMD trick in the book to win [3]. And even with all that, the JS version is still only ~4x slower (a single-digit multiple).
And the C++ version, which is a near-direct translation [4] and isn't using all the SIMD tricks in the book, ran in 5.15 seconds, bringing the multiple down to ~1.7x.
Perhaps you weren't thinking of these JIT languages as being interpreted. That's fair. But if you did, you need to adjust your mental model of what's slow. JITs have come a VERY long way in the last 20 years.
I will say that languages like python remain slow. That wasn't what I was thinking of when I said "interpreted". It's definitely more than fair to call it an interpreted language.
[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[2] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[3] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[4] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
A simple do-nothing for loop in JavaScript via my browser's web console will run at hundreds of MHz. Single-threaded, implicitly working in floating-point (JavaScript being what it is) and on 2014 hardware (3GHz CPU).
> But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
Why isn't it?
> Interpreted stuff is slow
Well, it is. You can immediately tell the difference between most C/C++/Rust/... programs and Python/Ruby/... Either because they're implicitly faster (nature) or they foster an environment where performance matters (nurture), it doesn't matter, the end result (adult) is what matters.
> networks are slow
Networks are fast(er), but they're still slow for most stuff. Gmail is super nice, but it's slower than almost any desktop email program that doesn't have legacy baggage stretching back 2-3 decades.
> Why isn't it?
I didn't say it isn't good. I'm saying that Carmack wasn't convinced of the utility until much later, after the entire industry had adopted it. 5 years is wrong (time flies); it was 2011 when he made statements about how it's actually a good thing.
> Well, it is.
Not something I'm disputing. I'm disputing the degree of slowness, particularly in languages with good JITs such as Java and Javascript. There's an overestimation on how much the language matters.
> Gmail is super nice, but it's slower than almost desktop email program
Now that's a weird comparison.
A huge portion of what makes Gmail slow is that it's a gigantic and, at this point, somewhat dated JavaScript application.
Look, not here to defend JS as the UX standard of modern app dev, I don't love it.
When I talk about slow networks, I mainly mean backend servers talking to one another. Because of the speed of light, there are always going to be major delays moving data out of the datacenter to a consumer device. Within the datacenter, however, things will be pretty dang nippy.
As long as sufficient numbers of wealthy people are able to wield their money as a force to shape society, this will always be the outcome.
Unfortunately, in our current society, a rich group of people with a very restricted intellect, abnormal psychology, perverse views on human interaction, and a paranoid delusion that kept normal human love and compassion beyond their grasp were able to shape society to their dreadful imagination.
Hopefully humanity can make it through these times, despite these hateful aberrations doing their best to wield their economic power to destroy humans as a concept.
don't major cloud companies do this and then sell the gains as a commodity?
The world could run on old hardware if you wiped your win10 machines and installed Linux instead of buying new win11 machines.
I mean, if you put win 95 on a period appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era
I have Windows 2000 with IIS (and some office stuff) installed on ESXi, for fun and nostalgia. It serves some static HTML pages within my local network. The host machine is some kind of i7 that is about 7-10 years old.
That machine is SOOOOOO FAST. I love it. And honestly, the tasks I was doing back then are identical to what I do today.
True for large corporations. But for individuals, the ability to put what was previously an entire stack into a script that doesn't call out to the internet will be a big win.
How many people are going to write and maintain shell scripts with 10+ curls? If we are being honest this is the main reason people use python.
Forcing the world to run on older hardware because all the newer stuff is used for running AI.
Very much so!
based
I'd much prefer Carmack to think about optimizing for energy consumption.
These two metrics often scale linearly.
Let's keep the CPU efficiency golf to Zachtronics games, please.
I/O is almost always the main bottleneck. I swear to god, 99% of developers out there only know how to measure the CPU cycles of their code, so that's the only thing they optimize for. Call me after you've seen jobs on your k8s clusters get slow because they're all using local disk inefficiently and wasting cycles waiting in queue for reads/writes. Or after your DB replication slows down to the point that you have to choose between breaking the mirror and no longer making money.
And older hardware consumes more power. That's the main driving factor behind server hardware upgrades: newer hardware lets you fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
There's servers and there's all of the rest of consumer hardware.
I need to buy a new phone every few years simply because the manufacturer refuses to keep updating it. Or they add progressively more computationally expensive effects that make my old hardware crawl. Or the software I use only supports the last 2 versions of macOS. Or Microsoft decides that your brand-new CPU is no good for Windows 11 because it lacks a TPM. Or god help you if you try to open our poorly optimized Electron app on your 5-year-old computer.
But Carmack is clearly talking about servers here. That is my problem -- the main audience is going to read this and think about personal compute.
All those situations you describe are also a choice made so that companies can make sales.
> I/O is almost always the main bottleneck.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary (a quick sketch below).
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
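One concrete flavor of "wasteful in the number of requests": N round trips where one would do. A hedged sketch - the endpoint and its batching parameter are hypothetical, just to show the shape:

    import requests  # plain HTTP/JSON assumed; "api.example.com" and its routes are made up

    BASE = "https://api.example.com"

    def fetch_items_one_by_one(ids):
        # N round trips: network latency is paid once per item.
        return [requests.get(f"{BASE}/items/{i}").json() for i in ids]

    def fetch_items_batched(ids):
        # One round trip: same data, latency paid once (assumes the API supports batching).
        resp = requests.get(f"{BASE}/items", params={"ids": ",".join(map(str, ids))})
        return resp.json()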
I/O _can_ be optimized. I know someone who had this as their fulltime job at Meta. Outside of that nobody is investing in it though.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking on your desktop scale. Our jobs are all memory hogs and I/O bound messes. Across all of the hardware we're buying we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because it's 10% faster when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention/utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.
I'm looking at our Datadog stats right now. It is 64% cpu 36% IO.
That's great for you. How much infrastructure do you run?