Comment by donatj

13 hours ago

I've come to the conclusion in the last couple years that being the guy who understands how the abstraction works under the hood is treated by companies as more of a liability than a virtue.

More and more places just want Jira tickets done fast instead of someone that's going to push back or question if this is the best way to build something. They want the thing, they don't care if it works well. They don't care if it's efficient. They want it now.

We've been moving to React, replacing an internal framework that's worked wonders for us and that we've been using for over a decade. The biggest part of the move is "hiring".

My general sense is that nobody understands how React works under the hood. The answer I get when I ask questions is generally just "don't worry about it".

Everything is giant, overbuilt, and terrible because most people never bothered to learn even a single level up from where they do most of their work. The people that do become unhirable. Everything takes hundreds or thousands of times more cycles and electricity than it should because people can't be bothered to understand what they're doing.

Well, if more React devs knew how it worked under the hood, they might choose something else [1] :-)

Jokes aside, if you don't need two-way data binding, using React frameworks pulls in a lot of crap that you never need.

The majority of web apps have no need for React.

---------

[1] I always joke that the reason I am atheist is not because I don't know much about your religion, it's because I know too much about your religion.

> They want the thing, they don't care if it works well. They don't care if it's efficient. They want it now.

That's because they don't know what to build that will be a successful product, so they essentially try to brute-force the question of "what to build" by trying different ideas quickly and seeing which one sticks. And in this quick iteration loop people just throw a bunch of stuff together to make something, and once that something gains traction they keep piling on top of that shaky foundation.

Hardware is cheap; human labor is not. Companies have figured out that the best way to extract money from customers is to give them something that barely works now, rather than something that works great later.

  • > Hardware is cheap; human labor is not.

    Especially true when you're paying for neither hardware nor labor.

    Writing inefficient client-side software, whether it's desktop or webshit, makes the customers / users pay for the hardware, and pay with their time.

  • Is it, though? Can we really keep saying that "hardware will always be cheaper than human labour" when RAM prices are soaring, GPUs are becoming prohibitively expensive, and we're looking at a probable chip shortage?

    I think the era of "poor software for fantastic hardware" is coming to an end.

    • RAM and GPUs are getting more expensive, but mostly for applications that require a lot of them, like AI. The hardware cost for regular applications has not vastly increased (especially when factoring in inflation). Spending 2x development time on a problem is often not worth it (or only worth it with large deployments).

      UI development is an even more special case here. The customer buys the machine which runs the code, not the company. So sadly "good enough" is the standard.

      One example for me here is the "switch product option" button on Amazon listings (e.g. switch green to blue color, smaller to larger model). On my phone this sometimes takes >5 seconds to properly load. Horribly optimised.

    • It’s not even close to being at an end. Hardware would need to increase in cost by hundreds or even thousands of times to materially change that calculation.

      Just as an example, the cost of one week of engineering time corresponds to tens of thousands of vCPU-hours, which is many years of CPU time.

      As such, it only ever makes business sense to optimize code either when it has bottlenecks that can’t be fixed by throwing hardware at it, or when it’s so inefficient that it can be sped up by several orders of magnitude.
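
      A rough, back-of-the-envelope check on that ratio (my numbers, not the parent's): at, say, $5,000 for a fully loaded engineer-week and roughly $0.04 per on-demand vCPU-hour, one week of engineering buys about 5,000 / 0.04 = 125,000 vCPU-hours, which is on the order of 14 years of a single vCPU running continuously.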

  • That's not true if you are on the cloud. Clumsily written software becomes really expensive to run.

> I've come to the conclusion in the last couple years that being the guy who understands how the abstraction works under the hood is treated by companies as more of a liability than a virtue.

This is one of the most alienating things about the modern software engineering industry. Someone who grew up just fucking around with computers since they were 5 is supposedly now on even footing with someone who took a 16 week bootcamp and a Claude subscription and has never seen a terminal before.

I was at a drum and bass show recently and talked to one of the other people there. It was obvious I didn't really listen to that much drum and bass as I couldn't name anybody except the most popular artists. You see peoples' reactions change slightly when they discover you are not really part of their music scene - you're an outsider, or a tourist, or even a poser. That's not even a problem, that's just the way subcultures are - you've either lived and breathed that way of life, or not.

What LLMs are doing is they are automating the manufacture of posers and cultural appropriators at scale - you don't really understand the nooks and crannies of this territory, you never actually lived on IRC or in the bash terminal - but you can sure wave around these oversimplified maps of the territory with all the back alleys and laneways missing, and use your pocket book of translated phrases to pose as a native.

> My general sense is that nobody understands how React works under the hood. The answer I get when I ask questions is generally just "don't worry about it".

The problem in software, it seems, is that we are losing the ability to distinguish between appropriators of computer geek culture and those who do "speak" programming languages natively. The bar has fallen so low that I can't even expect people to understand the difference between runtime and compile time. Anybody who brings up such advanced and esoteric (read: high school level computing) topics is viewed with scorn, as if their ability to expose ignorance of foundational topics presents an existential (or career) threat.

There's been a rise of anti-intellectualism in software from people with non-STEM backgrounds who actually disdain seeking out and possessing such knowledge. It's utterly useless to study - just like math. I find it harder and harder to locate hobbyists, especially here in Toronto, who bother to go below the abstractions not just because they want to, but because they are compelled to understand.

  • Your words resonate with me. Even before LLMs, I’ve been disappointed with the general direction the software industry took in the 2010s. Today’s software industry is not the industry of Licklider, Engelbart, Bob Taylor, Alan Kay, Woz, Stallman, Ritchie, Thompson, Pike, Joy, and many others whom I admire, who helped establish an ethos of computing that fostered a sense of freedom, creativity, and wonder.

    Instead, what we have today is a computing ecosystem dominated by powerful players who care about money and control. Speaking from the standpoint of a Bay Area resident, since roughly 2012, the field has been increasingly taken over by people who are in it for the money. Combine that with Alan Kay’s observation that computer science is a “pop culture” that often lives in the moment and has little regard for the past, and also combine that with the “move fast and break things” attitude that permeates modern software development, and this has created an environment that seems hostile to the types of nerdy pursuits that the industry once encouraged. The working environments of many major software companies and the products they release are a reflection of the values of the companies’ executives, managers, and shareholders.

    While I’m not anti-AI, I see agentic coding as another step in the direction that the software industry was already heading towards, where it can move even faster and break even more things.

    There is still wonder, joy, and freedom in computing, but I feel this is increasingly confined to the hobbyist world and certain niches in research environments.

  • I can confidently say that I know few to no people truly interested in understanding technology, except for strangers online.

  • > Anybody who brings up such advanced and esoteric (read: high school level computing) topics is viewed with scorn.

    Design time, code time, compile time, run time. Why all that potentially wasteful upfront work?

    The next step is shipped applications whose help menu is a chat interface that responds to all user questions of the form "How do I ...", with a short pause to add a new hack to the existing pile, and then some upbeat instructions.

    In theory this should be nirvana. No more vibe coding! Everyone is a power user. Zero dependencies. But there will be much weeping.

    • > In theory this should be nirvana. No more vibe coding! Everyone is a power user. Zero dependencies. But there will be much weeping.

      If I had to sum up the zeitgeist of '90s techno-optimism, it would be this persistent, confident prediction that once people just learned _how_ to use computers and everyone became a power user, everything would be fine! Despite the mounting evidence that actually, no, like everything else in reality, the distribution of skill is a bell curve with the median sitting uncomfortably low for those who, to quote OP, "lived on IRC or in the bash terminal".

      Free universal education didn't fix this problem, LLMs won't fix this problem. Man's natural paucity is no longer in the availability or accessibility of knowledge. The liberal ideal that all we must do is empower the individual turns out to not have been the solution to everything forever.

      But hey, being self-aware enough to make productive use of this new technology is probably _some_ kind of edge.

      May as many as possible survive.

  • sounds like you're working at the wrong place. detailed computing knowledge and maths is essential in some industries and, like you said, scorned in others. i couldn't think of anything worse to do with my time than spend all day with MBAs or webdevs (lol, i'm sorry, that's unfair, web development is complex with all the callbacks and sync issues).

    • Thank you, I was starting to wonder.

      I guess because I’m in game dev maybe, but in all my jobs knowing about the underlying stack has either been necessary knowledge or highly regarded.

      I can’t think of any time in my career where knowing about the internals of the stack was ever frowned upon or where it’s been anything other than an advantage (especially when hunting bugs). I must have been lucky.

  • people will accuse you of "gatekeeping" because you shouldn't need to have any knowledge or skill to do stuff. those things are unimportant, even bad, because anything requiring those is inherently exclusionary. lmao.

This has been obvious to me since I graduated with a BIT majoring in 'Software design.' I literally went to university with software design and software architecture being my core interests.

When I graduated, I was shocked to learn that no company cared about any of the architectural concepts that I had learned. UML class diagrams, sequence diagrams, ER diagrams, etc. had been on the way out. At one point, as internet companies were scaling up, there was a brief resurgence of interest in sequence diagrams, especially as a communication method when explaining complex bugs or complex message-passing scenarios. But it didn't really last. Nowadays most software is riddled with race conditions and deep, exploitable architectural flaws. Cryptocurrencies have been victims of many such attacks. Billions of dollars have been lost to race conditions... and that's just the ones which were discovered. They are notoriously difficult to find post-implementation.

The programming primitives that we're using today aren't optimized to avoid race conditions or even to encourage good concurrency patterns; quite the opposite: they encourage convenient but disorganized parallelization, and the optimization effort has gone into type safety, which is a far less concerning issue. A lot of people who were rightly alarmed by gaps in schema validation (which is critical at API boundaries) became overly obsessed with type safety (which is a broader concern). I built some async primitives for Node.js, and nobody cared! NOBODY! Other developers have had the same experience with most other languages. I think only a few niche languages like Elixir actually treated it as important. But nobody even acknowledged that the problem could be remedied in existing languages. It's so bad that it seems as though some people wanted it to be that way.

The term 'concurrency safety' doesn't even exist! Some have a vague idea about thread safety. OK, but that's very specific to one particular concurrency primitive... what about the concurrency of asynchronous logic (much more common nowadays)? I have felt thoroughly suppressed in that regard throughout my career.
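
To make "concurrency of asynchronous logic" concrete, here is a minimal sketch of the kind of race I mean and one way it can be remedied in an existing language. This is my own illustration, not the commenter's Node.js primitives; all names are made up.

    // Hypothetical example: an async read-modify-write race, and a tiny async
    // mutex that serialises the critical section. No threads are involved;
    // the interleaving of awaits alone is enough to lose an update.
    let balance = 100;

    async function unsafeWithdraw(amount: number): Promise<void> {
      const current = balance;                   // read
      await new Promise(r => setTimeout(r, 10)); // simulated async I/O
      balance = current - amount;                // write based on a stale read
    }

    class AsyncMutex {
      private tail: Promise<void> = Promise.resolve();
      run<T>(task: () => Promise<T>): Promise<T> {
        const result = this.tail.then(task);
        // Keep the chain alive whether the task resolves or rejects.
        this.tail = result.then(() => undefined, () => undefined);
        return result;
      }
    }

    const mutex = new AsyncMutex();
    async function safeWithdraw(amount: number): Promise<void> {
      await mutex.run(async () => {
        const current = balance;
        await new Promise(r => setTimeout(r, 10));
        balance = current - amount;
      });
    }

    async function main(): Promise<void> {
      await Promise.all([unsafeWithdraw(10), unsafeWithdraw(20)]);
      console.log(balance); // 90 or 80: one withdrawal is silently lost
      balance = 100;
      await Promise.all([safeWithdraw(10), safeWithdraw(20)]);
      console.log(balance); // 70: the critical sections ran one at a time
    }

    main();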

The only voice on the subject of architecture that got through to the 'mainstream' was Martin Fowler (one of the authors of the Agile Manifesto). After that, there was Dan Abramov of Redux fame. Some notable opinionated architecture books were published, but none really identified the underlying essential philosophy of good architecture.

The best, most succinct quote I ever read on the subject was from Alan Kay (inventor of OOP) who said "I'm sorry that I long ago coined the term 'objects' for this topic because it gets many people to focus on the lesser idea. The big idea is messaging."

I like that quote for many reasons: firstly, because it shows wisdom; secondly, it tells you what the issue is, very simply; and thirdly, it hints at the importance of 'focus' in this discipline, where we are saturated with thousands of complex, overlapping and partially conflicting ideas.

I think the FP trend was somewhat of a red herring. Same with Type Safety. Yes, they were useful to some extent, there are some really good ideas in there, but people got so caught up in them that the most fundamental area of improvement was ignored entirely. To me, the core value proposition of FP can be reduced to "pass by value is safer than pass by reference." Consider that in the context of Alan Kay's "The big idea is messaging." - Is an object reference a message? NO! A live instance is not a message! Precisely! His point supports pass-by-value; furthermore, it encourages succinct/minimal parameters.

Good architecture is rooted in two core concepts: 1. loose coupling and 2. high cohesion, and you achieve those by separating logic and structure from messaging. The biggest mistake people make is passing around structure and logic as parameters to other logic. You should avoid moving around logic and structure at runtime; only messages should move between objects; the simpler the messages, the better. And note that 'avoid' doesn't mean never, but it means you have to be extremely careful when you do violate this principle, and there should be a really good commercial reason to do so. I.e. you should exhaust other reasonable approaches first.
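
As a tiny, hypothetical sketch of "only messages should move between objects" (the names and types below are mine, purely for illustration): the formatter never receives a live service instance, only a small value it cannot use to reach back into the service's state.

    // Loosely coupled: the formatter only ever sees a small, immutable message.
    interface ReceiptMessage {
      readonly orderId: string;
      readonly totalCents: number;
    }

    function formatReceipt(msg: ReceiptMessage): string {
      return `Order ${msg.orderId}: $${(msg.totalCents / 100).toFixed(2)}`;
    }

    // The live OrderService instance stays put; it hands out plain messages
    // instead of being passed by reference to every module that needs data.
    class OrderService {
      private totals = new Map<string, number>([["A-1", 1999]]);
      receiptFor(orderId: string): ReceiptMessage {
        return { orderId, totalCents: this.totals.get(orderId) ?? 0 };
      }
    }

    const service = new OrderService();
    console.log(formatReceipt(service.receiptFor("A-1"))); // Order A-1: $19.99

The coupling surface is just the message shape, so the service's internals can change without touching the formatter.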

  • My journey is quite similar. My mental model got a huge boost after I read and understood Leslie Lamport's early work and the work of Edward Lee about getting deterministic results in the presence of concurrency. I even found the earliest paper with a mathematical proof that write and read must be separated in time or space (the math basics of the Rust borrow checker), but I can't find it anymore.

    - https://lamport.azurewebsites.net/pubs/time-clocks.pdf

    - https://en.wikipedia.org/wiki/Chandy%E2%80%93Lamport_algorit...

    - https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-...

  • Yeah, passing by value or "Value semantics" can prevent many programming errors. Passing references to immutable data can serve a similar purpose. In low-level languages where memory layout and calling convention map to target hardware, there are differences in performance to consider.

    Pass by value would indeed make a big difference to how programs are structured and make it easier to reason about programs.

    I just want to point out that "concurrency safety" is very much a term, although "thread safety" is more common. These are broadly part of memory safety, which is a topic mainly due to security concerns but also a subject of academic study.

    The two perspectives are not perfectly congruent. Non-concurrency-safe languages like Go can also be considered broadly memory safe. The pragmatic rationale is that data races in GCed languages are much less exploitable. From an academic, principle-based view this is unsatisfying and unconvincing, as one would prefer safety to be a matter of semantics. See also https://www.ralfj.de/blog/2025/07/24/memory-safety.html

    Rust uses "fearless concurrency" as a slogan. Rust offers more options than passing by value (Copy) while still guaranteeing safety through static type checking.

    There is also research for GCed languages to establish non-interference, e.g. Scala capture checking.

    Concurrency is recognized as difficult (at least by people who are knowledgeable) and programming language design usually involves pragmatic choices if you need concurrency. If the language does not provide the primitives or spec that enables safety, then you are left with patterns and architecture.

    The science is still evolving; it is certainly not the case that nobody cares. Rather, progress is slow, and moving ideas from research to industry is even slower. How much value we ascribe to correctness, safety and performance in industry depends very much on the context.

  • > only messages should move between objects

    Can you provide an example for this?

    • The Alan Kay viewpoint (he is NOT the inventor of OOP [1]) is considered the least helpful viewpoint on OO design. The “magical” and unhelpful “it’s all about messages” perspective helps you not at all unless one is talking about the internal implementation of a platform like Smalltalk. Consider the views of the real inventors - Nygaard and Dahl.

      [1] I don't think I invented "Object-oriented" but more or less "noticed" what was really powerful about just making everything from complete computers communicating with non-command messages. This was all chronicled in the HOPL II chapter I wrote "The Early History of Smalltalk". — Alan Kay

    • Say you have a Car, Engine and Dashboard object.

      Let's not have the dashboard access the temperature by doing `GetSurroundingCar().engine.temperature`

      If the dashboard needs to get the temperature from a sensor in the engine, it should be able to "talk" to the sensor, without going through car object.

      In ideal OOP, a "method call o.m(...)" is considered a message m being sent to o.

      In practice, field access, value and "data objects" etc are useful. OOP purism isn't necessarily helping if taken to the extreme.

      The pure OOP idea emphasizes that the structure of a program (how things are composed) should be based on interactions between "units of behavior".
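
      A minimal sketch of that idea (my own names, just for illustration): the dashboard subscribes to temperature readings as plain values and never holds a reference to the car or the engine.

        type TemperatureListener = (celsius: number) => void;

        // The sensor pushes readings out as plain values (messages).
        class EngineTemperatureSensor {
          private listeners: TemperatureListener[] = [];
          subscribe(listener: TemperatureListener): void {
            this.listeners.push(listener);
          }
          report(celsius: number): void {
            this.listeners.forEach(l => l(celsius));
          }
        }

        // The dashboard only knows how to display a number; it has no
        // reference to the car or the engine.
        class Dashboard {
          show(celsius: number): void {
            console.log(`Engine temperature: ${celsius}°C`);
          }
        }

        // Whoever owns both (the Car, say) does the wiring once.
        const sensor = new EngineTemperatureSensor();
        const dashboard = new Dashboard();
        sensor.subscribe(c => dashboard.show(c));
        sensor.report(92); // Engine temperature: 92°C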

    • 1. Avoid passing live instances (by reference) to other instances as much as possible, because you don't want instance references to be scattered too widely throughout your codebase. This can cause 'spooky action at a distance', where the instance state is being modified by interactions occurring in one part of the code and it unexpectedly breaks a different module which also holds a reference to that same instance in a different part of the codebase. The more broadly scattered the reference is throughout the codebase, the harder it is to figure out which part of the code is responsible for the unexpected state change. These bugs are often very difficult to track down because stack traces tend to be misleading: they don't point you to the event which led to the unexpected state change that later caused the bug.

      2. Avoid overly complex function parameters and return values. Stick to passing simple primitives: strings, numbers, flat objects with as few fields as necessary (by value, if possible). Otherwise, it increases the coupling of your module with dependent logic and is often a sign of low cohesion. The relationship between cohesion and coupling tends to be inversely proportional. If you spend a lot of time thinking about the cohesion of your modules (i.e. give each module a distinct, well-defined, non-overlapping purpose), the loosely-coupled function interfaces will tend to come to you naturally.

      The metaphor I sometimes use to explain this is:

      If you want to catch a taxi to go from point A to point B, do you bring a steering wheel and a jerry can of petrol with you to give to the taxi driver? No, you just give them a message: information about the pickup location and destination. This is an easy-to-understand example. The original scenario involves improper overlapping responsibilities between you and the taxi service which add friction. Usually it's not so simple, the problem is not so familiar, and you really need to think it through.

      We understand intuitively why it's a bad idea in this case because we understand very well the goal of the customer, the power dynamics (convenience of the customer has priority over that of the taxi driver), time constraints (customer may be in a hurry), the compatibility constraints (steering wheel and fuel will not suit all cars). When we don't understand a problem so well, an optimal solution can be difficult to come up with and we usually miss the optimal solution by a long shot.

  • nice post, lately i've been dealing with concurrency, between threads and processes. trying to keep it cross-platform as well, it's a lot to learn. if you have large buffers and want to keep some semblance of performance, it's VERY interesting understanding all the transfer mechanisms and cache levels involved. i feel these are the sorts of things my education skipped, it was all very focused on the static structure of objects, not the dynamics of data transfer.

> replacing an internal framework that's worked wonders for us we've been using for over a decade

Can you share what this internal framework is?

> More and more places just want Jira tickets done fast instead of someone that's going to push back or question if this is the best way to build something.

That's one thing I never care to do unless I'm the one making the technical decisions. What I do is build the thing, but with defensive programming in place. I take care to make sure my code is good, then harden any interface so that I can demonstrate that I'm not the cause of new bugs. People will be careless, so make sure that you have blast doors between your work and theirs.

And I do take time to learn about the abstractions of the new shiny tools, even when it's overengineered. Going blind and making mistakes is not my cup of tea.