Comment by jdw64

3 days ago

The real issue, in my view, is not AI itself.

The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

Short-term cost cutting leads to less junior hiring, and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.

What remains is documentation and automation.

But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

AI is following the same pattern.

What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

The West has seen this before, especially in the case of General Electric.

GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.

The same mindset is visible today.

The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.

Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.

> removing people and organizational slack

You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.

I've commented about the lack of slack several times here on HN, because when I notice a broken system nowadays, 90% of the time it is due to a lack of slack in the system to absorb short-term shocks.

  • The problem is, in the minds of these people, 'firing at 100% all the time' generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain while delivering minimum value. I would have loved to be 100% engaged working on solving real problems in honest ways at some of my past jobs, but alas MBA/marketing leadership, which has taken over much of tech, has very little interest in actually building good things and solving real problems in honest ways.

    • This is what happens when companies become so nepotistic that they only believe in their own bullshit.

      "Can they really breathe fire or did we make that up?"

    • > generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain while delivering minimum value

      When I read comments like this I can’t help but wonder where people like you work. It’s completely unrelatable to me. I work with really good people, all the way to the top, and we try to make money by increasing value for our customers.

      Apple, Google, Walmart, Amazon, Home Depot, Anthropic, Toyota, and a hundred other companies all offer me incredible value for so cheap. Why are people so cynical about a world that offers them unimaginable riches everywhere they look?

      Sure there are bad companies. And if you work at one of those, go get a new job.

      3 replies →

  • I think the bean counters get a bad rap for this a bit unfairly. The past century has seen more progress in knowledge and technology than the rest of human history combined. The world and business environment are changing too rapidly to make longtermist thinking practical.

    Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, eg. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest quality, longest lasting appliances. Software is even more ephemeral and specialized.

    Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

    R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.

    Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.

    • > R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself?

      Because some problems that many companies in very specialized industries work on are so special that outside of this industry, nearly all people won't even have heard about them.

      Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.

      4 replies →

    • Universities dont do product oriented research. They do more general research. And also, they should not do product oriented research, that is companies role.

      And universities research capabilities are being destroyed too right now.

    • > Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

      Here's the problem with your reasoning. This paragraph is simply wrong, with each sentence being untrue. Education and training are never wasted money, the skills aren't changing that quickly, there isn't any slack in the workforce, and qualified worker shortages are being reported in every trade across the board. Someone needs to solve the problems you hand-wave away.

      > this works just fine in most cases - somebody will learn what they need to get paid.

      That's me. I specialize in learning new domains. I cost like 8x more than the random junior you'd be able to hire with a functional onboarding program.

      2 replies →

    • "The world and business environment are changing too rapidly to make longtermist thinking practical." Tell that to the Chinese...

  • I’ll note at the end of the last century I worked at IBM research which had a budget of 6 Billion dollars. Management was trying very hard to get better return on that investment. Even today IBM though often ridiculed in the tech space (sometimes they do deserve it) spends a lot on R&D.

    • Lucent at the same time went through the same issue: how to monetise Bell Labs.

      Bell Labs greatest work came out when AT&T was a monopoly. Once they were broken up (1984?) they started feeling the pain.

      When the Lucent spinoff took place, the new entities had no Monopoly money to fund unconstrained research while management's behaviour never changed.

      I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.

      3 replies →

  • > Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time

    Not just that, you have to be always doing less for more gains. Real work is bad work. Shrinkflation good. I don't know what it is if not a pure scammer mindset.

    • > Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time

      This is a classic Goldratt / Theory of Constraints mistake.

      1 reply →

  • > When bean-counters took over the ecosystem [...] in their mind, every part of the system needs to be firing at 100% all the time.

    This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)

  • I believe private equity ownership represents this in an aggressive form. The 2 and 20 percent takes that PE usually mandates as part of their purchase agreement means that they are highly highly incentivized to maximize short term "wins" over long term survival.

    I think Chesterton and Taleb also had pretty reasonable things to say about understanding a system before you make changes and fragile/anti-fragile systems as well.

  • It’s especially ironic since the bean counters produce no value. I like ‘Developer Hegemony’, even if the title needs changing. The author makes a great case for why information workers produce almost all the value. It’s them that make the profits, yet they’re always a cost center.

  • It's funny because when I first managed at a public company I was told no employee can work on something more than 80% (and sustained 80% actual work wasn't believable), and that if my people were logging 80% or more of their time to capitalizable projects I would be in trouble.

  • They also took out all the quality, though in pure business terms one can argue that's a kind of "slack" by itself.

    The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.

    And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.

    Which, inevitably, is the stuff sold directly by Chinese manufacturers. And so the beancounters are hoisted by their own petard.

    We've seen it happen to small electronics and general goods.

    We're seeing it happen right now to cars. Manufacturers clinging on to combustion engines and cutting corners. Why spend twice the money on a western brand when their quality is rapidly declining to meet BYD models at half the price?

    ---

    And we're seeing it happen to software. It was already kind of happening before AI; So much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem)

    E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there. Windows is just going down in flames. People are jumping ship now.

    SaaS is quickly going that way as well. If it's all garbage, why pay for it. Either stop using it or just slop something together yourself.

    ---

    And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. So much manufacturing knowledge is just gone, starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars, China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.

    Even when the knowledge still lives, when the people with the skills required have simply moved to other industries and jobs, who's going to come back? Why leave your established job to return to your former field, when all it takes is the management or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again?

    • > E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.

      Desktop Linux has gotten better, though much of the improvement happened decades ago. I believe the first person to prematurely declare "the year of Linux on the desktop" was Dirk Hohndel in 1999: https://www.linux.com/news/23-years-terrible-linux-predictio...

      And speaking as someone who was running desktop Linux in 1999, I remember just how bad it was. Xfce, XFree86 config files, and endless messing around with everything. The most impressive Linux video game of 2000 was Tux Racer.

      But over the next 10 years, Gnome and KDE matured, X learned how to auto-detect most hardware, and more-and-more installs started working out of the box.

      By the mid-2010s, I could go to Dell's Ubuntu Linux page and buy a Linux laptop that Just Worked, and that came with next day on-site support. I went through a couple of those machines, and they were nearly hassle free over their entire operational life. (I think one needed an afternoon of work after an Ubuntu LTS upgrade.)

      The big recent improvement has been largely thanks to Valve, and especially the Steam Deck. Valve has been pushing Proton, and they're encouraging Steam Deck support. So the big change in recent years is that more and more new game releases Just Work on Linux.

      Is it perfect? No. Desktop Linux is still kind of shit. For example, Chrome sometimes loses the ability to use hardware acceleration for WebGPU-style features. But I also have a Mac sitting on my desk, and that Mac also has plenty of weird interactions with Chrome, ones where audio or video just stops working. The Mac is slightly less shit, but not magically so.

      5 replies →

    • > And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. (...) They can't move back upmarket after that's done.

      The knowledge isn't the problem. It can be quickly regained, and progress in science and technology often offers new paths to even better quality, which limits the need to recover the details of old processes.

      The actual problem is, there is no market to go up to anymore. Once everyone is used to garbage being the only thing on offer, and adjusts to cope with it, you cannot compete on quality anymore. Customers won't be able to tell whether you're honest, or just trying to charge suckers for the same garbage with a nicer finish, like every other brand that promises quality. It would take years of effort and low sales to convince the customers to start believing you're the real deal, which (as beancounters will happily tell you) you cannot afford. And even if you could, how are you going to convince people you're not going to start cutting corners again a few years down the line? In fact, how do you convince yourself? If it happened once, if it keeps happening everywhere across the whole economy, it's bound to happen to your business too.

      8 replies →

    • > Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.

      Desktop Linux mostly works these days. It does everything most regular people would want of it, with zero fuss. Including playing games. In some respects, it's easier to use than Mac or Windows.

      When it has trouble with some things, one must remember neither Mac nor Windows is perfect, and they can be extremely frustrating at times.

      Time to update those prejudices!

      2 replies →

    • I think you’re not blaming political leadership enough. NAFTA, and other programs were always going to lead to the state of affairs we have now. This was a choice. Blaming greed is like blaming gravity.

      1 reply →

    • > E.g. Desktop Linux has always been kind of a joke

      And yet I run it every day, and it's by FAR the most enjoyable platform and tooling to use (for me).

  • Engineers seem to think business people don’t know what they are doing, but if your post were true, then companies would add slack to outperform their competitors.

    The broken system likely doesn’t have enough business impact to justify the investment to maintain it.

    • Adding slack works over years.

      Cutting slack gets you quarterly bonuses.

      When you plan on working 3-5 years at a single company, you don’t care if it crashes and burns a month after you leave; you just move on to burn down the next one.

      Conversely we see the same dynamic with engineers: they build stuff to prop up their CV and don't care whether the company can still support the crap they built after they leave.

    • It's a measurement problem, which engineers also fall prey to, perhaps even more.

      It's the danger of data driven decision making. Cutting people and resources right now gets you a measurable gain. Not cutting them gets you a gain tomorrow.

      But, that gain is unmeasurable! Because in order to measure it you would need to know what happens in an alternate universe where you cut those people. So, if you're only making data driven decisions, you would cut the people 100% of the time.

      But that's why companies aren't run by algorithms, they're run by people. The algorithm would run the company into the ground.
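
      To make the asymmetry concrete, here is a tiny toy sketch in Python with entirely made-up numbers (the cost of slack and the size and timing of the later incident are illustrative assumptions, not data): the saving from cutting shows up in the very next quarter's numbers, while the branch where the slack would have paid off is never observed, so it can't appear on any dashboard.

      ```python
      # Toy illustration only: all figures are invented for the example.

      def quarterly_costs(cut_slack: bool, quarters: int = 8) -> list:
          """Observed cost per quarter. Keeping slack costs an extra 10 every
          quarter; without it, an (assumed) incident in quarter 5 costs 200."""
          costs = []
          for q in range(quarters):
              cost = 100.0
              if not cut_slack:
                  cost += 10.0          # visible, easy-to-report cost of slack
              if cut_slack and q == 5:
                  cost += 200.0         # the knowledge gap bites much later
              costs.append(cost)
          return costs

      keep = quarterly_costs(cut_slack=False)
      cut = quarterly_costs(cut_slack=True)

      # What the data-driven decision sees at the moment of deciding (quarter 0):
      print("next-quarter saving from cutting:", keep[0] - cut[0])   # 10.0, looks great
      # What only the counterfactual comparison would reveal, and no dashboard holds:
      print("total cost if slack is kept:", sum(keep))               # 880.0
      print("total cost if slack is cut :", sum(cut))                # 1000.0
      ```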

    • > companies would add slack to outperform their competitors.

      I think if they did this they'd get buried by the market. Your slack is someone else's opportunity to undercut you. It's a systemic problem, it's in every individual's self interest to work towards instability.

    • This would be true if everyone was optimizing for the same thing.

      It's not terribly difficult to imagine someone optimizing for, say, a bonus at the end of the year.

  • > optimised immediate profitability over everything else

    Which is the usual complaint that businesses are focused on short term results, sacrificing long term results.

    If that were generally true, the stock market would be going down steeply, not up, as stock prices are based on expectations of future profits.

    • Are stock market profit expectations mostly long term? Stock markets have been wrong before.

      Besides that, the U.S. stock market went up over several decades while manufacturing capabilities were transferred overseas. That has had, and will continue to have, domestic ramifications that might not be captured by investor profits.

      2 replies →

    • > as stock prices are based on expectations of future profits.

      I thought stock prices were based on what I thought I could sell it for next week.

      5 replies →

  • > Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

    > You are spot on w.r.t every assertion you've made.

    Huh? What happened to the concept of "debate" on HN? It's just a bunch of people agreeing with each other. Yet the data doesn't support OP's thesis at all.

    Here's a chart of the rise in productivity per hour worked in the United States since 1947. It's a steady linear increase every single year: https://fred.stlouisfed.org/series/OPHNFB

    Yours is the type of story big company workers tell themselves to feel important while refusing to learn anything new and never taking any risks. But the truth is 99.999% of companies are not doing anything that unique or complex. Most companies are not ASML.

    If I had a nickel for every time I've heard someone justify their do-nothing position within a giant bureaucracy while saying the phrase "institutional knowledge" I'd be rich. This is just a sign of a poorly run giant company full of engineers building esoteric and overly complex in-house solutions to already-solved problems as job security.

    The truth is all of this "institutional knowledge" is worthless in the face of disruption, and it has a half life that's getting shorter every day.

    Everybody talks shit about global just-in-time supply chains and specialization...but just because we had a fake toilet paper shortage for a few months during a 100-year global pandemic doesn't mean running things like it's 1947 for the last 70 years would have been better. You enjoy a much higher quality of life today due to these "evil" JIT supply chains which it turns out are far more durable than people want to claim.

    • Most metrics measured in dollars are just stealth measurements of inflation. Even inflation-adjusted measurements, because official inflation metrics are always lowball numbers with shady methodology.

    • US aggregate productivity metrics fail to address this nuance. There is a fundamental difference in abstraction layers between a macro-system becoming more efficient and an individual enterprise experiencing operational failure. As a software engineer, distinguishing between these layers is critical. Your argument is akin to claiming that because the Google Play Store sees a higher volume of app releases (increased productivity), the intrinsic quality of individual apps has naturally improved.

      In this analogy, the individual app represents a company, and the Play Store represents the broader US market. Silicon Valley’s highly liquid labor market allows talent to flow freely, which opens up and elevates the baseline of the overall market. However, that is entirely distinct from the fact that individual companies are suffering severe drops in internal quality and productivity.

      Furthermore, in software architecture, 'productivity' and 'quality' are rarely directly proportional. With AI coding tools, we can ship an app orders of magnitude faster. Historically, it took me three months to write 60,000 lines of code; recently, I am generating that same volume in just two weeks. My productivity has undeniably spiked, but can I confidently claim the code quality is better than when I manually scrutinized every single line?

      The real issue is not whether the broader economy has grown more productive since 1947. The core issue is whether a specific organization bleeds capability when the exact people who understand its real-world constraints, failure modes, and operational history walk out the door.

      Both realities can co-exist: National productivity can trend upwards, while individual companies simultaneously suffer operational regressions due to botched migrations, failed refactors, or the loss of tacit knowledge.

      I agree that 'institutional knowledge' is sometimes weaponized to defend unnecessary complexity. However, the opposite fallacy is treating all localized, domain-specific knowledge as worthless. While some of it is merely job-security folklore, the rest is literally the only surviving documentation of why the system functions in the first place.

> In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

This is a blind spot for many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.

Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.

It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems for pouring software into, but...

If there were known "make more software, make more money" opportunities available, they would have already done them.

Actual growth and new demand need to come from arenas outside of this. E.g., companies that suck at software (either making or acquiring it) might be able to get the job done.

The Problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.

A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.

  • > If there were known "make more software, make more money" opportunities available, they would have already done them.

    Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.

    Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.

    • Ok... but this just generalizes into the "known things" type.

      Whether the reason is strategic (like your example), internal politics, or insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.

      More resources via AI, at first order, go after the diminishing-returns part of the curve... which is a cliff, especially for highly resourced firms topping the S&P 500.

      A lot of AI optimists' "mental models" of the economy do not account for this stuff at all.

      "Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does freeze up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.

  • I agree that any sufficiently complex human operation - whether industrial or scientific or whatever - requires a culture and a living tradition that develops over time and communicates knowledge and understanding across generations. In fact, many problems in our culture can be attributed to a contempt for tradition that has developed. (It is true that tradition can ossify. That can be a problem with attitudes toward tradition rather than tradition itself, or a sign that something needs to be addressed. A good tradition is a dialogue spanning history.)

    However, it is also true that technology develops and produces changes that in the short term cause pain, but in the long term produce a better outcome in some desirable sense. Coding is not an end in itself. Just as switchboard operators and human computers are obsolete, because the conditions that caused the need for them ceased to exist, it may be the case that a certain manual style of programming is also becoming obsolete.

    You can imagine human computers decades ago thinking that computing technology is bad, because people will lose numerical facility. But this misunderstands the structure of the value of practical skills and the difference between knowledge of principles and practical skill. Sure, few if any people today can perform numerical computation as quickly and competently in their heads or on paper as human computers, but...

    1. that's different from understanding the principles of computation which is closer to a theoretical grasp and has eternal or at least lasting value

    2. the value of the practical numerical facility was rooted in the need for obtaining results as quickly as possible, and that particular set of techniques or skills is no longer practical

    Perhaps manual coding is like that. I don't know why people are surprised. Generative programming has been a desired end in CS for a long time. CS grads can still and should still learn the principles of their field and learn them well, but the profile of practical industrial techniques and needed skills is changing. As software eats more and more of the world, it is becoming increasingly impractical to keep manually fiddling with silly bits of plumbing. We obviously haven't been able to develop abstractions well enough to avoid it, and part of the reason is that appetite comes with eating. Once you make something easier, it makes it easier to achieve even greater things more easily...hence new plumbing and implementation complexity.

    Let's be honest here. Much of programming is intellectually dull. It is plumbing. It's not algorithmically interesting. It's not interesting from a modeling perspective. It's not interesting conceptually. It's not interesting as a matter of system design. Most programming out in the wild is the same old crap being recapitulated a million times over. If all you want is to become skilled in doing the same thing over and over again, then I can understand why you might find LLMs threatening. Your market value as a maker of yet-another-flask-web-app has plummeted hard. People who enjoy that kind of programming are generally not very intellectually motivated people - at least not where programming is concerned - and likely prefer the tedious comforts of rehearsed ephemeral detail. LLMs can keep us from rabbit holing and focused on the domain.

    In any case, I don't think LLMs are a threat to the field per se. I just think that the skill set is shifting and developing. I think we are still figuring out what it means to develop the right understanding and intuitions to develop software without the benefit of having done it manually. Time will tell. However, I also think being able to read code has become relatively more important than writing it. When you have to verify the quality of LLM-generated code and put your name behind it, you have to be able to understand it, and that's a somewhat neglected skill in my view. Programmers very often prefer to write code rather than read it. LLMs might be just the thing to coerce an improvement in the latter sort of literacy. With this also comes a greater importance of formal specification. That's where I would expect the future of the field to shift.

> The core problem is that decision-makers—often far removed from actual engineering work — believe that tacit knowledge can be replaced with documentation, tools, and processes. [It] cannot.

I am not so certain:

For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.

The problem rather is: the volume of documentation that I would have to write would be insane; I'd consider tens of thousands of dense DIN A4 pages to be realistic - and this is a rather small system.

So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.

This insane effort (write the documentation; new workers on the project then have to cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: this is in my experience the real reason why it isn't done.

  • The deeper I wade through Microsoft’s Azure documentation the more I feel the reality of this. There’s so much of it that it basically is unreadable in real terms, most employees will never get the time allocated, and when you do try to exhaustively read up on a specific area you find that the documentation is incomplete and wrong in subtle but important ways. I’m sure Microsoft spends a lot of resources on that documentation, but it seems somewhat of a hopeless mission.

  • I think it's an important property of a system to be documentable, not just documented. What I mean, essentially, is that the system was designed with sound principles, and said principles were written down and followed.

    I have seen this work only once in my life, and it was so nice to see, but yeah, most code is just a ball of twine, and even if there was a guiding principle beneath, it has been long abandoned, and overruled, and the only way to understand the system is to take it all in at once.

    • I think it’s reasonably easy to design a system that’s documentable and documented. It’s very, very hard to maintain and iterate on a system while maintaining those properties.

      Hacky things will make their way in because it takes a month to do the documentable thing and a week to ship the hacky thing.

      It takes a lot of skilled people from varying disciplines to figure out what things are going to survive long enough and be important enough to spend the resources doing the right thing instead of the hacks.

      It bites both ways. I’ve seen core business products crippled by years of digital duct tape, but I’ve also seen internal tooling that never really becomes useful because they insist on doing the “correct” thing and it’s constantly a year behind what we need it to do.

      1 reply →

  • It’s way easier (for this type of scenario) and far more effective to learn by doing than to learn by reading (even tens of thousands of pages of) documentation, that is the crux of it.

    • > It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading

      I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but real) case that something deep in such a part goes wrong or a requirement for this part pops up.

      If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.

      In other words: there mostly exist two kinds of tasks:

      - easy, regular adjustments

      - deep changes that require a really good understanding of the system

      2 replies →

  • This is such a weird counter-argument that only serves to prove OP’s point.

    “It’s not that it’s not documentable. It’s just that it would take tens of thousands of pages and no one would be able to write that or read that to effectively take over the project.”

    Okay, so surely this is what OP had in mind when they said documentation doesn’t work… Is it no longer safe to assume reasonable expectations when making an argument? Why the need to “well actually” them with this response?

  • << [belief that] knowledge can be replaced with documentation, tools, and processes. [It] cannot. << volume of documentation that I would have to write would be insane

    I am not sure those are mutually exclusive. We all know of situations where a person knows of tiny and typically undocumented system quirks. We even have a corporate name for it: institutional knowledge. The issue is that executives think it can ALL somehow be done, when even a cursory real-life project will quickly teach one how insane the average gap between documented and undocumented tends to be. Add to that near-constant changes to APIs, versions, systems, and people, and I can't help but wonder at executives who really do think this way.

  • Documentation should serve as a general overview of the system (purpose, architecture, etc.), and elaborate on the interface of that system. Other than documenting historical relics like ADRs, I see being very granular as a net negative.

    It quickly becomes outdated and at some point you just need to accept that only the code will be the most accurate source of truth.

  • But you've just perfectly described the tacit knowledge problem.

    Yes, you can spend all your time writing docs, or just mentor a junior and let them grok the system through osmosis.

    Also your doc won't ever have 100% coverage unless you write an absolute tome. Tacit knowledge is made up of things that are so obvious you wouldn't even think to write them down in the first place.

I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.

The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?

The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.

I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.

  • We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.

    Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.

    Something about this feels really broken, when a channel full of domain experts are willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.

    In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

    • > In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

      The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.

      I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues with a 30 day close. I had a couple sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed and give me advice to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker and turns out, the people who actually knew what they were doing fixed my problem in 5 minutes.

      1 reply →

    • This _is_ all true but what's also true is that there's an historical pattern (in many communities) of "n00bs" not being or (at least) _feeling_ welcome. So, I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions closed immediately as "duplicate" or "off-topic" (e.g. SO).

      I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding, and more forgiving. (Of course, some communities and venues are already very good about all of this and I'm generalizing to make the larger point.)

    • Personally this type of behavior played a large part in why I left 2 oss communities.

      A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses, spamming that they need help instead of chit-chatting and asking questions. We fix their problems, they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.

      They tell us we don't need to exist anymore in one way or another. They try to show off terrible code; we try to offer real suggestions to improve it; they don't care. Then they leave the community once their vibe/agentic coding leaves that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-hahs, just grimy interactions.

      3 replies →

    • I have switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spit out the necessary configs.

      From what little I understood from OpenWRT everything looked fine, but nothing worked. I still to this day have no idea what I (or ChatGPT) did wrong.

      I just reset the router, actually took the time to do everything by the docs, and then it worked.

      Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone.

  • People are losing their ability to reason without prompting an LLM first.

    It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.

    I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.

  • > if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

    Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion/misunderstandings and would therefore benefit most from having their boundaries simplified.

  • There is a lot of wisdom in this.

    At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.

    It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.

  • I think you are right, but it also makes sense. Human communication is inherently inefficient. Points of view, miscommunication, interpretation... It's the obvious point to automate. Not defending it, just my thoughts

    • I have a couple of colleagues that run all communication through an LLM. It really helps their writing, but it does nothing to help their understanding.

      It also makes me hate communicating with them because they'll (somewhat obviously) prompt the LLM to make the conclusion they want. For example, "respond to this jira with why this isn't an issue"

      1 reply →

  • I am rereading the Asimov robot novels. A decrease in human to human interaction is a major side effect that he has foreseen. Decreasing interaction and collaboration are some of the core themes.

  • Apps like Doordash have introduced me to many good restaurants which I've then visited in person.

This shows the Western system of government is broken.

In an ideal world (where we don't live):

* Corporation - optimizes for mid-to-short term profits (remove slack, run everything thin)

* Government - optimizes for long term profits (introduce regulations to keep the slack time, keep and attract talent so the state gets better)

* Individual - optimizes for their lifetime (career, family) and tries to leverage market conditions to learn skills and get more opportunities from the existing pool

In the west, government is optimizing for "loads and loads of moooney", because of lobby groups and the MBAs controlling the corporations, which push these ideas through lobbying.

  • > In the west, government is optimizing for "loads and loads of moooney"

    More appropriately, government is optimizing for 4-year electoral terms. No one cares about the longer timescales necessary to tackle hard problems.

    This is where autocracies like China, or monarchies for example, win over democracies.

    • Counter-examples are France and Japan. Democracies, electoral terms. High-speed rail that the world looks up to, investment in infrastructure everywhere. In France you have Grand Paris, a programme to transform the suburbs into denser housing and commercial space, a calculation and planning that INCLUDES public transport.

      And the green initiatives in France. These, transit, Grand Paris, and much more are initiatives that take many years to realize.

      Now let's move over to New Jersey and New York City. The most densely populated state (NJ) has some of the worst transit despite being in the NYC greater metropolitan area. An old tunnel between the two needs to be replaced, but politicians with four year mental horizons canned it until recently (ARC project). Infrastructure is a fight between Federal, two states and a city politically and partially from a funding perspective.

      We could go on, but I just wanted to point out that the United States is a poor example of good governance. And that we don't need to live in a totalitarian nightmare just because we acknowledge the US fails to produce innovation and investment for the public good.

      And let's not talk about debt, as if it is a unique problem to France or anything new.

      2 replies →

    • >This is where autocracies like China, or monarchies for example, win over democracies.

      Autocracies like China are able to plan longer term. But, because they don't regularly change their leadership like a democracy, the leaders become old, tired, sclerotic and surrounded by 'yes men'. Hence "Democracy is the worst form of government, except for all the others.".

    • Western democracy is very interesting.

      Corporations promote people to Principal or Distinguished Engineer only when they prove their worth by running long-running, large-scale projects.

      But when it comes to governing the whole country: lobby, marketing and boom, you are president for the next 4 years, which is anyway not enough to deliver anything big and see the impact. (Except destruction; destruction is easy to cause.)

    • I think that has something to do with the prerequisites of democracy.

      I believe one important factor for a democracy to work properly is to have a large number of citizens who 1) can stand up and push back when they feel something is wrong, and 2) are sufficiently knowledgeable. We don’t have that anymore. Of course I’m also to blame for that.

      1 reply →

    • I think of the four-year cycle as one year to whine about the previous (if different) government you took over from, two years of governing, and the last as a ”get ready for election” year. So in the most optimal scenario you get three ”peaceful” years. There are very few things that can be done well in three years at ”ruling a country” scale.

    • It's also where autocracies fail spectacularly and lead to decades of misery for their citizens.

    • > This is where autocracies like China, or monarchies for example, win over democracies.

      This is the wrong characterization, and in fact it's where monarchies lost out to democracies. Without an organized system of replacement in response to poor performance, autocracies with a poor leader are stuck with that poor leader for life. Ask North Korea how that's going. The upside is that if you have a brilliant leader, then you also get the benefit of that brilliant leader for life. The variance in an autocracy is absolutely huge, and that's their weakness in the long term. Democracies take the edge off, and are intentionally designed to have both less upside and less downside, trading performance for stability. Xi Jinping looks good comparatively because we have gormless losers like Trump and Biden to compare him to, but he makes plenty of his own mistakes as well (the whole Taiwan situation is an unforced error driven by his own ego, similar to Putin with Ukraine), and we've seen historically what China looks like when it's stuck with a shit leader for decades (Great Leap Forward, anyone?).

  • I always think that’s the failure of citizens, not just the officials. Eventually history is going to blame us for not taking action, not pushing back, and pretty much sleeping tight while things fall apart around us.

> The problem is a management pattern .... Short-term cost cutting

Absolutely agree with this. Most MBAs are taught to optimize and reduce the slack.

It works fine with machinery and materials, but not with humans.

When machinery is optimized and run thin and one machine breaks, you can get the exact same one in a couple of days (you usually prepare for it earlier), but humans train their brains, and the next person is different from the first person.

Humans also break in different ways:

* They stop caring - you wouldn't notice it immediately; they will close tickets, but give them bare-minimum thought

* The communal brain will not be trained when there is not enough room for experiments and learning - which eventually reduces innovation

This is exactly the reason it is difficult for US companies to compete with Chinese companies in manufacturing: their communal brain has already been trained and has produced very good talent.

Next is the knowledge: the more you outsource, the more of it you lose.

  • Perhaps US companies should invest more in their employees then? Advancement, raises beyond 1-3% COLAs, career paths, etc. would go a long way toward keeping employees interested in seeing their employers succeed instead of jumping ship every couple of years. That would require some effort from the C-suite, however, and since they jump ship every few years as well, I don't see that changing anytime soon.

    • Unfortunately the Wall Street accountants who run our companies don't mind if you jump ship after your 2% 'reward' raise. Because when someone new comes in and costs 10% more plus recruiting costs, that latter person has 'proven' their worth in the market, similar to when a house goes up in value due to scarcity.

      If you were to explain the costs of knowledge lost, of training, of taking a risk on a new unknown person, of relationships, there's no answer because it doesn't show up in any operating expense worksheet.

      What you're supposed to do is find another job, and explain that you love this job so much, but the other offer is really good, can they come up close to it and you'll stay. Repeat this every few years or find a new job and move to it.

    • "Invest in employees" is a very broad statement.

      Before investing in employees, I think we should revisit management practices and strategies, which start in MBA programs and at university.

      Instead of teaching how to increase shareholder value in the short term, it should also teach how to increase value to society in the long term (and focus on it heavily) - not just generic "if you win, society wins" fluff.

      Without changing management strategies, everything becomes short term after a while.

      4 replies →

  • > Most MBAs are taught to optimize and reduce the slack.

    With a myopic definition of "optimize". But as long as they are being rewarded for it, the incentives are broken.

Why would anyone have a horizon longer than a quarter? I mean, how does long-term thinking help the execs get their compensation this quarter? Sheesh... worst case scenario is that the work done now will benefit someone else after they've already left.

Also, when companies grow big enough, "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating, etc. When your CEO is an MBA and your real market is Wall Street, any actual product R&D and support is a real annoying cost that just cuts into the profits and thus into the exec compensation.

  • > Why would anyone have a sight longer than a quarter? I mean how does long term thinking help the execs get their compensation this quarter?

    Vesting schedules, conditional grants, contractual equity ownership requirements

    • >Vesting schedules, conditional grants, contractual equity ownership requirements

      In those filthy low-margin industries that HN loves to see relegated across the oceans, out of sight and out of mind, capital investments have service lives measured in decades.

  • Would be interesting to get a law that says that all positions supposed to take long-term decisions should be paid with X% of their salary in (non-redeemable until Y years?) stocks.

  • > ...any actual product R&D and support is a real annoying cost that just cuts into the profits...

    Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."

    It is a very difficult mindset to argue against.

That 'real issue' is the lack of formal effective communications training across the board in the United States, and probably all of Western Culture.

The Problem is wider than management: it is understanding the extended ramifications of action, understanding the larger systems one is a member of, and then identifying with them and protecting them, because you and all your peers understand their extended foundational need.

That type of critical analysis, and the tacit knowledge of secondary considerations, is developed through effective communications training, which is an entire perspective, a way of seeing the world. This can be gained by reading a wide diversity of literature of Nobel Literature quality; the reason being that such literature consists of first-person accounts of institutions crushing individuals, and of individuals finding the power within themselves to defeat those institutions. That personal transformation is practically a Nobel trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person. A better, more deeply considering person with a longer perspective horizon. We need this civilization-wide.

This sounds all true to me, but I think there is more. It is not just decisions by management, it is also the wider economic context. Low interest rates and, for the US, having the world reserve currency as your own currency both seem to make many of these changes attractive or even inevitable. Low interest rates lead to 'innovation' which I put in scare quotes because besides real innovation it can also mean something that passes as innovation but in the end just turns out to be a bubble of stuff that was not valuable enough. The 'innovation' then crowds out investments in more boring sectors like manufacturing. This is also not good for the population in general because fewer jobs are left for people who are not suited for working in highly 'innovative' sectors.

There’s even a management tutorial game which demonstrates the dangers of removing too much slack from systems.

It’s called The Beer Game[1].

One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.

Short-termism is the death of companies.

[1] https://en.wikipedia.org/wiki/Beer_distribution_game

  • Wut?

    The point of the beer game is that buffering in the supply chain makes the bullwhip effect worse.

    • If "winning" the beer game means not overreacting to short-term signals, then you can view that as a form of slack. You're sometimes paying a bit extra to hold onto something that you have no immediate short-term use for.

      1 reply →

    • I'm not sure it's the same kind of buffering. I would assume the "winning" strategy, for the case where the known final demand is fixed, is to keep the upstream orders fixed and buffer the output; for non-fixed final demand, it is to model that demand as well as possible and set upstream orders accordingly so output matches the demand model. Large penalties for buffering may make this approach not work, I guess...

    • Proportional-only buffer management makes the bullwhip effect worse.

      You need an integral part that knows what the entire chain's reaction time is. But if you don't have a buffer, you can't do that part; you can only react strictly to the conditions your suppliers have right now.

      A derivative part seems quite useless in a non-coordinated game. But it could in theory pass a "message" upchain that they need to start reacting. Anyway in any real situation it will be easier to pass a message around by telling stuff to people.

      At the end of the day, the relative penalty for storage or non-delivery is what will dictate the optimum play style. But it's perfectly viable to use buffers to attenuate the bullwhip instead of making it larger.
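
      For what it's worth, here is a minimal sketch in Python of that dynamic (not the official Beer Game rules, just the usual stock-adjustment caricature of it). All numbers - lead time, adjustment gain, the step in end demand - are made-up assumptions, and upstream is assumed to always ship the full order after the lead time, so it only illustrates how order swings amplify up the chain and how counting the supply line (the "integral" part) tames them.

      ```python
      # Toy bullwhip sketch: four stages, each ordering upstream with a simple
      # stock-adjustment rule. All parameters are illustrative assumptions.

      def simulate(count_supply_line, weeks=30, stages=4, lead=2, target=12, alpha=0.5):
          # Start each stage in equilibrium for its own ordering policy.
          start_inv = target - (lead - 1) * 4 if count_supply_line else target
          inventory = [float(start_inv)] * stages
          pipeline = [[4.0] * lead for _ in range(stages)]   # goods in transit
          orders = [[] for _ in range(stages)]

          for week in range(weeks):
              demand = 4.0 if week < 6 else 8.0              # one step in end demand
              for s in range(stages):
                  inventory[s] += pipeline[s].pop(0)         # receive old shipment
                  inventory[s] -= min(demand, inventory[s])  # ship what we can (no backlog)
                  on_order = sum(pipeline[s]) if count_supply_line else 0.0
                  # Replace demand, plus close part of the gap to the stock target.
                  order = max(0.0, demand + alpha * (target - inventory[s] - on_order))
                  orders[s].append(order)
                  pipeline[s].append(order)                  # arrives `lead` weeks later
                  demand = order                             # becomes next stage's demand

          def spread(xs):                                    # order variability per stage
              return round(max(xs) - min(xs), 1)
          return [spread(o) for o in orders]

      print("ignore supply line:", simulate(False))  # swings grow stage by stage
      print("count supply line :", simulate(True))   # amplification is much tamer
      ```

      The point isn't the exact numbers: ignoring what is already on order makes each stage over-order in response to the same demand step, and each stage then amplifies the swings it receives from the stage below, which is the over-reaction to short-term signals discussed above.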

> But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

This tracks with my experience throughout my career, in all sorts of companies: from established body-shop consulting, to a minor early-stage startup, to FAANG, and everything in between.

Essentially everywhere I worked, you would benefit from switching jobs. Companies would at times put quite an effort into hiring you, but wouldn't try anything to keep you around.

This always sounded bonkers to me, but as I directly benefited from a rapidly increasing salary when I job-hopped, my response was a vague shrug. "Those who care don't know and those who know don't care".

The thing is, in every place, you are typically at your least useful when you have just joined. It takes months, sometimes years, to learn the intricacies of the business, the knowledge that informs your skills so you can make better decisions, better designs, better implementations, better initiatives.

This is, of course, just one facet of a larger trend of how things are typically mismanaged. The article touches on it when it talks about how governments in the US and Europe had to scramble to get 50-year-old manufacturing going anywhere.

This is why I laugh whenever I hear someone talking about "governments should be administered like a business". Bitch, businesses are typically mismanaged due to terrible incentive loops, institutional blindness and corporate rot. That anything seemingly works is more a result of inertia and conformity than a sign that things are well managed.

Agree. There is so much focus on "let's do the same thing we are doing now with fewer people". It is very boring and uninspired. How about "let's do something that we couldn't do before", instead?

"McKinsey comes to town".

Basically, the same Taylorism-derived industrial management has imposed itself as the "default dogma" in private and public administration.

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

Exactly. In direct contrast to this would be how Xerox and Bell funded laboratories just to pursue knowledge, without demands of profit. They ended up creating incredibly profitable things when driven by knowledge, and not profit.

I also read a book about math where the author argued that because the Greeks were driven to pursue truth for truth's sake, they ended up being far more productive and innovative. The Romans, who were more driven to work on solutions to immediate practical needs, ended up being not so productive and innovative. He used this as a defense for efforts in pure math that seem to have no immediate application but end up being massively, surprisingly powerful and productive for practical applications down the road. I think the same could be said for software development focussed on truth and correctness, rather than immediate productivity.

> The problem is a management pattern

No, the problem is much more far-reaching than being limited to just corporations - it's a societal problem.

The article says the west is forgetting how to code, but actually the west is forgetting how to do math, how to draw and edit images, how to make music, how to write, how to read and even how to think.

And have you interacted with any kids recently? They're doing ALL their homework using ChatGPT. Forget kids, adults are even worse. At least kids can be supervised, but who's supervising the adults? How many of us have enough self-control to not reach out to the convenient AI app in our phones for every little thing?

This has massive repercussions for our society as a whole. The bleak future depicted in Idiocracy is becoming more and more of a surefire reality.

It's much more than a management problem, experienced software engineers are actively opting into apathy and atrophy of their craft.

I see many peers getting worse in their abilities. It's especially disheartening to see people I admired for their problem solving devolve into someone who delegates more and more of their reasoning to LLMs. It really negatively affects working with them. If you have a concern or criticism of "their" approach to a problem they either dismiss it off hand as invalid, or they go discuss it with their LLM of choice making themselves a bottleneck to collaboration.

As the article suggests I suspect we're in for a real dark age of software as companies struggle to know who to keep, if they can even trust that those who have vital knowledge and skill today will retain it going forward.

This behavior is strongly incentivized by the fact that recruitment, onboarding, and training costs don't show up in the quarter, or maybe even the fiscal year, in which layoffs are made. You can also hide a bit of age and wage discrimination in layoffs and intentionally dumb down your organization to goose up the quarter a bit more.

Quarterly financial reporting is an obvious target for a rethink. Managers get instantaneous readings from dashboards, but they also like the room for shenanigans that quarterly reporting to shareholders enables. It's going to be hard to get management to give up information asymmetry.

Modern managers: You have to be in the office for synergy and the serendipitous exchanges with coworkers that lead to innovation.

Also modern managers: You were in the bathroom for six minutes. I'm docking your pay.

It's interesting, because I find that when I'm less busy/stressed at work is when I'm more motivated, do better work, and fix issues that would otherwise get left behind.

Seems to me that - optimistically - this would shift the job of a software engineer into a more formal engineering role, and that the actual implementation is done by AI. In the same way in other areas, engineering and implementation differ and implementation can be (and is) automated.

No idea how this should take form, though, and if it’s even realistic. But it seems like due to AI, formal specs and all kinds of “old school” techniques are having a renaissance while we figure out how to distribute load between people and AI.

  • That sounds right, but it can be superbly wrong, because it presupposes that you can debug what the AI gets very confidently wrong.

    There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.
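
    As a tiny, hypothetical illustration of those three legs (my own example, not from the article): the spec is the stated property, the implementation is the function, and verification is the part that still needs someone who knows where such code usually breaks.

      # Spec: return the number closest to `value` inside [lo, hi].
      def clamp(value: float, lo: float, hi: float) -> float:
          return max(lo, min(hi, value))      # implementation

      # Verification: boundary and degenerate cases are exactly the kind of
      # low-level knowledge the comment above is pointing at.
      assert clamp(5, 0, 10) == 5
      assert clamp(-3, 0, 10) == 0
      assert clamp(99, 0, 10) == 10
      assert clamp(0, 0, 0) == 0              # degenerate range
      # clamp(float("nan"), 0, 10) -- the spec above is silent here, and an AI
      # will happily generate the two-line implementation without ever asking.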

    • Indeed, even if it were possible for someone to create almost any program just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly the program that is being executed, so knowing only how to write prompts becomes insufficient.

      This is the same with compilers. Most of the time a programmer needs to know only the high-level language that is used for writing the program. Nevertheless, when there is a subtle bug or just the desired performance cannot be reached, a programmer who also understands the machine language of the processor has a great advantage by being able to solve the bug or the performance problem, which without such knowledge would be solved in much more time or never.

      3 replies →

  • > this would shift the job of a software engineer into a more formal engineering role

    If only you knew how the civil engineering sausage was made.

    The amount of yolo'ing stuff based on vibes goes up when testing is expensive/impractical. They just paper over it all with disclaimers of the sort that would get laughed at for being non-starters in the software industry.

  • Personally my experience has been that once I manage to describe a problem in good enough detail that a junior engineer would be able to solve it, it's good enough for an LLM as well.

    Which creates incentives I'm not wholly comfortable with, but the fact is that I'm more productive now alone, than I used to be in a team.

    • My experience is that if I manage to describe a problem in enough detail for a junior or LLM to be able to solve it, it would have been faster to do it myself.

      Prior to LLMs the idea was to involve juniors in the engineering process to give them an opportunity to learn rather than necessarily to improve the team's immediate productivity. Some companies famously (and consciously) refused to hire juniors to avoid the performance hit even prior to genAI (eg Netflix).

      Involving LLMs in our engineering processes has very suspect implications for both productivity and quality of our output, since unlike juniors the LLMs don't even learn.

The real problem is that the west is fractured and sees itself only as individuals, corporations and nations.

China sees itself as a civilization. Russia does too, ish (look up Dugin's Eurasian Empire).

The West has a hard time believing anyone wants to destroy it, so doesn't take the threat seriously. Meanwhile other civilizations are working to both destroy the west and ensure their own place in the future.

So we were OK outsourcing our production and knowledge for a quick buck and it's coming back to haunt us.

Even now, we still don't see ourselves as a civilization, actively work to undermine ourselves and help our enemies who openly want to destroy us, and are barely doing anything to defend ourselves. We seem to have also given up on the idea of a "democratic world", which was in vogue when I was growing up (Bush 2 years).

As for the thesis of this article, the positive is that the code and knowledge, because preserving them is basically free, are still there. AI hasn't been good enough to displace them. And our technological advantage is still pretty wide, and our military-industrial complex is, for better or worse, coming back.

> The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.

You need some experienced people around, but companies that rely on institutional knowledge to get everything done have always been doomed to fail.

Even before AI, turnover was a real thing. People churn jobs a lot in tech even when the pay is good. They get bored and jump companies, leave to join their friends' startup, or move to another city.

Every company I've worked for that operated on a belief that institutional knowledge was king and documentation and processes couldn't replace it eventually had to face the music when key employees left. Ironically, this problem was at its worst at a company that compensated very well, because those key employees would often realize they had enough money to retire early or go take some risky startup job instead of sticking around to be the institutional knowledge base.

  • Companies that have too much tribal knowledge obviously do fail. But for a company to run well, some institutional knowledge will be there. And this will be the most subtle and implicit stuff that really isn't feasible to put down on paper. Or even if it is documented, it's hard to understand without building a lot of context. Even if that context is present in the docs, it's not really possible to internalize it without actually seeing things at work.

    All that is what takes time to learn, and it can't really be eliminated either, because that's what is critical for your company to run on.

I came to comment EXACTLY about this issue. Management lives in a world where they have absolutely no expertise on what they are supposed to manage. So they try to objectify their decisions, with generic KPIs based on efficiency or cost or whatever. And they miss MANY additional decision axes very focused on WHAT they are supposed to build. That is a MASSIVE issue, in my opinion.

The way the system is supposed to work is that companies that make bad decisions fail, and provide room for companies that do not make bad decisions to appear or grow bigger. Which works as long as you have an environment with fair competition where people are free to start and grow companies without running into entrenched interests or undue hardship.

The real issue isn't the management pattern. The real issue is outsourcing. Offshore the manufacturing and coding and you won't have the facilities and personnel to do that type of labor anymore. Management has a hand in that, but the people and the officials they elect have an even bigger impact (regulation vs. the invisible hand and all that).

And the next level of this is that even companies that realize it mostly go ahead acting like this anyway, because they think someone else can train the juniors. Some other company will appear to do that, just not in my backyard! Over time the lack of good judgement will lead to a decline in their products' quality, which will be difficult to recover from.

I think this is an important distinction. Documentation and automation can preserve artifacts, but not the actual capability.

A runbook can tell you what usually works, but it cannot tell you when the situation is no longer “usual.” That kind of judgment mostly comes from seeing real systems fail in messy ways over time.

Tools are still valuable, of course. But they work best when they help experienced people transfer knowledge, not when they are used as a reason to remove the people who understand the system.

> But documentation is not the same as field experience.

Even if it were, creating good documentation or assessing its quality requires experience in using good and bad documentation. And how would juniors build up that experience if they are using AI for everything?

I'm seeing that in Big Tech now - there's no room anymore and it's super short term thinking (couched in the language of long term thinking). It's a really dangerous game for an incumbent to fritter away the ability to innovate because there is only enough capacity to focus on the here and now.

> What remains is documentation and automation.

In my experience, documentation is unwanted because it creates work and no revenue. Which makes the tacit knowledge all the more valuable.

In the case of the military I'd say the real reason is political. After the fall of the Berlin wall, Europe collectively agreed (knowingly or not) that war is now a thing of the past and the goal should be the complete dismantling of militaries worldwide, starting with Europe. Lead by example, etc.

  • It's subtler than that. Europe was just constantly reminded by its big brother not to duplicate NATO structures, which are dependent on the US.

    • This.

      Plus, of course, each European country has to support their own defense industry, so each one of them needs to have their own howitzer/tank/whatever, and they can't agree on a common approach that would actually allow for economies of scale.

  • They agreed that war was a thing of the past, but still continued to push for NATO to allow new members anyway, ironically causing Russia (and China and everyone who is NOT in NATO) to suspect that war was NOT a thing of the past and therefore never quite abandoning their military completely. Unpopular opinion: the West should either NEVER have abandoned its military production (so as to maintain NATO actual preparedness for war, given that's the only reason for its existence) OR it should just have dismantled NATO and announced to the world that it strongly believes war is a thing of the past, and that other countries are advised to follow suit. But we actually chose the easy, halfway path: keep NATO, keep our militaries "looking strong" (which gives the signal our rivals should also do the same, obviously), but not actually be ready for any sort of major war and as the article points out, even lose actual capacity to become ready for war within any realistic timeframe. The worst possible outcome :(.

    • After the USSR fell, they left behind many countries abused by Russia that didn't believe it would leave them alone. Those countries wanted defense guarantees in case of future Russian aggression.

      NATO wanted to be deliberate and slow about admitting any new members, but countries that wanted to join felt that anyone who didn't join might get attacked or face hybrid measures from Russia to prevent them from joining next. So they grouped up and 7 countries joined NATO simultaneously. NATO was never begging them to join, they wanted to join NATO.

      People push this vision of NATO being some hungry bastard that can't get enough, but it's largely outside pressures pushing countries to want to join it.

      Sure enough they were right. Russia invaded both Georgia and Ukraine, which wanted to join already because Russia kept interfering in their societies.

    • That could be fitting the theory to the outcome, though; the unpopular opinion may still be wrong. Russia was quite different in 1999, or even more so in 1992, to the point of considering joining NATO, and China was nowhere near the threat it is today. It could have been other reasons, not the decision to keep NATO, that caused today's standoff. So, basically, the situation seems to be more complex.

> The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.

my promotion packet at work always included how great of a document-er i am

What you describe had already happened when programming tasks became about using search engines, passing data between libraries, and delegating coding to off-shore workers.

AI is almost a distraction from the older pattern: management discovers a way to make the spreadsheet look better this quarter, and the hidden cost only shows up years later.

I think the problem is even more general than that, and has existed since before LLMs. All of the decision makers are incentivized to chase short term gains and ignore everything else. Many tech companies already had huge gaps in knowledge around their own codebases simply because such knowledge and expertise is basically treated as a liability/expense rather than an asset.

I'm actually very optimistic about LLMs/AI for basically the opposite reason tech leadership/MBAs are - I think it will allow us to overcome the organizational/business/marketing hurdles that tech companies rely on short-sighted MBA-style 'leadership' for in the first place. And not because I believe in OpenAI and Anthropic - I think the future is self-hosted or community-hosted open models, and open collaboration among willing peers, building open software to solve real problems in honest ways, rather than hierarchical top-down corporate hellholes pumping out pre-enshittified crapware full of ads, tracking and dark patterns.

> The real issue, in my view, is not AI itself

In shootings, technically the guns are not the issue, since they don't fire on their own... they do enable the shooting, though.

  • Only in 2026 is AI the answer to everything, and when the negative traits of our behaviour are amplified by AI, it clearly has nothing to do with AI, even when the article is exactly about that.

I'm not sure if the tweet was a joke, but some companies are apparently hiring junior developers back because it's cheaper than AI.

There's economic / capitalist pressure to reduce cost / increase revenue and optimize for short-term profits; that's on the corporate side, anyway.

But applying the military hardware stuff to software is IMO a bit of a leap; I get the similarities, but where demand for software hasn't slowed down at all, demand for military hardware and ammunition just wasn't there.

The alternative would have been to keep all the factories alive, maintained, staff employed (or training staff ready to onboard rapidly hired staff when capacity had to go up), supplies stockpiled (and rotated), etc. And who would be willing to pay for that?

In times of peace the voter wouldn't want the government to spend billions on the military if it wasn't necessary... except for the US which still spends billions a year on the military even in peacetime. But not on their production facilities it seems.

Most workforce reductions are using AI as a cover up for greedy short term bonuses.

Any exec using AI to pay fewer people lacks imagination.

This is all basic economics.

Companies can grow organically or through strategy and adding new verticals to a point. Eventually they're too large for that. They own the whole market, they can't get regulatory approval for acquisitions and so on. At this point they only really grow at the same rate the industry or the economy does.

At this point (or often long before), the only way to increase profits is to raise prices and/or reduce costs. Profits tend to decline over time, so there is constant pressure to reduce costs to satisfy the insatiable need for increasing profits.

This is the real product AI is selling: cutting wages. Part of that is displacing workers (which, thus far, hasn't been all that successful). Where it is successful is in keeping the threat of layoffs hanging over your workers, getting them to do extra unpaid work for the same wages, and making sure they can't ask for raises.

That's what's paying for all this AI investment.

So I agree with you: the real problem isn't AI. It's capitalism.

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

It's always seemed to me that the problem is corporate profit and personal profit above all. 'Management' is a subset of this, and so is pretty much everything else, including the current drive for AI.

It's the Western, perhaps American, approach to business, as emphasized by MBAs and the media: lowering costs, driving share price, dividends, and corporate profit.

This race over the past few decades has hollowed out most Western companies.

Listen to any entrepreneur podcast, or read any website, and it's all about 'how quickly can I get to exit', i.e. personal profit.

Capitalism is the worst form of economic system, apart from all the rest.

  • I have worked for companies in different countries.

    I think the striking thing is how US companies tend to have no idea how to be wealthy. Record profits, so the CEOs use all of their tricks to get rich quick? They are already rich! Don't fix what isn't broken. Not every company needs to expand into 10 new markets, or have 5% layoffs, or double in revenue. Some of this is investor pressure, but often it's not. Some guy who made it to the top is bored, doesn't feel like he is obviously doing enough, so he keeps making decisions to justify his position.

    This isn't to incite flames, but the European companies I worked for knew how to be wealthy! The market took a downturn from COVID? They ate the cost to keep their people. Some flashy new vertical is trending? They decided it's not for them; they have a brand and customers that they should focus on while everyone else works out the kinks. The company decides: why go public at all? We are successful and don't need anyone else's influence over us.

    People say "you cannot project beyond one quarter". This is true in terms of catastrophe or gambler's success. But it's not true otherwise: if you act in Q1 like there will be a Q2, or even five years from now, or heaven forbid a second or third generation, you make different moves. You value different things.

The problem, in other words, is quarterly earnings in specific and shareholder capitalism in general.

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

I think that's still a symptom. The real problem is ideology: the monomaniacal focus on profit-making business, which infects everyone from our political leaders, down to capitalists and business leaders, down to the indoctrinated rank-and-file. Towards the end of the Cold War, the last constraints on it were abolished, and the victory over the Soviet Union made it unquestioned.

The Chinese don't have that ideological problem. Their government appears not to give a shit about how much profit individual businesses make; they care about building out supply chains and capabilities. They will bury the West, so long as the West remains in the thrall of libertarian business ideology.

  • The US is stuck in this weird irony where they recognize that Soviet-style central planning is a disaster but can't recognize that it's what megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.

    In general productive economic activity generates a surplus and that surplus allows for slack. Human beings intuitively understand this. Hobbies are frequently de facto training for things that aren't currently happening but might later. Family-owned and operated businesses are much less likely to try to outsource their core competency for the sake of quarterly profits.

    But regulatory capture and market consolidation causes the surplus to go to the corporate bureaucracies capturing the regulators instead of human beings with self-determination and goals other than number go up, and then the system optimizes for capturing the government rather than satisfying the people. "When you legislate buying and selling the first things to be bought and sold are the legislators." You throw away the competitive market and subject yourselves to the unaccountable bureaucracy, and then try to pretend it's not the same thing because this time the central planners are wearing business suits.

    • I wonder if it would work if top US companies implemented a system like the NFL draft, where companies competing for top engineers out of college get to pick from the best engineers inversely proportionally based on how they did before financially.

      While it sounds counterintuitive, it maintains a good distribution of talent across the industry.

      But that system would only work if healthy competition was the goal, not moneymaking.

      1 reply →

    • Yes - ultimately it's the same system. Far from being daring and innovatory, it's backward-looking, unimaginative, and bureaucratic.

      Vision for the future is limited to grandiose fantasies straight out of 1950s pulps and the "heroic" creation of narcissistic corporations that are cynically extractive and treat employees and customers with equal contempt.

      The differences which used to provide a convincing cover story - no single Great Leader, a functional consumer economy, votes that appear to make a difference - are being dismantled now.

      What's left are the same mechanisms of total monitoring (updated with modern tech) and reality-denying totalitarian oppression, run for the exclusive benefit of a tiny oligarchy which self-selects the very worst people in the system.

    • > megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.

      You just described Lucent.

      1 reply →

    • Yes, many Americans and other Westerners believe that the so-called "socialist" economies, like those of the Soviet Union and of Eastern Europe were non-capitalist.

      This is only an illusion created by the fact that the communists were careful to rename all important things, to fool the weaker minds that the renamed things are something else than what they really are.

      In reality, the "socialist" economies were more capitalist than the capitalist economies of USA and Western Europe. They behaved exactly like the final stage of capitalism, where monopolies control every market and there is no longer any competition.

      Unfortunately, after a huge sequence of mergers and acquisitions started in the late nineties of the last century, the economies of USA and of the EU states resemble more and more every year the former socialist economies, instead of resembling the US and W. European economies of a few decades ago.

      14 replies →

  • West: We need profits and then we’ll try to build something useful.

    China: We need to build this useful thing and then later let’s try to make profits, too.

  • What do you think the war in the gulf is about, the US cannot compete with China so they are destroying the global system that enabled them. There is no plan to have a peace with Iran, only perpetual war and the destruction of the middle east, starvation in East Asia and poverty and nationalist wars in Europe, potentially with Russia taking over vast swathes of Eastern Europe again. Suddenly Russia is the one in charge of the China-Russia relationship. It's such a stupid plan for the US that you might think it was designed by Putin himself.

    • You started well, but then the train got derailed...

      Russia has no need for Eastern Europe (they have enough land and resources; why saddle yourself with a hostile population?), as long as said Eastern Europe is not threatening them with NATO bases/missiles (the US has repeatedly shown that it does not hesitate to use its muscle when it thinks it can get away with it, so Russia's paranoia is not entirely unfounded).

      Even if Russia somehow took over Eastern Europe (most likely way: they learn from the US how to do soft 'regime change'), they have no chance against China (China is just so much bigger and better organized; the population's mentality also matters a lot). China and Russia are rather complementary; there is no reason for confrontation between them.

      But you are correct, what US is doing is really totally stupid ... although it seems designed by Netanyahu, not Putin.

      14 replies →

> What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

And workforce reduction is a noble goal. In fact, I think it's one of the most important things humanity should focus on. We should strive for a workforce of zero. Humans currently waste an enormous amount of their lives working instead of pursuing more worthwhile things.

I despise the rhetoric around this: we didn't "lose jobs" to AI, we saved ourselves a lot of work. What it does do is highlight a problem in our current society: the link between labour and access to resources (e.g. money).

I don't think that AI is the ultimate answer to the problem of work, but it can contribute to it.

  • The time to solve that resource problem is before AI concentrates power, not after. It’s LESS likely to happen when a tiny elite increases their already huge amount of power.

You sound convincing, but it also reads as very AI-generated. A lot of people will stop reading halfway.

  • People downvoting the above should consider whether the genetic fallacy is really so much a fallacy when we're talking to a bunch of machines.

    I'm pretty sure the parent comment is the work of a bot, and I have no interest in talking to an automated sock puppet.

  • ... Yeah, that was my issue as well. I'm pretty sure it _is_ AI generated, and flagged it. Hopefully moderation can follow up.

    • The worst part is that several of the replies are probably AI-generated too. So AI feeds AI, essentially. Is this going to end forums like HN? I hope not.

You're absolutely right. And the root cause is simple: the stock market / shareholders. The incentive is for quarterly returns, not the long term. That's why CEOs chase that - it's the job they are assigned by shareholders and the board. For a shareholder, what matters is the stock going up. Heck, you can make money even if it goes down, but you can't if it stands still.

  • No. It’s pure greed dominating the world. My employer is owned by a bigger private company, and the shitshow is the same as in a big megacorporation. There are hordes of colleagues ready to stab you in the back for €100 more salary a month. Disgusting.

    The company manufactures special computers. The initial owner/founder ordered CPU modules and memory cards always looking at the price break. His question was always „how many do we need to buy to get the best price?“. So he sometimes ordered 200-300 parts more than immediately needed; then the follow-up order came and he emptied the storage. Now the new manager always orders the EXACT number of memory cards as computers ordered. Price is a secondary concern; the most important thing is to work without a warehouse and get things delivered just in time. Which hasn't worked at all for a while now. The high prices from buying small quantities are eating up the profit, so people are getting fired to save costs. It is pure greed dominating the Western world. Everything is done to make the accounting look nice at any cost and collect the full bonus, despite ruining the company long term. I see this pattern very often recently.
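
    For concreteness, a throwaway back-of-the-envelope comparison (all numbers invented, not from the company above): ordering past the price break can easily beat ordering the exact quantity, even after paying to hold the surplus until the next batch of computers uses it up.

      # Invented numbers: 250 memory modules needed now, price break at 300 units.
      need         = 250
      unit_price   = 48.0    # per-unit price when ordering fewer than 300
      break_price  = 38.0    # per-unit price at the 300-unit break
      break_qty    = 300
      holding_cost = 2.0     # cost to keep one surplus module on the shelf until used

      exact_order = need * unit_price
      break_order = break_qty * break_price + (break_qty - need) * holding_cost

      print(f"exact quantity : {exact_order:9.2f}")   # 12000.00
      print(f"at price break : {break_order:9.2f}")   # 11500.00, cheaper despite the surplus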