Comment by twodave

5 months ago

Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. And that’s besides the fact that more compute and bigger context windows aren’t the right kind of progress anyway. LLMs aren’t coming for your job any more than computer vision is, for a lot of reasons, but I’ll list two more:

  1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
  
  2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.

"The only reason to reduce headcount is to remove people who already weren’t providing much value."

I wish corporations really acted this rationally.

At least where I live hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending significant portion of their time on administrative and bureaucratic tasks that were previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. Cost savings may look good on spreadsheet, but really the overall efficiency of the system suffered.

  • That's what I see when companies cut juniors as well. AI cannot replace a junior because a junior has full and complete agency, accountability, and purpose. They retain learning and become a sharper bespoke resource for the business as time goes on. The PM tells them what to do and I give them guidance.

    If you take away the juniors, you are now asking your seniors to do that work instead which is more expensive and wasteful. The PM cannot tell the AI junior what to do for they don't know how. Then you say, hey we also want you to babysit the LLM to increase productivity, well I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two types of time.

    • > well I can't leave a task with the LLM and come back to it tomorrow

      You could actually just do that, leave an agent on a problem you would give a junior, go back on your main task and whenever you feel like it check the agent's work.

      4 replies →

  • But wouldn't these spreadsheets be tracking something like total revenue? If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?

    I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.

    • First of all, it's not unlikely that the dentist is the owner. And in any case, when you have a small system of less than 150 people, it's easy enough for a handful of people to see what's actually going on.

      Once you get to something in the thousands or tens of thousands, you just have spreadsheets; and anything that doesn't show up in that spreadsheet might as well not exist. Furthermore, you have competing business units, each of which want to externalize their costs to other business units.

      Very similar to what GP described -- when I was in a small start-up, we had an admin assistant who did most of the receipt entry and what-not for our expense reports; and we were allowed to tell the company travel agent our travel constraints and have them give us options for flights. When we were acquired by a larger company, we had to do our own expense reports and do our own flight searches. That was almost certainly a false economy.

      And then when we became a major conglomerate, at some point they merged a bunch of IT functions; so the folks in California would make a change and go home, and those of us in Europe or the UK would come in to find all the networks broken, with no way to fix it until the people in California started coming in at 4pm.

      In all cases, the dollars saved are clearly visible in the spreadsheet, while the "development velocity" lost is noisy, diffuse, and hard to quantify or pin down to any particular cause.

      I suppose one way to quantify that would be to have the Engineering function track time spent doing admin work and charge that to the Finance function; and time spent idle due to IT outages and charge that to the IT department. But that has its own pitfalls, no doubt.

      2 replies →

    • > If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?

      I am going to assume that the Doctors are just working longer hours and/or aren't as attentive as they could be and so care quality declines but revenue doesn't. Overworking existing staff in order to make up for less staff is a tried and true play.

      > I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.

      By conflating 'Doctors' and 'Dentists' you are basically saying the equivalent of 'all Doctors' and 'Doctors of a certain specialty'. Dentists are 'Doctors for teeth' like a pediatrician is a 'Doctor for children' or an Ortho is a 'Doctor for bones'.

      Teeth need maintenance, which is the time consuming part of most visits, and the Dentist has staff to do that part of it. That in itself makes the specialty not really that comparable to a lot of others.

      3 replies →

    • Probably because dentists are more cash based and less battling with insurance for payments.

      Customers are more price sensitive so the dentists have to be too.

  • I'm a full-stack developer. Recently I find that almost 90% of my work deadlines have been brought forward, and the bosses' scheduling has become stricter. The coworker who is particularly good at pair programming with AI prefers to reduce his or her scheduling (kind of unconsciously). Work is sudden, but salary remains steady. What a bummer.

  • Disagreed. You need more doctors, not useless secretaries. Generating bureaucratic bullshit doesn't make any work go faster; it actually just creates more work at best and, in general, just slows everything down.

    It is perfect that the primary stakeholder is responsible for his own bureaucratic impact. This way he'll learn to generate the minimum amount that is viable to be efficient. Otherwise they don't care and generate waste by the metric ton.

    Because of the French hospital bureaucratic nightmare, for a simple 15-minute intervention (cyst removal), I had 2 appointments and received 4 different letters by post. Not only did they waste more of my time than necessary (every time you need to wait about 45 minutes before anything happens), but since the physician cannot be duplicated and I had to meet him each time, nothing of value was gained as well.

    With modern technologies, secretaries should barely exist. They still do because it's all about the laws and compliance; everyone is protecting his ass first and foremost. Without this, a system without the bureaucracy would be much more efficient. It's how they do it outside the western world basically.

Funny the original post doesn’t mention AI replacing the coding part of his job.

There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

I want to be optimistic. But it’s hard to ignore what I’m doing and seeing. As far as I can tell, we haven’t hit serious unemployment yet because of momentum and slow adoption.

I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.

  • >There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

    Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).

    The most interesting questions are the ones that assume human equivalency.

    Suppose an AI can produce like a human.

    Are you ok with merging that code without human review?

    Are you ok with having a codebase that is effectively a black box?

    Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?

    Are you ok with being dependent on the company providing this code generation?

    Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?

    Will we be ok if the well of public technical discussion LLMs are feeding from dries up?

    Those are the interesting debates I think.

    • > Are you ok with having a codebase that is effectively a black box?

      When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler, the answer is last Friday, but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we're dependent on the companies that develop our compilers, where we can at least see the output, but also on the companies that make our CPUs, which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be ok with it.
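      As an illustrative aside (my analogy, not the commenter's example): Python's standard-library `dis` module gives the same kind of rarely-taken peek at compiler output, here bytecode rather than machine code.

```python
import dis

def square(n):
    return n * n

# The compiled form of square() that almost nobody ever reads.
# The exact opcode names vary between Python versions.
bytecode = [ins.opname for ins in dis.get_instructions(square)]
print(bytecode)
```

      On a recent CPython this prints something like ['RESUME', 'LOAD_FAST', 'LOAD_FAST', 'BINARY_OP', 'RETURN_VALUE'], but the point stands regardless of version: the layer is trusted, not inspected.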

      2 replies →

    • Have you ever double-checked (in human fashion, not just using another calculator) the output from a calculator?

      When calculators were first introduced I'm sure some people such as scientists and accountants did exactly that. Calculators were new, people likely had to be slowly convinced that these magic devices could be totally accurate.

      But you and I were born well after the invention of calculators, our entire lives nobody has doubted that even a $2 calculator can immediately determine the square root of an 8-digit number and be totally accurate. So nobody verifies, and also, a lot of people can't do basic math.
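      A sketch of what that by-hand verification amounts to (my example, not the commenter's): square the calculator's answer and confirm you get the input back.

```python
import math

# The square root a cheap calculator reports for an 8-digit number...
x = 12345678
root = math.sqrt(x)

# ...and the "human fashion" check: multiply it by itself and compare.
# Any tiny discrepancy here is floating-point noise, not calculator error.
assert abs(root * root - x) < 1e-6 * x
print(round(root, 4))
```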

    • I don't think it really matters whether you or I or regular people are ok with it if the people with power are. There doesn't seem to be much any of us regular folks can do to stop it, especially as AI eliminates more and more jobs, further reducing the economic power of everyday people.

      2 replies →

  • Well, I would just say to take into account the fact that we're starting to see LLMs be responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would be price gouging if everyone weren't doing it.

    Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be.

    Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it.

  • I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.

    If anything the quality has gotten worse, because the models are now so good at lying when they don’t know it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying, it never says “I don’t know”.

    Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.

    The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield “write some code that does x” requests, much faster without LLMs.

    • > Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect

      Fully agree; in fact, this literally happened to me a week ago -- ChatGPT was confidently incorrect about its simple lock structure for my multithreaded C++ program, and wrote paragraphs upon paragraphs about how it works, until I pressed it twice about a (real) possibility of some operations deadlocking, and then it folded.

      > Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.

      As a university assistant professor trying to keep up with AI while doing research/teaching as before, this also happens to me and I am dismayed by it. I am certain there are models out there that can solve IMO problems and generate research-grade papers, but the ones I can get easy access to as a customer routinely mess up stuff, including:

      * Adding extra simplifications to a given combinatorial optimization problem, so that its dynamic programming approach works.

      * Claiming some inequality is true but upon reflection it derived A >= B from A <= C and C <= B.

      (This is all ChatGPT 5, thinking mode.)

      You could fairly counter that I need to get more funding (tough) or invest much more of my time and energy to get access to models closer to what Terence Tao and other top people trying to apply AI in CS theory are currently using. But at least the models cheap enough for me to access as a private person are not on par with what the same companies claim to achieve.
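      For readers who haven't hit this class of bug: the deadlock hazard mentioned above usually comes down to lock ordering. A minimal sketch (mine, not the commenters'), in Python rather than C++ for brevity -- threads that always acquire locks in the same global order cannot deadlock on them, while letting either thread reverse the order can leave each holding one lock and waiting forever for the other.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
results = []

def worker(name):
    # Safe: every thread takes lock_a before lock_b. If one thread
    # instead took lock_b first, the two could each hold one lock
    # while blocking forever on the other -- the classic deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # both workers completed: ['t1', 't2']
```

      This is exactly the kind of invariant that is easy to state, tedious to verify, and that an LLM will happily assert without checking.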

  • idk man, I work at a big consultant company and all I'm hearing is dozens of people coming out of their project teams like, "yeah, I'm dying to work with AI; all we're doing is talking about it with clients."

    It's like everyone knows it is super cool, but nobody has really cracked the code for what its economic value truly, truly is yet.

  • > There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

    Any sources on that? Except for some big tech companies, I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...

    • This is what I also see. AI is used sparingly. Mostly for information lookup and autocomplete. It's just not good enough for other things. I could use it to write code if I really babysit it and triple check everything it does? Cool cool, maybe sometime later.

      1 reply →

> The only reason to reduce headcount is to remove people who already weren’t providing much value.

There were many secretaries up until the late 20th century who took dictation, either writing notes of what they were told or from a recording; they then typed it out and distributed memos. At first, there were many people typing; then mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print anything out, and now instant messaging reduces email clutter and keeps messages shorter.

All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While these people may not have been held in high esteem, they were critical for getting things done and scaling.

I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.

  • The thing that replaced the old memo is not email, it's meetings. It's not uncommon to see meetings with hundreds of participants for what in the past would have been a simple memo.

    It would be amazing if LLMs could replace the role that meetings play in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk with your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it.

  • The crucial observation is the fact that automation has historically been a net creator of jobs, not destroyer.

    • Sure, if you're content to stack shelves.

      AI isn't automation. It's thinking. It automates the brain out of human jobs.

      You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.

      5 replies →

    • That observation is only useful if you can point at a capability that humans have that we haven't automated.

      Hunter-gatherers were replaced by the technology of agriculture. Humans were still needed to provide the power to plow the earth and reap the crops.

      Human power was replaced by work animals pulling plows, but only humans could make decisions about when to harvest.

      Jump forward a good long time,

      Computers can run algorithms to indicate when best to harvest. Humans are still uniquely flexible and creative in their ability to deal with unanticipated issues.

      AI is intended to make "flexible and creative" no longer a bastion of human uniqueness. What's left? The only obvious one I can think of is accountability: as long as computers aren't seen as people, you need someone to be responsible for the fully automated farm.

    • 'Because thing X happened in the past, it is guaranteed to happen in the future, and we should bet society on it instead of, you know, trying to plan for the future. Magic jobs will just appear, trust me'

      1 reply →

  • > At first, there were many people typing, then later [...]

    There were more people typing than ever before? Look around you, we're all typing all day long.

    • I think they meant that there was a time when people’s jobs were:

      1. either reading notes in shorthand, or reading something from a sheet that was already fully typed using a typewriter, or listening to recorded or live dictation

      2. then typing that content out into a typewriter.

      People were essentially human copying machines.

This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect.

  • Overseeing robots is a time-limited activity. Even building robots has a finite horizon.

    Current tech can't yet replace everything, but many jobs can already see the horizon or are at sunset.

    The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.

    This time around, some components of the progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale, but others aren't, since the tech is run by small teams at large scale and has virtually none of the dependent industries around it that, say, cars do. It's energy and GPUs.

    Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale field. Maybe a few tens of millions can be employed there?

    Meanwhile I just don't see the designer + AI job role materializing; I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.

    • > because end of the day people can use this tech to create wealth at unprecedented scale

      _Where?_ So far the only widespread result of this technology is a chatbot interface shoved into every UI that never needed it.

      Nothing has been improved, no revelatory tech has come out (tools to let you chatbot faster don’t count).

      4 replies →

  • I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily.

    • > Computers replaced humans as the best chess players, not computers with human oversight.

      Oh? I sat down for a game of chess against a computer and it never showed up. I was certain it didn't show up because computers are unable to without human oversight, but tell me why I'm wrong.

      14 replies →

    • Humans still play chess and horses are still around as a species.

      (Disclaimer: this is me trying to be optimistic in a very grim and depressing situation)

      12 replies →

    • Unless the state of the art has advanced, it was the case that grandmasters playing with computer assistance ("centaur chess") played better than either computers or humans alone.

    • Perhaps you have missed the essential point. Who drives the cars? It's not the horses, is it? And a chess computer is just as unlikely to start a game of chess on its own as a horse is to put on its harness and pull a plow across a field. I'm not entirely sure what impact all this will have on the job market, but your comparisons are flawed.

      2 replies →

  • I think the big problem here though, is that humans go from being mandatory to being optional, and this changes the competitive landscape between employers and workers.

    In the past a strike mattered. With robots, it may have to go on for years to matter.

    • A strike going long enough and becoming big enough becomes a political matter. In the limit, if politicians don't find a solution, blood gets spilled. If military and police robots are in place by that time, you can ask yourself what's the point of those unproductive human leeching freeriders at all.

      1 reply →

    • In this scenario wages will have been driven down so much that there will be barely anyone left to buy the products made by these fully automated corps. A strike won't work, but a revolt may and is more likely to happen.

      1 reply →

> most companies will still have more work to do than resources to assign to those tasks

This is very important yet rarely talked about. Having worked in a well-run group on a very successful product, I could see that no matter how many people were on a project there was always too much work. And always too many projects. I am no longer with the company, but I can see some of the ideas talked about back then being launched now, many years later. For a complex product there is always more to do, and AI would simply accelerate development.

Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment.

Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.

[1] - https://en.wikipedia.org/wiki/Keynesian_economics

  • Productivity gains are more likely to be used to increase margins (profits, and therefore value to shareholders) than to reduce work hours.

    At least since the Industrial Revolution, and probably before, the only advances that have led to shorter work weeks are unions and worker protections. Not technology.

    Technology may create more surplus (food, goods, etc) but there’s no guarantee what form that surplus will reach workers as, if it does at all.

    • Margins require a competitive edge. If productivity gains are spread throughout a competitive industry, margins will not get bigger; prices will go down.

      7 replies →

    • > Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) then it is to reduce work hours

      I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service of that goal.

      3 replies →

    • Failure of politics and the media then. Majority of voters have been fooled into voting against their economic interests.

  • In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane.

    "There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."

    • A study [1] I was looking at recently was extremely informative. It's a poll from UCLA given to incoming classes that they've been carrying out since the 60s. In 1967, 86% of students felt it was "essential" or "very important" to "[develop] a meaningful philosophy of life", while only 42% felt the same about "being very well off financially." By 2015 those values had essentially flipped, with only 47% viewing a life philosophy as very important, and 82% viewing being financially well off as very important.

      It's rather unfortunate it only began in 1967, because I think we would see an even more extreme flip if we were able to just go back a decade or two more, and back towards Keynes' time. As productivity and wealth accumulation increased, society seems to have trended in the exact opposite direction he predicted. Or at least there's a contemporary paradox. Because I think many, if not most, younger people hold wealth accumulation with some degree of disdain yet also seek to do the exact same themselves.

      In any case, in a society where wealth is seen as literally the most important aspect in life, it's not difficult to predict what follows.

      [1] - https://www.heri.ucla.edu/monographs/50YearTrendsMonograph20...

      5 replies →

    • > We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years

      Still haven’t gotten rid of work for work’s sake being a virtue, which explains everything else. Welfare? You don’t “deserve” it. Until we solve this problem, we’re more or less heading straight for feudalism.

  • > We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.

    Didn’t we also get standards of living much higher than he would ever imagine? I think blaming everything on billionaires is really misguided and shallow.

    • It depends on how you value things. I'd prefer to have a surplus of time and a scarcity of gizmos, rather than a surplus of gizmos and a scarcity of time. Obviously basic needs being met is very important, but we've moved way beyond that as a goal, while somehow also kind of simultaneously missing it.

I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when a new technology advances in a slow, linear progression, the point at which it overtakes an earlier technology (or "horses" in this case) arrives very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old adage "How did you go bankrupt? - Slowly at first, then all at once."

I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.

Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".

If there’s more work than resources, then is that low value work or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive but not sure it will be societally good.

  • Not low-value or it just wouldn't be on the board. Lower value? Maybe, but there are many, many reasons things get pushed down the backlog. As many reasons as there are kinds of companies. Most people don't work at one of the big tech companies where work priorities and business value are so stratified. There are businesses that experience seasonality, so many of the R&D activities get put on the backburner until the busy season is over. There are businesses that have high correctness standards, where bigger changes require more scrutiny, are harder to fit into a sprint, and end up getting passed over for smaller tasks. And some businesses just require a lot of contextual knowledge. I wouldn't trust an AI to do a payroll calculation or tabulate votes, for instance, any more than I would trust a brand new employee to dive into the deep end on those tasks.

> 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.

They have more work to do until they don't.

The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.

We still need food, farming hasn't stopped being a thing, nevertheless we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating.

As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.

And now we have fast fashion, with clothes so fragile that they might not last one wash, and yet we still spend a lower percentage of our incomes on clothes than people did in the pre-industrial age. Even with demand boosted by clothes that don't last, we still make enough to supply it.

Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.

Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm

> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.

This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.
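The Amdahl's-law analogy can be made concrete: the overall speedup from accelerating one part of a job is capped by the fraction of the job that isn't accelerated. A minimal sketch (the 30% figure for "time spent writing code" is an assumption for illustration, not a measurement):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is sped up by factor s.

    Amdahl's law: speedup = 1 / ((1 - p) + p / s)
    """
    return 1.0 / ((1.0 - p) + p / s)

# Suppose (hypothetically) coding is 30% of a developer's job and an LLM
# makes it effectively instantaneous (s -> infinity). The job as a whole
# still only gets ~1.43x faster, because the other 70% is untouched.
print(round(amdahl_speedup(0.3, float("inf")), 2))  # 1.43
```

The counter-argument in the paragraph above is that LLMs don't only shrink the "writing code" term - they also eat into the other 70%, which is exactly what weakens the Amdahl-style ceiling.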

Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric.

Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".

But at the same time, you absolutely can use an LLM as a PM. I bet all the PMs can supply anecdotes about LLMs screwing up, just like the rest of us can, but it's still a job task that this generation of AI is automating at the same time as all the other bits.

  • I agree mostly, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do 2 messages ago unless I twist their arm. This is what they are, as a technology. They'll get better. I think there's some impact associated with this, but it's not a doomsday scenario like people are pretending.

    We are talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength where the scientists are trying to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change.

    • Yup, I think that's fair.

      I'm not sure how many humans know how to be genuinely innovative; nor whether it's learnable; and also, assuming it is learnable, whether known ML is sample-efficient enough to learn that skill from however many examples currently exist.

      As you say, we don't understand what we're trying to build. It's remarkable how far we got without understanding what we build: for all that "cargo cult" is seen as a negative in the 20th century onwards, we didn't understand chemistry for thousands of years but still managed cement, getting metals from ores, explosives, etc.

      Then we did figure out chemistry and one of the Nobel prizes in it led to both chemical weapons and cheap fertiliser.

      We're all over the place.