
Comment by sd9

10 hours ago

The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

To be honest, I’m looking at leaving software because the job has turned into a different sort of thing than what I signed up for.

So I think this article is partly right, Bob is not learning those skills which we used to require. But I think the market is going to stop valuing those skills, so it’s not really a _problem_, except for Bob’s own intellectual loss.

I don’t like it, but I’m trying to face up to it.

> So if Bob can do things with agents, he can do things.

The problem arises when Bob encounters a problem too complex or unique for agents to solve.

To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, eventually a real cook will be able to make way better meals than anything you can buy at a grocery store.

The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

  • Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.

    • People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

      The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

      This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.


    • But AI might actually get you there, in terms of superior pedagogy: personal Q&A tutoring that most individuals couldn't have afforded before.

    • There are a lot of people in academia who are great at thinking about complex algorithms but can't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go through the junior developer route. Same with debugging and profiling skills.

      But we might see a lot more specialization as a result


    • That’s a good analogy, but I think we’ve already gone from 0 to 10 rungs over the last couple of years. If we assume that the models or harnesses will keep improving, more and more rungs will be removed. The vast majority of programmers aren’t doing novel, groundbreaking work.

  • The correct distinction is: if you can't do something without the agent, then you can't do it.

    The problem that the author describes is real. I have run into it hundreds of times now. I will know how to do something, I tell AI to do it, the AI does not actually know how to do it at a fundamental level and will create fake tests to prove that it is done, and you check the work and it is wrong.

    You can describe to the AI to do X at a very high-level but if you don't know how to check the outcome then the AI isn't going to be useful.

    The story about the cook is 100% right. McDonald's doesn't have "chefs", they have factory workers who assemble food. The argument with AI is that working in McDonald's means you are able to cook food as well as the best chef.

    The issue with hiring is that companies won't be able to distinguish between AI-driven humans and people with knowledge until it is too late.

    If you have knowledge and are using AI tools correctly (i.e. not trying to zero-shot work) then it is a huge multiplier. That the industry is moving towards agent-driven workflows indicates that the AI business is about selling fake expertise to the incompetent.

  • > The problem arises when Bob encounters a problem too complex or unique for agents to solve.

    It’s actually worse than that: the AI will not stop and say “too complex, try again in a month with the next SOTA model”. Rather, it will give Bob a plausible-looking solution that Bob cannot identify as right or wrong. If Bob is working on an instant-feedback problem, it’s OK: he can flag it, try again, ask for help. But if the error can’t be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already been promoted by then, and Bob’s replacement gets to deal with it. In either case, Bob cannot be trusted any more than the LLM itself.

  • > The problem arises when Bob encounters a problem too complex or unique for agents to solve.

    Or even sooner, when Bob’s internet connection is down, or he ran out of tokens, or has been banned from his favourite service, or the service is down, or he needs to solve a problem with a machine unable to run local models, or essentially any situation where he’s unable to use an LLM.

  • To me it feels more like learning to cook versus learning how to repair ovens and run a farm. Software engineering isn’t about writing code any more than it’s about writing machine code or designing CPUs. It’s about bringing great software into existence.

    • Or farming before and after agricultural machines. The principles are the same, but the “tactical” stuff is different.

  • That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable, he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents he'll get through life achieving at least a normal level of success.

    But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs. At that point, Bob may discover that anything agents can't do, Alice can't do either, because she is limited by trying to think using soggy meat as opposed to a high-performance engineered thinking system. Not going to win that battle in the long term.

    > The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

    The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.

    • > we're trending towards superintelligence with these AIs

      The article addresses this, because, well... no we aren't. Maybe we are. But it's far from clear that we're not moving toward a plateau in what these agents can do.

      > Whether a human does actual work or not isn't particularly exciting to a market.

      You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

      I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

      Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough, and able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.


    • > we're trending towards superintelligence with these AIs

      I wouldn't count on that, because even if it happens, we don't know when it will happen, and it's one of those things where how close it looks is no indication of how close it actually is. We could just as easily spend the next 100 years being 10 years away from AGI. Just look at fusion power, self-driving cars, etc.


    • > But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs

      do you have any evidence for that, though? Besides marketing claims, I mean.


    • > That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise.

      I have literally never run into this in my career; challenges have always been something to help me grow.

    • The author's point went a little over your head.

      It doesn't matter if Bob can be normal. There was no point in him being paid to be on the program.

      From the article:

      > If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.


    • The market values bulldozers for bulldozing jobs. No one is going to use a bulldozer to mow a lawn.

      If Bob is going to spend $500 in tokens for something I can do for $50, I don't think Bob is going to stay long in the lawn-mowing market driving a bulldozer.

    • "Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.

      Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?

    • > Not going to win that battle in the long term.

      I would take that bet on the side of the wet meat. In the future, every AI will be an ad executive. At least the meat programming won't be preloaded to sell ads every N tokens.

    • From the article:

      > There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

      We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

      Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.


  • How many people who cook professionally are gourmet chefs? I think it ends up that gourmet cooking is so infrequently needed that we don’t require everyone who makes food to do it, just a small group of professionally trained people. Most people who make food for a living work somewhere like McDonald’s and Applebee’s where a high level of skill is not required.

    There will still be programming specialists in the future — we still have assembly experts and COBOL experts, after all. We just won’t need very many of them and the vast majority of software engineers will use higher-level tools.

    • That's the problem though: programmers who become the equivalent of McDonald's workers will be paid poorly like McDonald's workers and be treated as disposable like McDonald's workers.

  • I held this point of view for a while, but I came to the (possibly naive) conclusion that it was just forced self-assurance. Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce a better end product if supervised and prompted correctly. The issue is most don’t take the time to do that. I’m not saying I like that this is true, quite the opposite. It is the reality of things now.

    • > Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce a better end product if supervised and prompted correctly.

      Which is more work, and less fun, than doing it myself. No thanks.

  • Just because Bob doesn't know e.g. Rust syntax and library modules well doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real-world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the driver's seat.

    Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.

    • But if Bob doesn't know Rust syntax and library modules well, how can he be expected to evaluate the generated Rust code? Bugs can be very subtle and not obvious, and Rust has some constructs that are very uncommon (or don't exist) in other languages.

      Human nature says that Bob will skim over and trust the parts that he doesn't understand as long as he gets output that looks like he expects it to look, and that's extremely dangerous.


  • Bob+agents is going to be able to solve much more complex problems than Bob without agents.

    That's the true AI revolution: not the things it can accelerate, but the things it puts in reach that you wouldn't have countenanced doing before.

  • Worse, soon fewer and fewer people will taste good food, as even increasingly upscale restaurants just use pre-made ingredients.

    As fewer know what good food tastes like, the entire market will enshitify towards lower and lower calibre food.

    We already see this with, for example, fruit in cold climates. I've known people who had only ever bought it from the supermarket, then tried it at a farmers' market during the two weeks it's in season. The look of astonishment on their faces at the flavour is quite telling. They simply had no idea how dry and flavourless supermarket fruit is.

    Nothing beats an apple picked just before you eat it.

    (For reference, produce shipped to supermarkets is often picked, even locally, before being entirely ripe. It lasts longer, and handles shipping better, than perfectly ripe fruit.)

    The same will be true of LLMs. They're already out of "new things" to train on. I question whether they'll ever learn new languages; who would they observe to train on? And what does it matter if the code is unreadable by humans regardless?

    And this is the real danger. Eventually, we'll have entire coding languages that are just weird, incomprehensible, tailored to LLMs, maybe even a language written by an LLM.

    What then? Who will be able to decipher such gibberish?

    Literally all true advancement will stop, for LLMs never invent, they only mimic.

    • > As fewer know what good food tastes like, the entire market will enshitify towards lower and lower calibre food.

      This happened a long time ago in the US. Drive through California's Central Valley sometime and sample the fruit sold fresh along the side of the road. It's a completely different experience than the version you get at Safeway.

    • Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomato as the example, the difference there is night and day pretty much across the board.

      If humans can prove that bespoke human code brings value, it'll stick around. I expect that the cases where this will be true will just gradually erode over time.

  • Real-world cooks don't exactly avoid those newfangled microwave ovens though. They use them as a professional tool for simple tasks where they're especially suitable (especially for quick defrosting or reheating), which sometimes allows them to cook even better meals.

I'm glad you've posted this comment because I strongly feel more people need to see this sentiment, and push back against what many above want to become the new norm. I see capitulation and compliance in advance, and it makes me sad. I also see two very valid, antipodal responses to this phenomenon: exit from the industry, and malicious compliance through accelerationism.

To the reader and the casual passerby, I ask: Do you have to work at this pace, in this manner? I understand completely that mandates and pressure from above may instill a primal fear to comply, but would you be willing to summon enough courage to talk to maybe one other person you think would be sympathetic to these feelings? If you have ever cared about quality outcomes, if for no other reason than the sake of personal fulfillment, would it not be worth it to firmly but politely refuse purely metrics-focused mandates?

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

"Being able to deliver using AI" wasn't the point of the article. If it was the point, your comment would make sense.

The point of the program referred to in the article is not to deliver results, but to deliver an Alice. Delivering a Bob is a failure of the program.

Whether you think that a Bob+AI delivers the same results is not relevant to the point of the article, because the goal is not to deliver the results, it's to deliver an Alice.

  • I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

    • People never cared about delivering Alices; they were an implementation detail. I think the article argues that they're still an important one, but one that isn't produced automatically anymore.


    • > I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

      That's irrelevant to the goal of the program - they care. Once they stop caring, they'd shut that program down.

      Maybe it would be replaced with a new program that has the goal of delivering Bobs+AI, but what would be the point? I mean, the article explained in depth that there is no market for the results currently, so what would be the point of efficiently generating those results?

      The market currently does not want the results, so replacing the current program with something that produces Bobs+AI would be for... what, exactly?


They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

I do think coding with local agents will keep improving to a good level but if deep thinking cloud tokens become too expensive you'll reach the limits of what your local, limited agent can do much more quickly (i.e. be even less able to do more complex work as other replies mention).

  • > They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

    Even if inference were subsidized (afaik it isn't when paying through API calls; subscription plans might indeed lose money on heavy users, but that's how any subscription model typically works, and it can still be profitable overall), models are still improving/getting cheaper, so that seems unlikely.

    • > afaik it isn't when paying through API calls

      There is no evidence for this. The claims that API is "profitable on inference" are all hearsay. Despite the fact that any AI executive could immediately dismiss the misconception by merely making a public statement beholden to SEC regulation, they don't.

      > Models are still improving/getting cheaper

      The diminishing returns have set in for quality, and for a while now that increased quality has come at the cost of massive increases in token burn, it's not getting cheaper.

      Worse yet, we're in an energy crisis. Iran has threatened to strike critical oil infrastructure, and repairs would take years.

      AI is going to get significantly more expensive, soon.

    • It probably is still subsidized, just not as much. We won't know if these APIs are profitable unless these companies go public, and until then it's safe to bet these APIs are underpriced to win market share.


> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

I dread the flip side of this which is dealing with obtuse bullshit like trying to understand why Oracle ADF won’t render forms properly, or how to optimize some codebase with a lot of N+1 calls when there’s looming deadlines and the original devs never made it scalable, or needing to dig into undercommented legacy codebases or needing to work on 3-5 projects in parallel.
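For what it's worth, the N+1 pattern is exactly the kind of mechanically checkable fix that agents can iterate on. A minimal sqlite3 sketch of the problem and the batched fix (the schema and names here are made up purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

# N+1: one query for the author list, then one extra query per author.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
n_plus_1 = {
    name: [t for (t,) in conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (aid,))]
    for aid, name in authors
}

# Fix: a single JOIN replaces all the per-row queries.
batched = {name: [] for _, name in authors}
for name, title in conn.execute("""
    SELECT a.name, p.title FROM authors a JOIN posts p ON p.author_id = a.id
"""):
    batched[name].append(title)

assert batched == n_plus_1  # same result, constant number of queries
```

The fixed version issues a constant number of queries regardless of row count, which is the whole point; whether an agent or a human writes it, the test is the same.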

Agents iterating until those start working (at least cases that are testable) and taking some of the misery and dread away makes it so that I want to theatrically defenestrate myself less.

Not everyone has the circumstance to enjoy pleasant and mentally stimulating work that’s not a frustrating slog all the time - the projects that I actually like working on are the ones I pick for weekends, I can’t guarantee the same for the 9-5.

  • Oh yes, it’s an entirely privileged position to be able to enjoy your work. But it’s a privilege I have enjoyed and not one I want to give up unless I have to. We spend an extraordinary amount of our waking life at work.

    • I do hope you can find a set of circumstances that don't make you give it up too much. And hey, if you end up moving to another line of work than software, no reason why you couldn't still enjoy working on whatever project you want over the weekend, too.

It's the next level of abstraction. Bob is still learning, he's just learning a different set of skills than Alice.

Also, the premise that it took each of them a year to do the project means Bob was slacking because he probably could've done it in less than a month.

> So if Bob can do things with agents, he can do things.

Yes, but how does he know if it worked? If you have instant feedback, you can use LLMs and correct when things blow up. In fact, you can often try all options and see which works, which makes it “easy” in terms of knowledge work. If you have delayed feedback, costly iterations, or multiple variables changing underneath you at all times, understanding is the only way.

That’s why building features and fixing bugs is easy, and system-level technical decision making is hard. One has instant feedback, the other can take years. You could make the “soon” argument, but even with better models, they’re still subject to training data, which is minimal for year+ delayed feedback and multivariate problems.

Many things have come and gone in this fashion oriented industry. Everyone is already bored to hell by AI output.

AI in software engineering is kept afloat by the bullshitters who jump on any new bandwagon because they are incompetent and need to distract from that. Managers like bullshit, so these people thrive for a couple of years until the next wave of bullshit is fashionable.

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Following the model of how startups have worked for the last 20 years or so, I expect agents to eventually be locked-down/nerfed/ad-infested for higher payments. We are enjoying the fruits of VC money at the moment and they are getting everyone addicted to agents. Eventually they need to turn a profit.

Not sure how this plays out, but I would hang on to any competencies you have for anyone (or business) that wants to stick around in software. Use agents strategically, but don't give up your ability to code/reason/document, etc. The only way I can see this working differently is that there are huge advances in efficiency and open-source models.

  • That's one of several reasons why I'm trying not to rely too much on LLMs. The prospect of only being able to code with a working internet connection and a subscription to some megacorp service is not particularly appealing to me.

    • Local/open LLMs are a thing though. You can build a server for hosting decent-sized (100-200B) models at home for a few k$. They may not be Opus-level, but hopefully we can get something matching current SOTA that we can run locally before the megacorps get too greedy.

      Alternatively you could find some other people to share the HW cost and run some larger models (like Kimi-K2.5 at 1.1T params).

  • Even when they're profitable, the premium ad-free service will still be cheaper than humans, so those skills will still be mostly useless.

> The thing is, agents aren’t going away...

Aren't they currently propped up by investor money?

What happens when the investors realize the scam that it is and stop investing or start investing less...

  • > Aren't they currently propped up by investor money?

    Are Chinese model shops propped up by investor money? Is Google?

    Open weights models are only 6 months behind SOTA. If new model development suddenly stopped, and today's SOTA models suddenly disappeared, we would still have access to capable agents.

> if Bob can do things with agents, he can do things.

This point is directly addressed in the paper: Bob will ultimately not be able to do the things Alice can, with or without agents, because he didn't build the necessary internal deep structure and understanding of the problem space.

And if Alice later on ends up being a better scientist (using agents!) than Bob will ever be, would you not say there was something lost to the world?

Learning needs a hill to climb, and somebody to actually climb it. Bob only learned how to press an elevator button.

There is still a lot of engineering to be done with LLMs. Maybe not exactly writing code but I think a lot of optimization problems will be there no matter what.

Some people treat the toilet as a magic hole where they throw stuff in, flush, and think it is fine.

If you throw garbage in, you will at some point have problems.

We are at the stage where people think it is fine to drop everything into an LLM, but then they will see the bill for usage and might be surprised that they burned money and the result was not exactly what they expected.

  • Yep. I hate to predict the future but I’m betting on small, open models, used as tools here and there. Which is great, you can get 90% of the speed up with 5-10% of the cost once you account for how time consuming it is to make sense of and fix the output.

    The economics and security model on full agents running in loops all day may come home to roost faster than expertise rot.

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

I am in the same boat, but close enough to retirement that I'm less "scared" about it. For me I'm moving up the chain; not people management, but devoting a lot more of my time up the abstraction continuum. Looking a lot more at overall designs and code quality and managing specs and inputs and requirements.

I wrote some design docs past few days for a big project the team is embarking on. We never had that before, at least not in the level of detail (per time quantum) that I was able to produce. Used 2 models from 2 companies - one to write, one to review, and bounce between them until the 3 of us agree.

Honestly it didn't take any less time than I would have spent alone, but the level of detail was better, and it covered more edge cases. Calling it a "win" right now. I still enjoy it, as most of the code we're writing is mostly fancy CRUD anyway and doesn't have huge scaling problems to solve (and too few devs, I feel, are being honest about their work here).

> if Bob can do things with agents, he can do things

I’ve been reminded lately of a conversation I had with a guy at a hackerspace café in Berlin around ten years ago.

He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

He was lamenting that these days, software was written in higher level languages, and that more and more programmers no longer had the same level of knowledge about the lower level workings of computers. He had a valid point and I enjoyed talking to him.

I think about this now when I think about agentic coding. Perhaps over time most software development will be done without knowledge of the high-level programming languages we know today. There will still be people in the future who work in those languages and are intimately familiar with them, just like today there are still people who work in assembly, even if their share has gotten smaller over time relative to those who don’t.

And just like there are areas where assembly is still required knowledge, I think there will be areas where knowledge of the programming languages we use today will remain necessary and vibe coding alone won’t cut it. But the percentage of people working in high-level languages will go down, relative to the number of people vibe coding and never even looking at the code that the LLM is writing.

  • I see these analogies a lot, but I don't like them. Assembly has a clear contract. You don't need to know how it works because it works the same way each time. You don't get different outputs when you compile the same C code twice.

    LLMs are nothing like that. They are probabilistic systems at their very core. Sometimes you get garbage. Sometimes you win. Change a single character and you may get a completely different response. You can't easily build abstractions when the underlying system has so much randomness because you need to verify the output. And you can't verify the output if you have no idea what you are doing or what the output should look like.
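    To make the probabilistic point concrete: a sampling-based decoder draws each token from a distribution, so identical input can yield different output across runs, while greedy decoding is the deterministic special case. A toy sketch, with a made-up token distribution standing in for a model:

```python
import random

# Toy next-token distribution for one fixed "prompt" (invented numbers).
tokens = ["foo", "bar", "baz"]
probs = [0.5, 0.3, 0.2]

def sample_completion(rng: random.Random, n: int = 5) -> list[str]:
    # Each call draws fresh samples: same input, possibly different output.
    return rng.choices(tokens, weights=probs, k=n)

run_a = sample_completion(random.Random())
run_b = sample_completion(random.Random())
# run_a and run_b may differ, unlike a compiler, which maps the same
# source to the same output every time.

def greedy_completion(n: int = 5) -> list[str]:
    # Greedy decoding (always pick the argmax token) is deterministic.
    best = tokens[probs.index(max(probs))]
    return [best] * n

assert greedy_completion() == greedy_completion()
```

    Real deployments sit somewhere between these extremes (temperature, top-p, and so on), which is why output verification stays on the human's plate.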

    • I think these analogies are largely correct, but TFA is about something subtly different:

      LLMs don't make it impossible to do anything yourself, but they make it economically impractical to do so. In other words, you'll have to largely provide both your own funding and your own motivation for your education, unless we can somehow restructure society quickly enough to substitute both.

      With assembly, we arguably got lucky: It turns out that high-level programming languages still require all the rigorous thinking necessary to structure a programmer's mind in ways that transfer to many adjacent tasks.

      It's of course possible that the same is true for using LLMs, but at least personally, something feels substantially different about them. They exercise my "people management" muscle much more than my "puzzle solving" one, and wherever we're going, we'll probably still need some puzzle solvers too.

  • > He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

    Please, not this pre-canned BS again!

    Comparing abstractions to AI is an apples to oranges comparison. Abstractions are dependable due to being deterministic. When I write a function in C to return the factorial of a number, and then reuse it again and again from Java, I don't need a damn set of test cases in Java to verify that factorial of 5 is 120.
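    That determinism claim is easy to demonstrate; a minimal sketch (in Python rather than C, purely for brevity):

```python
def factorial(n: int) -> int:
    """Deterministic: the same input always yields the same output."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The contract holds on every call; no probabilistic re-verification needed.
assert factorial(5) == 120
assert all(factorial(5) == 120 for _ in range(1000))
```

    Once this passes once, it passes forever; there is no distribution to re-sample, which is precisely what makes it an abstraction you can build on.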

    With LLMs, you do. They aren't an abstraction, and seeing this worn out, tired and routinely debunked comparison being presented in every bloody thread is wearing a little thin at this point.

    We've seen this argument hundreds of times on this very site. Repeating it doesn't make it true.

  • Lovely story, thanks for sharing.

    I wonder how many assembly programmers got over it and retrained, versus moved on to do something totally different.

    I find the agentic way of working simultaneously more exhausting and less stimulating. I don’t know if that’s something I’m going to get over, or whether this is the end of the line for me.

    • I wasn't there at the time, but I believe that most assembly programmers learned higher-level languages.

      My mother actually started programming in octal. I don't remember her exact words, but she said something to the effect that her life got so much better when she got an assembler. I suspect that going from assembly to compilers was much the same - you no longer had to worry about register allocations and building stack frames.

      1 reply →

  • The difference is that you don’t need to review the machine code produced by a compiler.

    The same is not true for LLM output. I can’t tell my manager I don’t know how to fix something in production the agent wrote. The equivalent analogy would be if we had to know both the high-level language _and_ assembly.

    • I was an engineering manager for a commercial C/C++ toolchain used in embedded systems development. We, and our customers, examined the generated code continuously. In our case, to figure out better optimizations (and fix bugs); for some of our customers, because their device had severe memory constraints or they were attempting difficult performance optimizations.

      Moving up to an MMU and running Linux was a different (more abstract) world. Although since it was embedded, low-level functions might still be in both assembly and C if not the apps on top.

Can you run an industry level LLM at home?

If not, you're trading learning to cook for Uber-only meals.

And since the alternative is starving, Uber will boil the pot.

Don't give up your self sufficiency.

  • > Can you run an industry level LLM at home?

    Assuming that by "at home" you mean using ordinary hardware, not something that costs as much as a car: yes, very slowly, for simple tests. (Not proprietary models, obviously, but quite capable ones nonetheless.) It's not exactly viable for agentic coding, which needs boatloads of tokens for the simplest things, but you can run smaller local models that are still quite capable for many things.

  • I’m very good at the handcrafted stuff, I’ve been doing this a while. I don’t feel like giving up my self sufficiency, I just feel like the writing is on the wall.

    • By "you" I actually meant this hypothetical person who's only good enough for AI-assisted work. Though even for those of us who are already experienced, we should keep up the manual stuff, even if it's just like going to the gym. I don't see myself retaining my skills for long by just reviewing LLM output.

      1 reply →

  • The costs just aren't that high. They could be 10x higher and it still wouldn't be a huge deal.

  • Can you build a computer at home?

    There is absolutely nothing self-sufficient about computer hardware

    • Or generate electricity? Or grow enough food to survive? Medicines?

      "Self-sufficiency" arguments coming from tech nerds are so tiring.

    • No, and that's the reason we're now paying twice what we paid a couple years ago. But I can write software at home.

      We're already vulnerable to enshittification in so many areas, why increase the list? How does that work in my favor at all?

I think a good analogy is people not being able to work on modern cars because they are too complex or require specialised tools. True I can still go places with my car, but when it goes wrong I'm less likely to be able to resolve the problem without (paid for) specialised help.

  • And just like modern vehicles rob the user of autonomy, so too for coding agents. Modern tech moves further and further away from empowering normal people and increasingly serves to grow the influence of corporations and governments over our day to day lives.

    It's not inherent, but it is reality unless folks stop giving up agency for convenience. I'm not holding my breath.

    • Cars are actually a good metaphor, it works on so many levels. Modern cars have "democratized" access to long-distance travel in a sense, and most people don't need to do any heavy maintenance themselves. But the flipside is that places that have adopted it have become "car dependent" and build cities assuming access to cars.

      Are we net better off than if we didn't have cars and simply built public transport with walkable cities?

I understand your point, but this is a purely utilitarian view and it doesn’t account for the fact that, even if agents may do everything, it doesn’t mean they should, both in a normative and positive sense.

There is a vast range of scenarios in which being more or less independent from agents to perform cognitive tasks will be both desirable and necessary, at the individual, societal and economic level.

The question of how much territory we should give up to AI really is both philosophical and political. It isn’t going to be settled in mere one-sided arguments.

  • The people who pay my bills operate in a largely utilitarian fashion.

    They’re not going to pay me to manually program because I find it more enjoyable, when they can get Bob to do twice as much for less.

    This is why I say I don’t like it, but it is what it is.

Some people probably enjoyed writing assembly (I am not one of those people, especially when I had to do it on paper in university exams) and code agents probably can do it well - but for the hard tasks, the tasks that are net new, code agents will produce bad results and you still need those people who enjoy writing that to show the path forward.

Code agents are great template generators and modifiers, but for net new (innovative!) work they're often barely usable without a ton of handholding or „non code generation coding“.

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

You're still working on intellectually stimulating programming problems. AI doesn't go all the way with any reliability, it just provides some assistance. You're still ultimately responsible for getting things right, even with key AI help.

I don't like it either. But what is really guaranteeing other markets from flunking similarly later on? What's to say other jobs are going to be any better? Back in college, most of my peers would say "I'm not cut out for anything else. This is it". They were, sure enough, computer and/or math people at heart from an early age.

More importantly, what's gonna be the next stable category of remote-first jobs that a person with a tech-adjacent or tech-minded skillset can tack onto? That's all I care about, to be honest.

I may hate tech with a passion at times and be overly bullish on its future, but there's no replacing my past jobs which have graced me and many others with quality time around family, friends, nature and sports while off work.

  • I don’t know, it’s only since about December that I felt things really start to shift, and February when my job started to become very different.

    Personally I’m looking at more physical domains, but it’s early days in my exploration. I think if I wanted to stick to remote work (which I have enjoyed since 2020), then the AI story would just keep playing out.

    I’m also totally open to taking a big pay cut to do something I actually enjoy day to day, which I guess makes it easier.

    • So recent? I've been on sabbatical (the real kind, self-funded) for eighteen months, and while my sense has been things have not stopped heading downhill since I stepped off the ride back in 2024, to hear of such a sudden step change is somewhat novel. "Very different" just how, if you don't mind my asking?

      (I'm also looking for local, personally satisfying work, in exchange for a pay cut. Early days, and I am finding the profession no longer commands quite the social cachet it once did, but I'm not foolish enough to fail to price for the buyer's market in which we now seek to sell our labor. Besides, everyone benefits from the occasional reminder to humility! "Memento mori" and all that.)

      10 replies →

Bob can't do things, Bob's AI can do things that Bob asks it to do. And the AI can only do things that have been done before, and only up to a certain level of complexity. Once that level is reached, the AI can't do things anymore, and Bob certainly isn't going to do anything about that, because Bob doesn't know how to do anything himself. One has to question what value Bob himself even brings to the table.

But let's assume Bob continues to have an active role, because the people above him bought in to the hype and are convinced that "prompt engineer" is the job of the future. When things inevitably start falling apart because the Bobs of the world hit a wall and can't solve the problems that need to be solved (spoiler: this is already happening), what do we do? We need Alices to come in and fix it, but the market actively discourages the existence of Alice, so what happens when there are no more Alices left? Do we just give up and collectively forget how to do things beyond a basic level?

I have a feeling that, yes, we as a species are just going to forget how to do things beyond a certain level. We are going to forget how to write an innovative science paper. We are going to forget how to create websites that aren't giant, buggy piles of React spaghetti that make your browser tab eat 2GB of RAM. We've always been forgetting, really - there are many things that humans in the past knew how to do, but nobody knows how to do today, because that's what happens when the incentive goes missing for too long. Price and convenience often win over quality, to the point that quality stops being an option. This is a form of evolutionary regression, though, and negatively affects our quality of life in many ways. AI is massively accelerating this regression, and if we don't find some way to stop it, I believe our current way of life will be entirely unrecognizable in a few decades.

  • The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment. I personally think both are really important, and I also think AI won’t be able to do both better than any human could for another while, and moreso when it comes to doing both at the same time (though I’m not going to claim it’s never going to).

    My point is that both Alice and Bob have a place in this world. In fact, Bob isn't really doing much different from what a Principal Investigator is already doing today in a research context.

    • > The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment.

      Those aren't mutually exclusive.

      "People who do things" can do both, and doing the latter is a function of doing the former, so they tend to do the latter sufficiently well.

      "People who prompt things" can only do the latter, and they routinely do it poorly.

      5 replies →

Being able to deliver junior-level work isn't the goal of training juniors.

Programmers used to have the ultimate skill: given enough resources, they could solve anything.

Now you don't do the thing yourself, and you just do other things when the LLMs get stuck. There is no "given enough time, I can do it" anymore.

I can't see how somebody would go about solving slop bugs (slugs :)) in a heavily AI-generated codebase.

Hope I'm wrong, but that's something I personally encountered. Stay sharp.

>The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

He'll get things (papers, code, etc.) which he can't evaluate. And the next round of agents will be trained on the slop produced by the previous ones. Both successive Bobs and successive agents will have less understanding.

Agents may not go away, but they are going to fall off significantly once people wake up to how bad they are at making software. It's like in the early 00s when business execs were stoked about the idea that they could cut costs by hiring bottom rate Indian contractors: it turned out to be a disaster for quality, and eventually there was a shift back towards having staff in the US. The same thing is going to happen with LLMs.

The thing is Bob can use HammerAsAService™ to put in a nail. It is so cheap! Way cheaper than buying an actual hammer.

The problem with unlearning generic tools and relying on ones you rent from big corporations is that it's unreliable in the long term. The prices will rise. The conditions will worsen. Oh nice that Bob made a thing using HammerAsAService™, but the terms and conditions (which change once a week) he accepted last week clearly say it belongs to the company now. Bob should be happy they aren't suing him yet, but Bob isn't sure whether the thing that came out a month later was independently developed by that company or just a clone of his work. Bob wishes he knew how to use a hammer.

  • The majority of nails people might want to rent a HammerAsAService for these days can already easily be put in by open source hammers you can run on consumer, uh… workbenches.

    • Not to stretch the metaphor too far, but those workbenches require understanding (and hammers) to set up.

      Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

      1 reply →

> So if Bob can do things with agents, he can do things.

I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.

That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.

  • There's a long, detailed, often repeated answer to your open question in the article.

    Namely, if you can't do it without the AI, you can't tell when it's given you plausible sounding bullshit.

    So Bob just wasted everyone's time and money.

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Can he? If he outsources all his thinking and understanding to agents, can he then fix things he doesn't know how to fix without agents?

Any skill is practice first and foremost. If Bob has had no practice, what then?

  • My point is it doesn’t matter whether he can fix things without agents. The real world isn’t an exam hall where your boss tells you “no naughty AI!”, you just get stuff done, and if Bob can do that with agents, nobody cares how he did it.

    • But can Bob actually do that with agents, without limit? Right now, he's going to hit a ceiling at some point, and the Alices of the world will run circles around him.

      The question is: will agents improve to the point that even the most capable Alices will never be needed to solve problems? Maybe? Maybe not? I'm worried that they won't improve to that degree.

      And even if they do, what is the purpose of humans in this world?

      1 reply →

> I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

It’s not for me. Being a middle manager, with all of the liability and none of the agency, is not what I want to do for a living. Telling a robot to generate mediocre web apps and SVGs of penguins on bicycles is a lousy job.

> The thing is, agents aren’t going away.

Let’s wait until they have a business model that creates profit.

Most of them won’t go away, but many will become outdated, slow, or enshittified.

Imagine building your career on the quality of Google’s search.

> agents aren’t going away

Why not? Once the true cost of token generation is passed on to the end user and costs go up by 10 or 100 times, and once the honeymoon delusion of "oh wow, I can just prompt the AI to write code" fades, there's a big question as to whether what's left is worth it. If it isn't, agents will most certainly go away, and all of this will be consigned to the "failed hype" bin along with cryptocurrency and the "metaverse".

The whole premise is bad. If the supervisor can do it in 2 months, then they can do it in 2 weeks with AI.

Didn't PhD projects use to be about advancing the state of the art?

Maybe we'll get back to that.