Comment by daxfohl

13 days ago

I worry about the "brain atrophy" part, as I've felt this too. And not just atrophy, but even more so I think it's evolving into "complacency".

Like there have been multiple times now where I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things. Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier just to quit fighting it and let it do things the way it wanted.

What I've seen is that after the initial dopamine rush of being able to do things that would have taken much longer manually, a few iterations of this kind of interaction have slowly led to disillusionment with the whole project, as AI keeps pushing it in a direction I didn't want.

I think this is especially true if you're trying to experiment with new approaches to things. LLMs are, by definition, biased by what was in their training data. You can shock them out of it momentarily, which is awesome for a few rounds, but over time the gravitational pull of what's already in their latent space becomes inescapable. (I picture it as working like a giant Sierpinski triangle).

I want to say the end result is very akin to doom scrolling. Doom tabbing? It's like, yeah I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what the AI will do next is so low, so....

It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.

This would be fine if not for one thing: the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)

I'm not going to make myself dependent, let myself atrophy, run on a treadmill forever, for something I happen to rent and can't keep. If I wanted a cheap high that I didn't mind being dependent on, there's more fun ones out there.

  • > let myself atrophy, run on a treadmill forever, for something

    You're lucky to afford the luxury not to atrophy.

    It's been almost 4 years since my last software job interview and I know the drill when it comes to preparing for one.

    Long before LLMs, my skills were naturally atrophying in my day job.

    I remember the good old days of J2ME, writing everything from scratch. Or writing some graph editor for university, or some speculative Huffman coding algorithm.

    That kept me sharp.

    But today I feel like I'm living in that Netflix series about people being in Hell, with the Devil tricking them into thinking they're in Heaven while tormenting them: how on planet Earth do I keep sharp with Java, streams, virtual threads, RxJava, tuning the JVM, React, Kafka, Kafka Streams, AWS, k8s, Helm, Jenkins pipelines, CI/CD, ECR, Istio issues, in-house service discovery, hierarchical multi-region setups, metrics and monitoring, autoscaling, spot instances and multi-arch images, multi-AZ, reliable and scalable yet as cheap as possible, yet as cloud native as possible, Hazelcast and distributed systems, low-level PostgreSQL performance tuning, Apache Iceberg, Trino, and various in-house frameworks and idioms over all of this? Oh, and let's not forget the business domain, coding standards, code reviews, mentorship and organizing technical events. Also, it's 2026, so nobody hires QA or scrum masters anymore - take on those hats as well.

    So LLMs it is, the new reality.

    • This is a very good point. Years ago, when you worked in a LAMP stack, the term LAMP could fully describe your software engineering, database setup and infrastructure. I shudder to think of the acronyms for today's tech stacks.

      3 replies →

    • Yeah, I agree it's totally crazy... but do most app deployments need even half that stuff? I feel like most teams at most companies can just build an app and deploy it using some modern PaaS-like thing.

      9 replies →

  • Businesses too. For two years it's been "throw everything into AI." But now that shit is getting real, shouldn't they be feeling a bit coy about letting AI run ahead of their engineering team's ability to manage it? How long will it be until we start seeing outages that just don't get resolved because the engineers have lost the plot?

    • From what I am seeing, no one is feeling coy, simply because of the cost savings that management is able to show the higher-ups and shareholders. At that level, there's very little understanding of anything technical, and outages or bugs will simply get a "we've asked our technical resources to work on it". But everyone understands that spending $50 when you were spending $100 is a great achievement. That is, if you don't stop to think about any downsides. Said management will then take the bonuses and disappear before the explosions start, resumes glowing about all the cost savings and team leadership achievements. I've experienced this first hand very recently.

      2 replies →

  • > It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.

    I agree with the sentiment but I would have framed it differently. The LLM is a tool, just like code completion or a code generator. Right now we focus mainly on how to use a tool, the coding agent, to achieve a goal. This takes place at a strategic level. Prior to the inception of LLMs, we focused mainly on how to write code to achieve a goal. That took place at a tactical level, and required making decisions and paying attention to a multitude of details. With LLMs our focus shifts to a higher-level abstraction. Operational concerns change too. When writing and maintaining code yourself, you focus on architectures that help you simplify some classes of changes. When using LLMs, your focus shifts to building context and helping the model implement its changes effectively. The two goals seem related, but are radically different.

    I think a fairer description is that with LLMs we stop exercising some skills that are only required or relevant if you are writing your code yourself. It's like driving with an automatic transmission vs manual transmission.

    • Previous tools have been deterministic and understandable. I write code with emacs and can at any point look at the source and tell you why it did what it did. But I could produce the same program with vi or vscode or whatever, at the cost of some frustration. But they all ultimately transform keystrokes to a text file in largely the same way, and the compiler I'm targeting changes that to asm and thence to binary in a predictable and visible way.

      An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now). So teams start cargo culting ways to deal with specific LLMs' idiosyncrasies and your domain knowledge becomes about a specific product that someone else has control over. It's like learning a specific office suite or whatever.

      9 replies →

    • The little experience I have with LLMs already shows pretty clearly that they are much better at navigating and modifying a well-structured code base. And they struggle, sometimes to the point where they can't progress at all, if tasked to work on bad code - I mean the kind of bad you always get after multiple rounds of unsupervised vibe coding.

  • > I happen to rent and can't keep

    This is my fear - what happens if the AI companies can't find a path to profitability and shut down?

    • This is why local models are so important. Even if the non-local ones shut down, and even if you can't run local ones on your own hardware, there will still be inference providers willing to serve your requests.

    • Recently I was thinking about how some (expensive) consumer electronics like the Mac Studio can run pretty powerful open-source models with pretty efficient power consumption - low enough that they could easily run on private renewable energy - and those models are on most (all?) fronts much more powerful than the original ChatGPT, especially if connected to a good knowledge base. Meaning that, aside from very extreme scenarios, I think it is safe to say there will always be a way not to go back to how we used to code, as long as we can provide the right hardware and energy. Of course, personally I think we will never need to go to such extreme ends... despite knowing people who seem to seriously think developed countries will heavily run out of electricity one day, which, while I reckon there might be tensions, seems like a laughable idea IMHO.

  • > the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)

    I haven’t found this to be true at all, at least so far.

    As models improve I find that I can start dropping old tricks and techniques that were necessary to keep old models in line. Prompts get shorter with each new model improvement.

    It’s not really a cycle where you’re re-learning all the time or the information becomes outdated. The same prompt structure techniques are usually portable across LLMs.

    • Interesting, I've experienced the opposite in certain contexts. CC is shipped so hastily that new versions often destabilize existing workflows. E.g. people were raving about the new user-prompt tools that CC uses to get more context, but they messed up my simple git slash commands.

  • I think you have to be aware of how you use any tool, but I don't think this is a forever treadmill. It's been pretty clear to me since early on that the goal is for you, the user, to not have to craft the perfect prompt. At least for my workflow it's pretty darn close to that already.

    • If it ever gets there, then anyone can use it and there's no "skill" to be learned at all.

      Either it will continue to be this very flawed non-deterministic tool that requires a lot of effort to get useful code out of it, or it will be so good it'll just work.

      That's why I'm not gonna heavily invest my time into it.

      3 replies →

  • I have deliberately moderated my use of AI in large part for this reason. For a solid two years now I've been constantly seeing claims of "this model/IDE/Agent/approach/etc is the future of writing code! It makes me 50x more productive, and will do the same for you!" And inevitably those have all fallen by the wayside and been replaced by some new shiny thing. As someone who doesn't get intrinsic joy out of chasing the latest tech fad, I usually move along and wait to see if whatever is being hyped really starts to take over the world.

    This isn't to say LLMs won't change software development forever, I think they will. But I doubt anyone has any idea what kind of tools and approaches everyone will be using 5 or 10 years from now, except that I really doubt it will be whatever is being hyped up at this exact moment.

    • HN is where I keep hearing the “50× more productive” claims the most. I’ve been reading 2024 annual reports and 2025 quarterlies to see whether any of this shows up on the other side of the hype.

      So far, the only company making loud, concrete claims backed by audited financials is Klarna, and once you dig in, their improved profitability lines up far more cleanly with layoffs, hiring freezes, business simplification, and a cyclical rebound than with Gen-AI magically multiplying output. AI helped support a smaller org that eliminated the more complicated financial products with lots of edge cases, but it didn't create a step-change in productivity.

      If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.

      We’re just not seeing that yet.

      5 replies →

  • In my experience all technology has been like this though. We are on the treadmill of learning the new thing with or without LLMs. That's what makes tech work so fun and rewarding (for me anyway).

  • I assume you're living in a city. You already rent a lot of things from others (security, electricity, water, food, shelter, transportation) - what is different with white collar work?

    • My apartment has been here for years and will be here for many more. I don't love paying rent on it but it certainly does get maintained without my having to do anything. And the rest of the infrastructure of my life is similarly banal. I ride Muni, eat food from Trader Joe's, and so on. These things are not going away and they don't require me to rewire my brain constantly in order to make use of them. The city infrastructure isn't stealing my ability to do my work, it just fills in some gaps that genuinely cannot be filled when working alone and I can trust it to keep doing that basically forever.

I think I should write more about this, but I have been feeling very much the same way. I've recently been exploring using Claude Code/Codex as the "default", so I decided to implement a side project.

My gripe with AI tools in the past is that the kind of work I do is large and complex and with previous models it just wasn't efficient to either provide enough context or deal with context rot when working on a large application - especially when that application doesn't have a million examples online.

I've been trying to implement a multiplayer game with server authoritative networking in Rust with Bevy. I specifically chose Bevy as the latest version was after Claude's cut off, it had a number of breaking changes, and there aren't a lot of deep examples online.

Overall it's going well, but one downside is that I don't really understand the code "in my bones". If you told me tomorrow that I had to optimize latency, or that there was a 1-in-100 edge case, not only would I not know where to look, I don't think I could tell you how the game engine works.

In the past, I could never have gotten this far without really understanding my tools. Today, I have a semi-functional game and, truth be told, I don't even know what an ECS is or what advantages it provides. I really consider this a huge problem: if I had to maintain this in production and there was a SEV0 bug, am I confident enough that I could fix it? Or am I confident the model could figure it out? Or is the model good enough that it could scan the entire code base and intuit a solution? One of these three questions has to have a "yes" answer, or else brain atrophy is a real risk.

  • I'm worried about that too. If the error is reproducible, the model can eventually figure it out from experience. But a ghost bug that I can't find a pattern for? The model ends up in a "you're absolutely right" loop as it incorrectly guesses at different solutions.

    • Are ghost bugs even real?

      My first job had the devs working front-line support years ago. Due to that, I learnt an important lesson in bug fixing.

      Always be able to re-create the bug first.

      There is no such thing as a ghost bug - you just need to ask the reporter the right questions.

      Unless your code is multi-threaded, to which I say, good luck!

      7 replies →

  • > I've been trying to implement a multiplayer game with server authoritative networking in Rust with Bevy. I specifically chose Bevy as the latest version was after Claude's cut off, it had a number of breaking changes, and there aren't a lot of deep examples online.

    I am interested in doing something similar (Bevy, not multiplayer).

    I had the thought that you ought to be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

    I'm also curious how you test if the game is, um... fun? Maybe it doesn't apply so much for a multiplayer game, I'm thinking of stuff like the enemy patterns and timings in a soulslike, Zelda, etc.

    I did use ChatGPT to get some rendering code for a retro RCT/SimCity-style terrain mesh in Bevy and it basically worked, though several times I had to tell it "yeah uh nothing shows up", at which point it said "of course! the problem is..." and then I learned about mesh winding, fine, okay... I felt like I was in over my head and decided to go for a 2D game instead, so I didn't pursue that further.

    • >I had the thought that you ought to be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

      I've found that there are two issues that arise that I'm not sure how to solve. You can give it docs and point it at them and it can generally figure out the syntax, but the next issue I see is that without examples, it kind of just brute-forces problems like a 14-year-old.

      For example, the input system originally just let you move left and right, and Claude popped it into an observer function. As I added more and more controls, that function got littered with more and more code, until it was a ~600-line function responsible for a large chunk of the game logic.

      While trying to parse it I then had it refactor the code - but I don't know if the current code is idiomatic. What would be the cargo doc or rust-analyzer equivalent for good architecture?

      I'm running into this same problem when trying to use Claude Code for internal projects. Some parts of the codebase have really intuitive internal frameworks, and Claude Code can rip through them and produce great idiomatic code. Others are bogged down by years of tech debt and performance hacks, and there Claude Code can't be trusted with anything other than multi-paragraph prompts.

      >I'm also curious how you test if the game is, um... fun?

      Luckily for me this is a learning exercise, so I'm not optimizing for fun. I guess you could ask Claude Code to inject more fun.

      2 replies →

  • I ran into similar issues with context rot on a larger backend project recently. I ended up writing a tool that parses the AST to strip out function bodies and only feeds the relevant signatures and type definitions into the prompt.

    It cuts down the input tokens significantly, which is nice for the monthly bill, but I found the main benefit is that it actually stops the model from getting distracted by existing implementation details. It feels a bit like overengineering, but it makes reasoning about the system architecture much more reliable when you don't have to dump the whole codebase into the context window.
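
    A minimal sketch of the idea, using Python's ast module (the real tool is specific to my own stack, so treat the details as illustrative):

    ```python
    # Sketch: replace every function body with "..." so only signatures,
    # class definitions, docstrings, and type annotations reach the prompt.
    import ast
    import pathlib
    import sys

    def strip_bodies(source: str) -> str:
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                docstring = ast.get_docstring(node)
                # Keep the docstring (if any) so the model still sees intent.
                node.body = [ast.Expr(ast.Constant(docstring))] if docstring else []
                node.body.append(ast.Expr(ast.Constant(...)))
        return ast.unparse(tree)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(f"# {path}")
            print(strip_bodies(pathlib.Path(path).read_text()))
    ```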

  • > I don't really understand the code "in my bones".

    Man, I absolutely hate this feeling.

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." - Socrates on Writing and Reading, Phaedrus 370 BC

  • If one reads the dialogue, Socrates is not the one "saying" this; he is telling a story of what King Thamus said to the Egyptian god Theuth, the inventor of writing. Theuth is asking the king to give writing out to the people, but the king is unsure about it.

    It's what is known as one of the Socratic "myths," and really just contributes to a web of concepts that leads the dialogue to its ultimate terminus of aporia (it being a relatively early Plato dialogue). Socrates, characteristically, doesn't really give his take on writing. In the text, he is just trying to help his friend write a horny love letter/speech!

    I can't bring it up right now, but the end of the dialogue has a rather beautiful characterization of writing in the positive, saying that perhaps logos can grow out of writing, like a garden.

    I think if pressed, Socrates/Plato would say that LLMs are merely doxa machines, incapable of logos. But I am just spitballing.

  • Presenting this quote without additional commentary is an interesting Rorschach test.

    Thankfully more and more people are seriously considering the effects of technology on true wisdom and getting off the "all technological progress is clearly great, look at all these silly unenlightened naysayers from the past" train.

    • Socrates was right about the effects. Writing did indeed cause us to lose the talent of memorizing. Where he was wrong, though (or rather where this quote without context is wrong), is that memorizing turned out, for the most part, not to be the important skill to have.

      When the same warnings are applied to LLMs, however, Socrates may be correct both about the effect and about the importance of the skill being lost. If we lose the ability to think and solve various problems, we may indeed be losing a skill that is a very important part of our humanity.

      4 replies →

  • That is interesting because your mental abilities seem to be correlated with orchestrating a bunch of abstractions you have previously mastered. Are these tools making us stupid because we no longer need to master any of these things? Or are they making us smarter because the abstraction is just trusting AI to handle it for us?

  • It's unclear if you've presented this quote in order to support or criticize the idea that new technologies make us dumber. (Perhaps that's intentional; if so, bravo).

    To me, this feels like support. I was never an adult who could not read or write, so I can't check my experience against Socrates' specific concern. But speaking to the idea of memory, I now "outsource" a lot of my memory to my smartphone.

    In the past, I would just remember my shopping list, and go to the grocery store and get what I needed. Sure, sometimes I'd forget a thing or two, but it was almost always something unimportant, and rarely was a problem. Now I have my list on my phone, but on many occasions where I don't make a shopping list on my phone, when I get to the grocery store I have a lot of trouble remembering what to get, and sometimes finish shopping, check out, and leave the store, only to suddenly remember something important, and have to go back in.

    I don't remember phone numbers anymore. In college (~2000) I had the campus numbers (we didn't have cell phones yet) of at least two dozen friends memorized. Today I know my phone number, my wife's, and my sister's, and that's it. (But I still remember the phone number for the first house I lived in, and we moved out of that house when I was five years old. Interestingly, I don't remember the area code, but I suppose that makes sense, as area codes weren't required for local dialing in the US back in the 80s.)

    Now, some of this I will probably ascribe to age: I expect our memory gets more fallible as we get older (I'm in my mid 40s). I used to have all my credit/debit card numbers, and their expiration dates and security codes, memorized (five or six of them), but nowadays I can only manage to remember two of them. (And I usually forget or mix up the expiration dates; fortunately many payment forms don't seem to check, or are lax about it.) But maybe that is due to new technology to some extent: most/all sites where I spend money frequently remember my card for me (and at most only require me to enter the security code). And many also take Paypal or Google Pay, which saves me from having to recall the numbers.

    So I think new technology making us "dumber" is a very real thing. I'm not sure if it's a good thing or a bad thing. You could say that, in all of my examples, technology serving the place of memory has freed up mental cycles to remember more important things, so it's a net positive. But I'm not so sure.

    • I don't think human memory works like that, at least not in theory. Storage is not the limiting factor of human memory, but rather retention. It takes time and effort to retain new information. In the past you spent some time and effort to memorize the shopping list and the phone number: mulling it over in your mind (or out loud), repeated recalls, exposure, even mnemonic tricks like rhymes, alliterations, connecting it with pictures, stories, etc. if what you had to remember was something more complicated/extensive/important. And retention is not forever: unless you repeat it, you will lose it. And you only have so much time for repetition and recall, so inevitably there will be memories which won't be repeated, and can't be recalled.

      So when you started using technology to offload your memory, what you gained was the time and effort you previously spent encoding these things into your memory.

      I think there is a fundamental difference, though, between phone book apps and LLMs. Losing the ability to remember a phone number is not as severe as losing the ability to form a coherent argument, or to look through sources, or, for a programmer, to work through logic and abstract complex logic into simpler chunks. If a scholar loses the skill of looking through sources, or a programmer loses the ability to abstract complex logic, they are losing a fundamental part of what they need to do their jobs. It's as if a stage actor lost the ability to memorize the script and instead relied on a tape recorder while on stage.

      Now, if a stage actor loses the ability to memorize the script, they will soon be out of a job, but I fear that in the software industry (and academia) we are not so lucky. I suspect we will see a lot of people actually taking that tape recorder on stage and continuing to do their work as if nothing could be more normal. And the drop in quality will predictably follow.

  • Yup.

    My personal counterpoint is Norman's thesis in Things That Make Us Smart.

    I've long tried, and mostly failed, to consider the tradeoffs, to be ever mindful that technologies are never neutral (winners & losers), per Postman's Technopoly.

  • And so we learn that over 2000 years before the microphone came to be, Socrates invented the mic drop.

    In all seriousness though, it's just crazy that anybody was thinking about these things at the dawn of civilization.

  • Writing/reading and AI are so categorically different that the only way you could compare them is if you fundamentally misunderstand how both of them work.

    And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.

    • The argument Socrates is making is specifically that writing isn't a substitute for thinking, but it will be used as such. People will read things "without instruction" and claim to understand those things, even if they do not. This is a trade-off of writing. And the same thing is happening with LLMs in a widespread manner throughout society: people are having ChatGPT generate essays, exams, legal briefs and filings, analyses, etc., and submitting them as their own work. And many of these people don't understand what they have generated.

      Writing's invention is presented as an "elixir of memory", but it doesn't transfer memory and understanding directly - the reader must still think to understand and internalize information. Socrates renames it an "elixir of reminding": writing only tells readers what other people have thought or said. It can facilitate understanding, but it can also enable people to take shortcuts around thinking.

      I feel this is an apt comparison between, for example, someone who has only ever vibe-coded and an experienced software engineer. The skill of reading (in Socrates's argument) is not equivalent to the skill of understanding what is read. Which is why, I presume, the GP posted it in response to a comment regarding fear of skill atrophy - they are practicing code generation but are spending less time thinking about what all of the produced code is doing.

    • I know managers who can read code just fine; they're just not able/willing to write it. Though the AI helps with that too. I've had a few managers dabble back into coding, especially scripts and whatnot, where I want them to be pulling unique data and doing one-off investigations.

    • I read the grandparent comment as saying people have been claiming that the sky is falling forever… AI will be both good for learning and development and bad. It's always up to the individual whether it benefits them or atrophies their mind.

      6 replies →

    • To understand the impact on computer programming per se, I find it useful to imagine that the first computer programs I had encountered were, somehow, expressed in a rudimentary natural language. That (somewhat) divorces the consideration of AI from its specific impact on programming. Surely it would have pulled me in certain directions. Surely I would have had less direct exposure to the mechanics of things. But, it seems to me that’s a distinction of degree, not of kind.

I've been thinking along these lines. LLMs seem to have arrived right when we were all getting addicted to Reels/TikToks/whatever. For some reason we love to swipe, swipe, swipe, until we get something funny/interesting/shocking that gives us a short-lasting dopamine hit (or whatever chemical it is) that feels good for about 1 second, and we want MORE, so we keep swiping.

Using an LLM is almost exactly the same. You get the occasional "wow! I've never seen it do that before!" moments (whether that thing it just did was even useful or not), get a short hit of feel-goods, and then keep using it trying to get another hit. It keeps providing them at just the right intervals to keep people going, just like TikTok does.

I ran into a new problem today: "reading atrophy".

As in, if the LLM doesn't know about it, some devs are basically giving up and not even going to RTFM. I literally had to explain to someone today how something works by... reading through the docs and linking them to the docs with screenshots and highlighted paragraphs of text.

Still got push back along the lines of "not sure if this will work". It's. Literally. In. The. Docs.

  • That's not really a new thing now, it just shows differently.

    15 years ago I was working in an environment where they had lots of Indians as cheap labour - and the same thing will show up in any environment where you hire a mass of cheap people while looking more at the cost than at qualifications: you pretty much need to trick them into reading the stuff that's relevant.

    I remember one case where one of them had a problem they couldn't solve, and couldn't give me enough info to help remotely. In the end I was sitting next to them, making them read out loud anything showing up on the screen. It took a few tries where they were just closing dialog boxes without reading them, but eventually we had it under control enough that they were able to read the error messages to me, and then went "Oh, so _that's_ the problem?!"

    Overall interacting with a LLM feels a lot like interacting with one of them back then, even down to the same excuses ("I didn't break anything in that commit, that test case was never passing") - and my expectation for what I can get out of it is pretty much the same as back then, and approach to interacting with it is pretty similar. It's pretty much an even cheaper unskilled developer, you just need to treat it as such. And you don't pair it up with other unskilled developers.

  • The mere existence of the phrase "RTFM" shows that this phenomenon was already a thing. LLMs are the worst thing to happen to people who couldn't read before. When HR-type people ask what my "superpower" is, I'm so tempted to say "I can read", because I honestly feel like it's the only difference between me and people who suck at working independently.

  • As someone working in technical support for a long time, this has always been the case.

    You can have as many extremely detailed and easy-to-parse guides, references, etc. as you want; there will always be a portion of customers who refuse to read them.

    Never could figure out why because they aren't stupid or anything.

    • > Never could figure out why because they aren't stupid or anything.

      They may be intelligent, but they don't sound wise.

> Eventually it was easier just to quit fighting it and let it do things the way it wanted.

I wouldn't have believed it a few years ago if you told me the industry would one day, in lockstep, decide that shipping more tech-debt is awesome. If the unstated bet - that AI development will outpace the rate at which it generates cruft - doesn't pay off, then there will be hell to pay.

  • Don't worry. This will create the demand for even more powerful models that are able to untangle the mess created by previous models.

    Once we realize the kind of mess _those_ models created, well, we'll need even more capable models.

    It's a variation on the theme of Kernighan's insight: the more "clever" you are while coding, the harder it will be to debug.

    EDIT: Simplicity is a way out, but it's hard even under normal circumstances. Now, with this kind of pressure to ship fast because the colleague with the AI chimp can outperform you, aiming at simplicity will require some widespread understanding.

  • As someone who's been commissioned many times before to work on or salvage "rescue projects" with huge amounts of tech debt, I welcome that day. Still not there yet though I am starting to feel the vibes shifting.

    This isn't anything new of course. Previously it was with projects built by looking for the cheapest bidder and letting them loose on an ill-defined problem. And you can just imagine what kind of code that produced. Except the scale is much larger.

    My favorite example of this was a project that simply stopped working due to the number of bugs generated by layers upon layers of bad code that was never addressed. That took around 2 years of work to undo: roughly 6 months to un-break all the functionality, 6 more months to clean up the core, and then we could start building on top.

    • Are you not worried that the sibling comment is right and the solution to this will be "more AI" in the future? So instead of hiring a team of human experts to clean up, management might just dump more money into some specialized AI refactoring platform or hire a single AI coordinator... Or maybe they skip straight to rebuilding with AI, because AI is good at greenfield. Then they only need a specialized migration AI to automate the regular switchovers.

      I used to be unconcerned, but I admit to being a little frightened of the future now.

      2 replies →

  • The industry decided that decades ago. We may like to talk about quality and forethought, but when you actually go to work, you quickly discover it doesn't matter. Small companies tell you "we gotta go fast"; large companies demand clear OKRs and a focus on actually delivering impact - either way, no one cares about tech debt, because they see it as an unavoidable fact of life. Even more so now, as ZIRP went away and no one can afford to pay devs to polish the turd ad infinitum. The mantra is: ship it and do the next thing, clean up the old thing if it ever becomes a problem.

    And guess what, I'm finally convinced they're right.

    Consider: it's been that way for decades. We may tell ourselves good developers write quality code given the chance, but the truth is, the median programmer is a junior with <5 years of experience, and they cannot write quality code to save their life. That's purely a consequence of the rapid growth of the software industry itself. ~All production code in the past few decades was written by juniors, and it continues to be so today; those who advance to senior level end up mostly tutoring new juniors instead of coding.

    Or, all that put another way: tech debt is not wrong. It's a tool, a trade-off. It's perfectly fine to be loaded with it, if taking it on lets you move forward and earn enough to afford the installments when they're due. Like with housing: you're better off buying with a lump-sum payment, or out of savings in treasury bonds, but few have that money on hand and life is finite, so people just get a mortgage and move on.

    --

    Edited to add: There's a silver lining, though. LLMs make tech debt legible and quantifiable.

    LLMs are affected by tech debt even more than human devs are, because (currently) they're dumber, they have less cognitive capability around abstractions and generalizations[0]. They make up for it by working much faster - which is a curse in terms of amplifying tech debt, but also a blessing, because you can literally see them slowing down.

    Developer productivity is hard to measure in large part because the process is invisible (happens in people's heads and notes), and cause-and-effect chains play out over weeks or months. LLM agents compress that to hours to days, and the process itself is laid bare in the chat transcript, easy to inspect and analyze.

    The way I see it, LLMs will finally allow us to turn software development at tactical level from art into an engineering process. Though it might be too late for it to be of any use to human devs.

    --

    [0] - At least the out-of-distribution ones - quirks unique to particular codebase and people behind it.

    • > the median programmer is a junior with <5 years of experience, and they cannot write quality code to save their life

      It's worse, look up perpetual intermediates. Most people in any given field aren't good at what they're doing. They're at best mediocre.

      1 reply →

  • > unstated bet

    (except where it's been stated, championed, enforced, and delivered as an ultimatum in no uncertain terms by every executive in the tech industry)

    • I've yet to encounter an AI bull who admits to the LLM tendency towards creating tech debt - outside of footnotes stating it can be fixed by better prompting (with no examples), or solved by whatever tool they are selling.

  • > I wouldn't have believed it a few years ago if you told me the industry would one day, in lockstep, decide that shipping more tech-debt is awesome.

    It's not debt if you never have to pay it back. If a model can regenerate a whole reliable codebase in minutes from a spec, then your assessment of "tech debt" in that output becomes meaningless.

My disillusionment comes from the feeling I am just cosplaying my job. There is nothing to distinguish one cosplayer from another. I am just doordashing software, at this point, and I'm not in control.

  • I don't get this at all. I'm using LLMs all day and I'm constantly having to make smart architectural choices that other less experienced devs won't be making. Are you just prompting and going with whatever the initial output is, letting the LLM make decisions? Every moderately sized task should start with a plan; I can spend hours planning, going off and thinking, coming back to the plan and adding/changing things, etc. Sometimes it will be days before I tell the LLM to "go". I'm also constantly optimising the context available to the LLM, and making more specific skills to improve results. It's very clear to me that knowledge and effort are still crucial to good long-term output… Not everyone will get the same results; in fact everyone is NOT getting the same results - you can see this by reading the wildly different feedback on HN. To some, LLMs are a force multiplier, while others claim they can't get a single piece of decent output…

    I think the way you’re using these tools that makes you feel this way is a choice. You’re choosing to not be in control and do as little as possible.

    • Exactly.

      Once you start using it intelligently, the results can be really satisfying and helpful. People complaining about 1000 lines of code being generated? Ask it to generate functions one at a time and make small implementations. People complaining about having to run a linter? Ask it to automatically run it after each code execution. People complaining about losing track? Have it log every modification in a file.

      I think you get my point. You need to treat it as a super powerful tool that can do so many things that you have to guide it if you want to have a result that conforms to what you have in mind.

    • One challenge is, are those decisions making tangible differences?

      We won't know until the code being produced - especially the greenfield stuff - hits some kind of maturity; 5+ years at least?

      5 replies →

  • 100% there... it's getting to the point where a project manager reports a bug AND pastes a response from Claude (he ran Claude against our codebase) on how to fix it. I'm just copying what Claude said and making sure the thing compiles (.NET). What makes me sleep at night, for now, is the fact that Claude isn't supporting 9pm deployments and AWS infra issues. It's already writing the code, but not supporting it yet...

  • What kind of software are you writing? Are you just a "code monkey" implementing perfectly described Jira tickets (no offense meant)? I cannot imagine feeling this way with what I'm working on; writing code is just a small part of it. Most of the time is spent trying to figure out how to integrate the various (undocumented and actively evolving) external services involved in a coherent, maintainable and resilient way. LLMs absolutely cannot figure this out themselves; I have to figure it out myself and then write it all into their context, and even then they mostly come up with sub-par, unmaintainable solutions if I wasn't being precise enough.

    They are amazing for side projects but not for serious code with real-world impact, where most of the context lives in multiple people's heads.

    • No, I am not a code monkey. I have an odd role working directly for an exec in a highly regulated industry, managing their tech pursuits/projects. The work can range from exciting to boring depending on the business cycle. Currently it is quite boring, so I've leaned into using AI a bit more just to see how I like it. I don't think that I do.

I find the atrophy and the zoning out or context switching problematic, because the agent spends a few seconds/minutes "thinking" and then BAM! I have 500 lines of all sorts of buggy and problematic code to review, and a sycophantic, not-mature-enough entity to get to correct it.

At some point, I find myself needing to disconnect out of overwhelm and frustration. Faster responses aren't necessarily better. I want more observability in the development process so that I can be a party to it. I have really felt that I need to orchestrate multiple agents working in tandem, playing a sort of bad cop and good cop, with maybe a third trying to moderate that discussion and a fourth to effectively incorporate a human in the mix. But that's too much to integrate into my day job.

I’ve actually found the tool that inspires the most worry about brain atrophy to be Copilot. VS Code is full of flashing suggestions all over. A couple days ago, I wanted to write a very quick program, and it was basically impossible to write any of it without Copilot suggesting a whole series of ways to do what it thought I was doing. And it seems that MS wants this: the obvious control to turn it off is actually just "snooze."

I found the setting and turned it off for real. Good riddance. I’ll use the hotkey on occasion.

  • Yes! I spent more time trying to figure out how to turn off that garbage Copilot suggestion feature than I did editing this 5-year-old Python program.

    I use Claude daily, no problems with it. But VS Code + Copilot suggestions were garbage!

My experience is the opposite - I haven't used my brain this much in a while. Typing characters was never what developers were valued for anyway. The joy of building is back, too.

  • Same. I feel I need to be way more into the domain and what the user is trying to do than ever before.

  • 100% same. I had brain fog before the LLMs; I got tired of reading new docs over and over again for new languages. I became a manager and lost it all.

    Now I'm back to IC, and 25+ years of experience + LLM = god mode. It's fun again.

> Like there have been multiple times now where I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things. Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier just to quit fighting it and let it do things the way it wanted.

Absolutely. At a certain level of usage, you just have to let it do its thing.

People are going to take issue with that. You absolutely don't have to let it do its thing, but in that case you have to be way more in the loop. Which isn't necessarily a bad thing.

But assuming you want it to basically do everything while you direct it, it becomes pointless to manage certain details. One example from my experience: Claude always wants to use React Router. My personal preference is TanStack Router, so I asked it to use that initially. That never really created any problems, but after like the 3rd time of realizing I forgot to specify it, I also realized that it's totally pointless. React Router works fine and Claude uses it fine - it's pointless to specify otherwise.

> I want to say it's very akin to doom scrolling. Doom tabbing? It's like, yeah I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what the AI will do next is so low, so....

Yeah, exactly. Like we are just waiting for it to finish, and after it finishes, then what? We ask it to do new things again.

Just like when we're doom scrolling: we watch something for a minute, then scroll down and watch something new again.

The whole notion of progress feels completely fake with this. Somehow I guess I was in a bubble of time where I had always ended up using AI in the web browser (ever since ChatGPT 3 came out), and my workflow didn't change because it was free, but it recently changed when some new free services dropped.

"Doom-tabbing" or complete out of the loop AI agentic programming just feels really weird to me sucking the joy & I wouldn't even consider myself a guy particular interested in writing code as I had been using AI to write code for a long time.

I think the problem for me was that I always considered myself a computer tinkerer before a coder. So when AI came for coding, my tinkering skills got a boost (I could make projects of curiosity I couldn't before), but now, with AI agents working in this autonomous-ish way, it has come for my tinkering too, and I do feel replaced - or I just feel like my ability to tinker, my interests, my knowledge and my experience are simply not taken into account if an AI agent will write all the code in a multi-file structure, run the commands, and then deploy it straight to a website.

I mean, my point is that tinkering was an active hobby, and now it's becoming a passive hobby - doom-tinkering? I feel like I caught on to this feeling a bit early, going on nothing but a vibe from my heart, but is it just me who feels this, or?

What could be a name for what I feel?

Another thing I've experienced is scope creep toward the average. Both Claude and ChatGPT keep making recommendations and suggestions that turn the original request into something that resembles other typical features. Sometimes that's a good thing, because it means I've missed something. A lot of the time, especially when I'm just riffing on ideas, it turns into something mundane and ordinary and I'll have lost my earlier train of thought.

A quick example is trying to build a simple expenses app with it. I just want to store a list of transactions. I've already written the types and data model and just need the AI to give me the plumbing. And it will always end up inserting recommendations about double-entry bookkeeping.

  • Yeah, but that's like recommending a web server for your Internet-facing website. If you want to give an example of scope creep, you need a better example than double-entry bookkeeping for an accounting app.

    • You’ve just illustrated exactly the problem. You assumed I was building an accounting app. I’ve experienced the same issue with building features for calculating the brightness of a room, or 3D visualisations of brightness patterns, managing inventory and cataloguing lighting fixtures and so on.

      It’s great for churning out stuff that already exists, but that also means it’ll massage your idea into one of them.

> I worry about the "brain atrophy" part, as I've felt this too. And not just atrophy, but even more so I think it's evolving into "complacency".

Not trusting the ML's output is step one here; that keeps you intellectually involved - but it's still a far cry from solving the majority of problems yourself (instead you only solve the problems the ML did a poor job at).

Step two: I delineate interesting and uninteresting work, and for the interesting parts Claude becomes a pair programmer without keyboard access - I bounce ideas off of it etc., making it an intelligent rubber duck. [Edit to clarify, a caveat is that] I do not bore myself with trivialities such as retrieving a customer from the DB in a REST call (but again, I do verify the output).
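
For a sense of what counts as a triviality here, a minimal sketch (the endpoint, the FastAPI/SQLAlchemy stack, and the model/field names are all hypothetical, purely for illustration):

```python
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy.orm import Session

from app.db import SessionLocal    # hypothetical session factory
from app.models import Customer    # hypothetical ORM model with id/name columns

app = FastAPI()

def get_db():
    # One session per request, always closed afterwards.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/customers/{customer_id}")
def read_customer(customer_id: int, db: Session = Depends(get_db)):
    # Plain lookup by primary key; exactly the kind of code I hand off.
    customer = db.get(Customer, customer_id)
    if customer is None:
        raise HTTPException(status_code=404, detail="Customer not found")
    return {"id": customer.id, "name": customer.name}
```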

  • > I do not bore myself with trivialities such as retrieving a customer from the DB in a REST call

    Genuine question: why isn't your ORM doing that? A lot of the use cases I see for LLMs seem to be more expensive ways to do snippets and frameworks...

I've gone years without coding and when I come back to it, it's like riding a bike! In each iteration of my coding career, I have become a better developer, even after a large gap. Now I can "code" during my gap. Were I ever to hand-code again, I'm sure my skills would be there. They don't atrophy, like your ability to ride a bike doesn't atrophy. Yes you may need to warm back up, but all the connections in your brain are still there.

  • Have you ever learnt a foreign language (say Mongolian, or Danish) and then never spoken it, nor even read anything in it, for over 10 years? It is not like riding a bike; it doesn't just come back like that. You have to actually relearn the language and practice it, and you will suck at it for months. Comprehension comes first (within weeks), but you will be speaking with grammatical errors, mispronunciations, etc. for much longer. You won't have to learn the language from scratch - the second time around is much easier - but you will have to put in the effort. And if you use Google Translate instead of your brain, you won't relearn the language at all. You will simply forget it.

    • Anecdotally, I burned out pretty hard and basically didn't open a text editor for half a year (unemployed too). Eventually I got an itch to write code again, and it didn't really feel like I was any worse. Maybe it wasn't long enough for atrophy, but code doesn't seem to work quite like language, in my experience.

      2 replies →

    • I studied Spanish for years in school, then never really used it. Ten years later, I started studying Japanese. Whenever I got stuck, Spanish would come out. Spanish that I didn't even consciously remember. AFAIK, foreign languages are all stored in the same part of the brain, and once you warm up those neurons, they all get activated.

      Not that it's in any way relevant to programming. I will say that after dropping programming for years, I can still explain a lot of specifics, and when I dive back in, it all floods right back. Personally, I'm convinced that any competent, experienced programmer could take a multi-year break, then come back and be right up to speed with the latest software stack in only slightly longer than the stack transition would have taken without a break.

    • I have not and I'm actually really bad at learning human languages, but know a dozen programming languages. You would think they would be similar, but for some reason it's really easy for me to program in any language and really hard for me to pick up a human language.

      3 replies →

  • You might still have the skillset to write code, but depending on the length of the break, your knowledge of tools, frameworks and patterns would be fairly outdated.

    I used to know a person like that - high up in the company structure, who would claim he was a great engineer, while all the actual engineers made jokes about him and his ancient skills in private conversations.

    • I’d push back on this framing a bit. There's a subtle ageism baked into the assumption that someone who stepped away from day-to-day coding has "ancient skills" worth mocking.

      Yes, specific frameworks and tooling knowledge atrophy without use, and that’s true for anyone at any career stage. A developer who spent three years exclusively in React would be rusty on backend patterns too. But you’re conflating current tool familiarity with engineering ability, and those are different things.

      The fundamentals: system design, debugging methodology, reading and reasoning about unfamiliar code, understanding tradeoffs ... those transfer. Someone with deep experience often ramps up on new stacks faster than you’d expect, precisely because they’ve seen the same patterns repackaged multiple times.

      If the person you’re describing was genuinely overconfident about skills they hadn’t maintained, that’s a fair critique. But "the actual engineers making jokes about his ancient skills" sounds less like a measured assessment and more like the kind of dismissiveness that writes off experienced people before seeing what they can actually do.

      Worth asking: were people laughing because he was genuinely incompetent, or because he didn’t know the hot framework of the moment? Because those are very different things.

      3 replies →

    • I code in Vim, use Linux... all of those tools are pretty constant. New frameworks are easy to pick up. I've been able to become productive with very little downtime after multi-year breaks several times.

I feel like I'm still a couple of steps behind my lead in skill level and am trying to gain more experience, so I do wonder if I'm shooting myself in the foot by relying too much on AI at this stage. The senior engineer I'm trying to learn from can use AI very effectively because he has very good judgement of code quality; I feel like if I use AI too much I might lose out on the chance to improve my own judgement. It's a hard dilemma.

Honestly, this seems very much like the jump from being an individual contributor to being an engineering manager.

The time it happened for me was rather abrupt, with no training in between, and the feeling was eerily similar.

You know _exactly_ what the best solution is, you talk to your reports, but they have minds of their own, as well as egos, and they do things … their own way.

At some point I stopped obsessing with details and was just giving guidance and direction only in the cases where it really mattered, or when asked, but let people make their own mistakes.

Now LLMs don’t really learn on their own or anything, but the feeling of “letting go of small trivial things” is sorta similar. You concentrate on the bigger picture, and if it chose to do an iterative for loop instead of using a functional approach the way you like it … well the tests still pass, don’t they.

  • The only issue is that as an engineering manager you reasonably expect the team to learn new things, improve their skills, and in general grow as engineers. With AI and its context handling, you're working with a team where each member has severe brain damage that affects their ability to form long-term memories. You can rewire their brain to a degree, teaching them new "skills" or giving them new tools, but they still don't actually learn from their mistakes or their experiences.

    • As a manager I would encourage them to use the LLM tools. I would also encourage unit tests, e2e testing, testing coverages, CI pipelines automating the testing, automatic pr reviewing etc...

      It's also about peeking at the big/impactful changes and ignoring the small ones.

      Your job isn't to make sure they don't have "brain damage"; it's to keep them productive and not shipping mistakes.

    • Being optimistic (or pessimistic, heh): if things keep trending this way, the models will keep evolving as well and will probably be quite a bit better in a year than they are now.

You could probably combat this somewhat with a skill that references examples of the code you don't want and the code you do want. Then, each time you tell it to correct the code, you ask it to add that example to the references.

You then tell your agent to always run that skill before moving on. If the examples are pattern-matchable, you can even have the agent write custom lints (if your linter supports extensions), or write a poor man's linter using ast-grep.
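
As a rough illustration of the "poor man's linter" idea - the patterns and messages below are made up, and a real setup would more likely use ast-grep rules or the linter's own plugin system:

```python
# Tiny pattern-based checker the agent can regenerate as new "don't do this" examples accumulate.
import pathlib
import re
import sys

# pattern -> message; illustrative rules only
RULES = {
    r"\bas any\b": "avoid casting to `any`; use a concrete type",
    r"console\.log\(": "remove stray console.log calls before committing",
}

def lint(root: str) -> int:
    failures = 0
    for path in pathlib.Path(root).rglob("*.ts*"):  # .ts and .tsx files
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            for pattern, message in RULES.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {message}")
                    failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if lint(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```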

I usually have a second session running that is mainly there to audit the code and help me add and adjust skills, while I keep the main session on the task of working on the feature. I've found it far easier to stay engaged this way than by context switching between unrelated tasks.

> Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever.

Context management, proper prompting, clear instructions, and proper documentation are still relevant.

"I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things."

I would argue this is OK for front-end. For back-end? Very, very bad - if you can't get usable output, do it by hand.

The solution for brain atrophy I personally arrived at is to use coding agents at work, where, let's be honest, velocity is a top priority and code purity doesn't matter that much. Since we use a stack I'm super familiar with, I can verify the produced code quite quickly and tweak it if needed.

However, for hobby projects where I purposely use tech I'm not very familiar with, I force myself not to use LLMs at all - not even as a chat. Thus, operating the old way - writing code manually, reading documentation, etc. - brings me the joy of learning back and, hopefully, establishes new neural connections.

I think this is where tools like OpenSpec [1] may help. The deterioration in quality is because the context is degrading, often due to incomplete or ambiguous requests from the coder. With a more disciplined way of creating and persisting the specs for the work locally, especially if the agent got involved in creating them too, you'll have a much better chance of keeping the agent focused and aligned.

[1] - https://openspec.dev/

> AI keeps pushing it in a direction I didn't want

The AI definitely has preferences and attention issues, but there are ways to overcome them.

Defining code styles in a design doc, and setting up initial examples in key files goes a long way. Claude seems pretty happy to follow existing patterns under these conditions unless context is strained.

I have pretty good results using a structured workflow that runs a core loop of steps on each change, with a hook that injects instructions to keep attention focused.

My advice: keep it on a tight leash.

In the happy case where I have a good idea of the changes necessary, I will ask it to do small things, step by step, and examine what it does and commit.

In the unhappy case where one is faced with a massive codebase and no idea where to start, I find asking it to just “do the thing” generates slop, but enough for me to use as inspiration for the above.

LLMs are yet another layer between us and the end result. I remain wary of this distance and am super grateful I learned coding the hard way.

Yeah, because the thing is, at the end of the day, laying things out the way LLMs can understand is becoming more important than doing them the "right" way - a more insidious form of the same complacency. And one in which I am absolutely complicit.

LLMs have some terrible patterns. Don't know what to do? Just chuck in a class named Service.

You really have to look out for the crap.