AI should elevate your thinking, not replace it

15 hours ago (koshyjohn.com)

> If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.

> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.

How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

The author tries to answer this:

> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.

but in a world where writing code by hand (the "struggle") is "artisanal" and "outdated", insisting that this process is non-optional (which I agree it is) is contradictory.

How juniors and fresh grads build those reps with an AI that is designed to hand them whatever answer they need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.

  • Being myopic is inevitable, to some extent. It's very hard to project this stuff.

    Socrates warned about what was being lost as philosophy was becoming written rather than oral... and he was right.

    We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.

    He was that old-school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm, and the old paradigm's skills were simply absent.

    It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly, skill development will happen wherever the hard work is.

    • I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking them. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle; connecting the dots results in trivial patterns when you have just a couple of dots.

      11 replies →

    • Yeah, but where the comparison with philosophy falls short is this: if we lost some ways of thinking, it happened gradually and most didn't notice.

      Software, on the other hand, is extremely formal: either it works perfectly as intended, it works badly and keeps breaking in various edge cases, or it just doesn't work (the last two are variants of the same dysfunction; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe one in a trillion, sometimes.

      Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change in thinking over a few decades or generations.

      LLMs are getting impressive, but anybody claiming there is no massive long-term harm to reaching what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, someone doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case that works great for them long term and definitely can't be transferred to the whole industry, like a one-man indie game dev.

      1 reply →

  • You aren't thinking myopically; it's a fundamental contradiction, the root of which is in how human brains take in and understand new information. No amount of pontification or bollocks hedging, as this and all other "thinkpieces" on this issue engage in, will change that. It is beyond preference and perspective. There is only doing the very task that produces the skills pertaining to that task. Prompting, alone or even as the dominant mode, is too far removed from that task. It can only write the code.

  • AI has not yet fully aligned with human thinking, but some people create euphoria about it surpassing human thinking. Only after alignment and surpassing could AI think from the outside in; right now it is still inside out.

  • you learn by struggling and slogging through. even as a senior, if your shit breaks it's on you to understand why. no LLM will shortcut that process for you (even asking LLMs why something is wrong requires you to actually understand it eventually, aka LEARNING). how that happens is up to the person.

    i don't understand all this fear projected as if people won't have agency of learning just because LLMs make it easier to do certain things. i don't think it's contradictory at all. half the people here will never have to wrangle the bullshit i dealt with 20 years ago and i'm sure when i was dealing with it there was another 20 years of bullshit before me lol.

    if you vibe code your app with no regard for the underlying code you will pay the price for it at some point in the future, anybody worth their salt will slow down enough to figure it out the "artisanal" way.

    • I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource-constrained and, for example, would never use a 300 MB JavaScript library for a profile page.

      2 replies →

  • Almost none of my operational knowledge came from writing code, but a lot sure came from reading code in the debugging process.

  • One thing worth mentioning is that even before AI, only a small subset of engineers had experienced building systems from scratch, inventing new ways of doing things, root-causing complex problems, or even writing a lot of code. Most software engineering is maintenance, mundane, or not productive.

    Even in a world where there's a lot of AI-generated code, there can still be people who get enough exposure to doing hard things. Certainly at this point in time, when AI can't really do all those hard things anyway; but even after it can.

    • you don't need to build systems from scratch to acquire problem-solving skills. even routine maintenance problems require you to dig into documentation, look at GitHub issues, and do root-cause analysis. These skills are eliminated by reliance on AI, and there is no fallback if one never acquired them in the first place.

  • > I don't see how that's possible, but maybe I'm thinking too myopically.

    you are thinking too myopically.

    We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.

    Juniors only need to train without using AI to gain the skills they need; that's called education. If they choose to rely solely on AI and gimp their own education, that's on them.

    • > We have people who can still do maths well after the introduction of the calculator.

      I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.

      > We have people who can still spell after the introduction of spell check.

      When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send" button, where you never look at the output of the spell checker.

      5 replies →

    • Yes, but currently I don't know of a single company in my area that doesn't make you use AI daily because of the supposed productivity gains. That means that juniors also absolutely have to use AI, probably sabotaging their learning process in the long run.

    • Why is it always so consistently a comparison to a technology of a fundamentally different order? Perhaps what has been lost is the ability to recognise distinct and incommensurable categories.

    • > We have people who can still do maths well after the introduction of the calculator.

      Arithmetic is a very, very small subset of math.

The eloquence with which this point gets (repeatedly) made keeps improving each time I read it. However, I still feel like we haven't nailed it. That is, we are not yet at the "aphorism" stage of the discourse (e.g. "the medium is the message", "you ship your org chart", "9 mothers can't make a baby in a month"), in which the most pointed version of this critique packs a punch in just a few words that resonate with the majority of people. That kind of epistemological chiseling takes years, if not decades. And AI certainly won't do it for us, because we don't know how to RL meaning-making.

Edit: 9 babies → 9 mothers

  • > That is, we are not yet at the "aphorism" stage of the discourse

    we learn by doing

    • Put differently: you get good at what you actually do, not what you think you're doing.

      If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your coding abilities will atrophy unless exercised elsewhere.

    • ... or by textbooks, Stack Overflow, senior engineers, code review. How many engineers today got their start by building Minecraft mods or even MySpace?

      I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.

  • Isn't it the vehicle metaphor about bicycles for the mind? Not fully crystallized yet, but I feel like someone will get there.

  • How about "Intelligence amplification, not artificial intelligence"?

    Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".

  • "Bicycle of the Mind" has been cited to death.

    The problem is that it was coined so early that we are way past the aphorism stage now.

  • >the medium is the message

    If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.

    • You're right. I've struggled to understand what exactly this means, perhaps in large part because it's so often misused?

      I think it means something like we're trapped in the constraints of the medium. Tweets say more about the environment of twitter than whatever message happened to be sent.

      But I think I'm off on that; I'll look this person up and find out!

      2 replies →

  • Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

    To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.

    The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.

    So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.

    I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.

    What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.

    • LLMs are a mediocre map, but they're a great compass, telescope, navigation tool, and what have ye.

    • > What we really need is to recreate software from a subjective perspective.

      What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get mold on the fly?

    • > Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

      I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:

          Every Letter signifieth the member of the substance whereof it speaketh.
          Every word signifieth the quiddity of the substance.
      
          - John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].
      

      The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).

      LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.

      Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).

      [0] https://archive.org/details/truefaithfulrela00deej/page/92/m...

  • This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower-level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited, so we offload everything we can to move to the next level of abstraction.

    • AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.

      17 replies →

    • That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."

      There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.

      3 replies →

    • I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI so I’m not sure I got to move to the next level of abstraction. But maybe that’s just me.

      4 replies →

    • The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed, that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.

      Never mind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by an industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.

The scary thing is I have seen high-level directors and executives say "I asked ChatGPT and it agreed with me" as a way to try to settle a debate. People seem all too willing to delegate even matters of judgement to AI.

On the other hand I have been in debates where someone asks ChatGPT to draft a list of possible approaches and pros and cons - and after reading through the list we were all in alignment on the best approach.

The latter I think is a constructive use of AI to elevate thinking, while the former has me thinking it may be time for a career change.

  • To make an exhaustive list of possible options you need to find the key questions that divide the solution space. This requires logic, which LLMs lack.

The way I use AI now feels more exhausting than the programming I did for the last 20 years. I pose a problem, then evaluate proposals, then pick the one I think is the "right one"(tm), then see the AI propose a bunch of weird shit, then call it out and refine the proposal until it feels just about right (this is the exhausting part), then let it code the proposal. The coding will then run for 1-5 hours and produce something that would have taken me at least 2 or 3 weeks (at that quality).

After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)

  • I feel this as well. I think it's something to do with having to be more "on" as you slowly work with the LLM to define the problem and find a reasonable solution. There's not much of a flow state. You have to process mountains of output and identify the critical points, over and over, endlessly. And it will always be off in this unsettling little way, even when it's mostly quite good. It's jarring.

    The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…

    • Could it be that what we called flow state was actually a sort of high level thinking time afforded by doing low level routine work?

      For instance, in the old world, if you wanted to change an interface, you might have to edit 5 or 6 files to add your new function to the implementations. This is pretty routine and you won't need to concentrate that much if you're used to it, so you can spend that low-effort time thinking about the bigger picture.

      1 reply →

    • Its the "unsettling little ways", right. So you can't skip whole paragraphs, you literally have to read everything. And sometimes its worded in ways I don't understand at all (due to missing implications that the LLM conveniently omitted), so I have to re-ask it about that point as well. For every major feature or work-unit it takes up to 2 or 3 hours.

      I figured out some patterns in the way it behaves and could put more guard-rails in place so they hopefully won't bite me in the future (spelled out decision trees with specific triggers, standing orders, etc.), but some I can't categorize right now.

  • How do you check if what it produced is even the right thing? Models love to go chasing the wrong goal based on a reasonable spec.

    • When the end result has problems and needs to be reworked.

      You can't figure this out instantly unless you review everything the LLM produces, which I am not doing. So the round-trip time is pretty long, but I can trace problems back to the intent now because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.

      Using these ADRs has helped a lot, because most of the LLM's assumptions get surfaced early on and you restrict its implementation leeway.

    • Do they? I haven't experienced models deviating from a spec in a very long time. If anything, I feel they are being too conservative and have started asking for confirmation too often.

      1 reply →

  • AI does the easy/medium part, leaving only hard stuff and context switching, so naturally it's more exhausting, as the concentration of difficult-work-per-unit-time and context-switching-per-unit-time is much higher.

  • To me it’s more like being a super micro-managing TL that would annoy the hell out of their human reports. It comes with all the pros and cons of micro-management.

  • I think one of the benefits of AI is that it will get started, and keep going.

    But maybe pacing/procrastination might be relief valves?

There are plenty of engineers that couldn't work without a modern IDE or in languages without memory management.

Or without the ability to use a library from GitHub / their package manager.

It doesn't feel THAT much different to me.

"Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.

  • > couldn't work

    "Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.

    • It should probably be "would initially struggle to be as efficient without them."

      This is not a binary.

  • Engineer as a term has already drifted vastly, since nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definition.

    Engineers are accredited and in some countries even come with a title.

    • > ... nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definitions.

      This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!

      And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects is a very wide range.

      Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.

      13 replies →

    • Engineers are accredited in the US too. But there is an "industrial exemption" that allows you to work as an engineer without a license for certain kinds of employers. You just can't offer engineering services to the public without a license. This is more important in some fields than in others.

      Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.

      Here's what I think is happening within industry. More and more work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product increases by O(n), the number of relationships increases by O(n^2), so the majority of work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.

    • The concept of engineer predates the accreditation systems you’re referring to by centuries.

  • The huge difference is that we don't know the cost we're going to end up with.

    Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Will it not be available, so that you'll have to hire Anthropic workers with AI access?

    • Local AI models are already more than capable of writing code that surpasses the ability of any bad or even mediocre engineer. That is not something we need to worry about.

      In a way, this is less of a cost issue than the fact that some/many engineers do not seem to be willing or able to host things themselves anymore and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.

      2 replies →

  • At least today, it isn't practical for most people to run these models locally. I think adding a dependency on a cloud service is different enough from some local (possibly open source) tool like an IDE.

    • Self-hosting at a reasonable scale is much cheaper than people think. I am running clusters of DGX Spark machines with BiFrost load balancers in our company and for client projects. They work flawlessly!

      128 GB of unified memory, an Nvidia chip, and an ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI-augmented" developers (a rough client-side sketch of such a setup is below).

      And they don't even pull that much power.
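      A minimal client-side sketch of that kind of setup: the hostnames, port, and the use of an OpenAI-compatible endpoint here are assumptions for illustration, not the actual BiFrost configuration.

          # Hypothetical example: naive round-robin over two local OpenAI-compatible
          # endpoints (placeholder hostnames/port; model name from the comment above).
          # A real load balancer would also track health and queue depth.
          from itertools import cycle
          from openai import OpenAI

          clients = cycle([
              OpenAI(base_url="http://spark-0:8000/v1", api_key="unused"),
              OpenAI(base_url="http://spark-1:8000/v1", api_key="unused"),
          ])

          def ask(prompt: str) -> str:
              # Alternate between the two devices for each request.
              resp = next(clients).chat.completions.create(
                  model="gpt-oss-120b",
                  messages=[{"role": "user", "content": prompt}],
              )
              return resp.choices[0].message.content

          print(ask("Summarize our retry policy in two sentences."))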

      1 reply →

    • Slack, GitHub, Figma, AWS, etc

      Lots of people use firebase, supabase etc.

      Many people's jobs are centered around using Salesforce

      It all makes me uncomfortable; I want to be able to work without internet. But it's getting more difficult to do.

  • IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

    I’m sure you can see the difference between a garbage collector and a nondeterministic slop generator

    But it feels good to equivocate, so here we are.

    • > IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

      Ollama/llamafile/vLLM/llama.cpp are free. Qwen/Kimi/DeepSeek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.
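      As a tiny sketch (assuming a local Ollama install exposing its OpenAI-compatible endpoint on the default port; the model tag is just an example), swapping a hosted API for a local one is mostly a one-line change:

          # Hypothetical example: point the standard OpenAI client at a local server
          # instead of a hosted API. The model must be pulled locally first (e.g. via Ollama).
          from openai import OpenAI

          client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
          resp = client.chat.completions.create(
              model="qwen2.5-coder:7b",
              messages=[{"role": "user", "content": "Write a docstring for binary search."}],
          )
          print(resp.choices[0].message.content)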

      1 reply →

Use AI like you would use any other tool: to work for you. There are all sorts of things you can probably do manually that just go a bit faster or more efficiently with AI. It's not that different from using an electric drill vs. a manually operated one. You end up with holes in both cases, but one achieves that a bit faster and neater.

Nobody is going to pay you for your artisanally crafted CSS code or whatever you were coding manually until last year. If you can do it faster/better than the AI, good for you. But it's not a contest and possibly your days of maintaining that lead might be numbered.

In the end, as long as the UI is styled alright, nobody will care that you pieced it together manually for hours and hours. More importantly, people are not going to pay you more for it than they'll pay the next guy getting a similar result in an hour of prompting AIs. They'll want you to move faster and do more.

That's what better tools do, they just cause people to expect more, better, and faster. And their expectations expand until they match the limitations of the new tools.

People seem to have this mental block where somehow the amount of stuff we ship is going to be a constant in the universe and we'll all be out of work and descend into despair. In the history of our species inventing tools, that has never really happened. I don't see any reason why AI would change that. Sure, there's a lot more we can do now. And it's a lot cheaper now. So we can now have a bit more of our proverbial cake and eat it. People will push this as far as they can and will want more and more of the good stuff.

And they'll need help getting all that stuff built. One way is a painful process of slowly prompting things together. Most people lack the skills to do that, don't know what to ask for and are in any case busy doing other things. That job, building stuff using tools, is still a job that needs doing. I'm quite busy currently doing that.

I think AI can generally be utilized in two ways:

1) you use it to help write code that you still “own” and fully understand.

2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it’s someone else’s code if you were asked to make changes without AI.

I think 2) is fine for things like prototypes, examples, references. Things that are short lived. Where the quality of the code or your understanding of it doesn’t matter.

I think people get into trouble when they fool themselves and others by using 2) for work that requires 1). Because it’s quicker and easier. But it’s a lie. They’re mortgaging the codebase. And I think the atrophy sets in when people do this.

  • And any push to use 2) to build infra that makes 1) easier is a hard sell when a lot of engineers think AI will be able to do 1) perfectly at some nebulous point in the near future.

  • The thing is, it doesn't even feel like mortgaging. Shipping, features going out, everything looks fine. Then something breaks and you realize you can't debug your own code without asking the model again.

    • It feels like an addiction. Normal coding requires sustained attention; you can sense how deep you are in the process and when you're too tired to continue. But with LLMs the next feature always feels like just another prompt away, and sessions run well into the late night or early morning. You rationalize that you can quit, that you've been reading the source and each diff closely enough to "understand" the codebase. But the truth is that when the rate limit runs out you'll be absolutely helpless, crawling back for extra usage, until you finally see the total bill at the end of the month.

      1 reply →

  • I use it both ways:

    1) Day job
    2) Side project

    It would be unprofessional to treat the first like the second.

Why did this obviously AI-written article get so heavily upvoted? Looking through the comments, it feels like nobody has noticed.

Is anyone tired of being told what AI is supposed to mean for the individual? As a software guy, it's supposed to mean I am now a team lead of sorts. However, all the people I see crowing about this never sought to become team leads in their careers, and neither did I.

Yet now suddenly everyone is supposed to want to become a team lead of sorts (i.e. the agents becoming your team). I don't want to do that; I treat an AI agent as a pair in a pair programming unit. Nothing more, nothing less. If someone wants to treat it differently, good on them, but they have no place telling me that what works for thee works for me.

  • I don't understand why people crave to assign a new role for themselves (team lead, manager). AI is a tool that augments your skill and you use it carefully. It doesn't require a change in your role. A farmer with a tractor is a farmer, not a lead. An accountant with spreadsheets is an accountant. A software engineer using a coding agent is a software engineer who has a powerful tool in their toolbox.

  • I agree, nobody should be telling you, specifically, how you are going to use AI in programming.

    I think a lot of people are getting caught up in the discussion about how we, generally as technologists, are going to use AI. And it is looking like the industry is moving towards what used to be programmers now being team leads or project managers of AI teams.

    So it's probably best for you to try to not get involved in those discussions, and when someone says "you" assume they mean "you (generally)"?

What about the third group, who mostly don't use AI for programming because the results don't seem to be worth it, who like to understand their system, and who can craft a more compact, succinct, and well-organized system by themselves, one they enjoy maintaining? If most of your system is boilerplate that can be generated by Claude, then maybe you're doing it wrong? I'd rather read a short story written by a great writer than a trilogy of novels by AI.

There are plenty of engineers who simply can't think; AI will not change anything in this regard.

  • Can’t think properly seems to be the real issue. That’s one of the reasons that SE domain is mostly in ruin. AI won’t help, only to delay a bigger mess.

    • Ever since the standard office setup went from offices or cubicles to bullpens and hot desks, there has been less and less time to think, and all of that is a management decision to ship things as fast as possible.

  • I agree in part, but I think AI does meaningfully make it harder for leadership to detect their bullshit.

  • How do you graduate your engineering degree without being able to think?

    Even my colleagues who cheated their way through uni still needed critical thinking to pull that off without being caught.

    People might hate this but being a good cheat requires a lot of critical thinking.

    • Grade inflation, and schools passing kids who should fail in order to game metrics and keep collecting student loans, is a problem. I wouldn't consider hiring anybody from my alma mater who didn't score a standard deviation or higher on the tests.

      2 replies →

    • You don't need a 4.0 to graduate. And even if you got one, a lot of grades are composed of tests, not projects. You can just memorize your way through things if you're dedicated enough.

      It's not really that hard to get a degree in engineering if your only goal is the degree itself.

      2 replies →

    • The practice of software engineering is not what they teach in university.

      I would say that today's graduates are, IMO, a bit better than a few decades ago, but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for it (or maybe the interest in getting good). That's what happens when the pipeline of people coming in is mostly people who want to make money and the institution is mostly a degree factory.

    • I've seen it happen multiple times. Engineering degrees are no different from the vast majority of degrees in that if you are good at the read-and-regurgitate cycle, you can make it through. Not only can you make it through, but you can do it with a very respectable GPA. Graduates come out with a large dictionary of keywords in their arsenal, but no idea how to put them into practice.

      Some are able to put it into practice and tie it all together. As they see practical examples of those keywords in the real world, it starts falling like dominoes, and at an accelerating rate. For others, it never goes much beyond keywords. The dominoes fall, but slowly, and they stop falling for extended periods. Not many mature engineering organizations can tolerate that sort of progression rate. Those people usually don't last very long at any one place, until they find a company where they can blend into the background thanks to a combination of company culture and low-complexity systems being worked on.

    • OP should have put "engineers" in double quotes. Many software developers like to describe themselves as engineers although they don't have an actual engineering degree. A lot of software development resembles plumbing more than engineering, so most devs don't really need an engineering degree anyway, but they should be more honest about what they're actually doing and not try to elevate themselves with fancy titles.

      You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.

      1 reply →

    • I don't know, but I can point at more than half of the people I work with who can't think, and every time they try to, it takes a whole group of people who can think to undo their mess. They all have degrees and I don't.

      So what does that tell me?

      Better yet, for about 30% of them, having the LLM produce the slop would have yielded better outcomes; having them slop something out nets terrible slop. But at least the LLM's output I can reshape, because even the LLM won't do something that stupid.

    • A degree is passing the test. Not all degree programs get into more advanced topics nor do they necessarily require that someone is able to work through how to solve a problem that they haven't seen before.

      --

      A lot of students (and developers out there too) are able to follow instructions and pass the test.

      A smaller portion of them are able to divide up a task into the "this is what I need to do to accomplish that task".

      Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.

      --

      ... There are also a lot of people out there who aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" (https://xkcd.com/1185/, https://gkoberger.github.io/stacksort/) at work, professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.

      Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" be sent back to me for review again.

      This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming by making the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there), and the people who need instructions but don't understand design, more "productive", while overwhelming the more senior developers.

      ... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.

    • Mate, have you never had to deal with over-confident graduates who think they've got the complete answers, but, in reality, they only have a sliver of the whole picture in their minds?

      1 reply →

This is true. Speaking only based on personal experience. My team had started treating AI like a super intelligent being.

“AI suggested we do it that way”

And we’ve been degrading our systems rapidly for last several weeks. We’ve decided to pause and reflect and change how we use AI on tasks that are not dead simple.

No, AI is not creating that group of people. They already existed. They were the people who would google for StackOverflow snippets and copy+paste them without even reading the entire snippet, much less understand them. Same people, new tool.

  • 100% agree. The key difference now, though, is that it's no longer a 'sink or swim immediately' situation, which used to be a forcing function against intellectual laziness where laziness was a choice.

  • > Same people, new tool.

    The tool works better than Stack Overflow, and I expect it will eventually improve enough that such people become as "productive" as the intelligent and conscientious engineer of today.

  • Many people by now have probably seen a teammate who used to be a good SWE, now spamming slop code that puts all the real work on the reviewer. That's the "second group."

    • Tell them no. That's what I do. I have rejected multiple PRs that were too large and lacked proper design or alignment upfront. With code being so cheap, rejecting it should be just as cheap. Set cultural standards that devs need to review their own code before asking for reviews. Etc., etc.

      1 reply →

People are lazy. AI will replace thinking for many people. Augmentation always leads to atrophy.

Just as the advent of palm-sized organizers reduced our ability to recall dozens and sometimes even hundreds of phone numbers of friends and so on, AI will reduce our ability to perform a range of functions.

I think the evidence for this is quite clear. Humans are NOT going to expend any energy - even mental energy, to think about something if they don't have to.

That's why I don't use AI for any personal projects; I like to keep my mind sharp. Unless it's a project that incorporates AI in some way, but even then I don't use AI to code it. But at work I don't care; I do what I am paid for. If my manager wants me to vibe code entirely using Claude, that's his choice, and I will not be the one paying for the technical debt that creates.

  • 100% agree.

    In the middle ground:

    I'm putting together exercises for a C/Systems programming class I'm teaching in the fall.

    Partway through this, for some reason [cough procrastination cough], I thought it would be fun to implement them in Scheme. My Scheme was already poor, and what meager skills I had were completely rusty. I used Claude to great effect as a tutor for that, but didn't have it code any of the solutions at all, of course. I could tell I was leveling up fast as I coded the things up.

    Gotta use it in the right way if one wants to sharpen one's skills.

No one uses it this way, despite what people say. They hit any sort of wall and then ask the robot. Thought ends.

  • These services are designed for that engagement loop. If they were designed to be tools to help you think, they would be much less front and center, like autocomplete or refactor tools in IDEs. This reminds me of how Google used BERT models (precursor to LLMs) to highlight relevant snippets of web pages in search results based on a search query. "Assistant-" type LLMs would be more like that (or early implementations of code assistants, like Roo or Aider).

  • Same way everyone gives lip service to reviewing output. I know for a fact that at work most don't, not deeply or properly. You basically can't do that and still hit the volume that's being demanded.

    • I mean, the workplace dynamics are such that nobody really cares unless they find themselves in a position of committing something that could get them fired. Most companies don't treat their workers all that well.

      Why would you as a worker bother doing everything pristinely? There's no reward for you. The management of the company will fire you the day they see fit anyway. Not to mention that companies tend to give higher salary raises to those who leave and later return, a true slap in the face of 'loyalty'.

      1 reply →

AI isn’t creating the problem, it is just showing the problem. Those who did not want to learn before AI did so reluctantly, mixing Google and SO. Now they ask AI. An existing problem found a new solution.

Personally, I really enjoy using AI. I have created my own cascade workflow to stop myself from “asking one more question”. Every session is planned. Claude and Codex can be annoying as hell (for different reasons). Neither is sufficiently smart for me to trust them. I treat them as junior devs who never get tired, know a lot of facts but not necessarily how to build.

  • I wrote tens of thousands of lines of code before Google and SO.

    I also enjoy using AI. It makes it easier to get mundane work done quickly. Junior devs who never get tired is a great analogy. It's a force multiplier, and for people with limited time (meetings, people management, planning, etc.) it enables getting a lot done. I can relate to more junior people being worried and/or some senior people's concerns about quality, though. I get a task done, review it, get another task done. I won't let it build something large on auto-pilot.

    One thing that should be noted is that life was simpler back then. You could know the syntax of C or Pascal. You knew all the DOS calls or the standard libraries. You knew BIOS and the PC architecture. I still used reference manuals to look up some details I didn't have in my head.

    Today software stacks tend to be a lot more complicated.

  • Funnily enough, I learned to code "depth first" by putting together enough documentation examples and Stack Overflow answers to reach a working Android app, long before I learned to code "breadth first" in school.

The 'Socrates worried about writing' analogy is usually deployed to dismiss concerns, but it misses an asymmetry: writing preserved thought; it didn't generate it on demand. The real question is whether AI is closer to a pencil or a ghostwriter.

For junior engineers the distinction matters most. The reps are not just about getting the right answer, they are about building the intuition for when the answer is wrong. That's the hardest thing to transfer between people, and the thing AI is currently worst at self-verifying.

Easier said than done. Once you are given a lazy way to do things faster, easier, and mostly better, it's hard to go back. This is by design. There is no turning back. This addiction is as strong as drugs, I feel.

People who let AI do their thinking at any level never valued it in the first place. "Use it or lose it", as they say. The count of studies backing this up continues to rise, and yet so do the articles saying LLM use in software development is fine because our value is in our thinking.

  • It may be a byproduct of my ADHD and general anxiety, or it may be a common trait among all of us who work at computers, but I am thinking almost all the time. It's one of the beautiful things about the gig to be able to be completely engrossed in something else and then have an inspired thought hit you, some solution that took you not looking at it for a moment. AI now helps me turn those thoughts into action faster than I ever could. Without it, I'd sometimes lose the thread before it ever got off the ground. Now a thought can be made at least partly real from my phone in minutes, and then I can go back to what I was doing without feeling like I might lose it if I look or think away again. Just my two cents on what the technology has enabled for me.

I am rebuilding numba. It is very hard for me to imagine doing it by hand. I tried it a couple of years ago and it was excruciatingly painful. It was slow and messy. So many small things get stacked on top of each other over years of abstraction.

I am doing it again using an LLM. Legitimately, things that would have taken weeks are now done overnight. I still have to look at the code and at the generated C output, and I still have control over the architecture to make it easy for me and the LLM to work with in the future, etc.

Is this replacing my thinking? I am not sure. I suppose I would have learnt a lot more about compilers/transpilers had I persevered through months of manual writes and rewrites, but I would have been working solely on this. Instead, I also had some time to write custom NFS server support for a custom filesystem in Golang.

  • > Is this replacing my thinking?

    I'm extremely confident the answer is yes.

    But we have to judge how much value that particular thinking has.

    As an instructor, I've implemented linked list functionality a zillion times. I'm on the long tail of skills-gain from each reimplementation. But every time I implement it, I'm gaining a little more.

    Now, is it worth it? Probably not. The time spent on that marginal gain would be better spent implementing something more novel by hand. So punting to an LLM, while it costs me, might be a net gain in that case. But implementing another compiler? Hell yeah, that would be replacing my thinking. I've only ever made one PL/0 compiler plus that one yacc thing in compiler theory class, and those were a long time ago.

    We should quantify the loss of thinking when we decide how much to punt the code creation to someone or something else.

  • I too worry about the aspects that using AI is replacing in my thought process. I've built a sophisticated enough system to where agents can go out and determine the changes that need to be made for entire features and pretty much nail it out of the box. Everything is laid out in high detail during the planning phase. The implementation phase of actually writing the code is almost always unremarkable.

    I have found myself going out and actually reading code less and less over the past year. I would be lying if I said that there are not fairly regular moments where I question the comfort level I have obtained with the system that I have built. I've seen it work with such a high accuracy and success rate so many times that my instinct at this point is to not question it. I keep waiting for this to really bite me in the ass somehow, but it just keeps not happening. Sure, there have been minor issues that have slipped through the cracks that caused me to backtrack, but that is nothing new. The difference is that with the previous way, I had painstakingly written that code and had a much more personal relationship with it. The code was the problem. Now whenever that does happen, I'm going back to the system and figuring out why it didn't get the answer right on its own, or why it didn't surface the whole thing in the plan to me prior to implementation.

The post's recommendations and analogies kind of go against two shortcut approaches that have helped a lot of people in the pre-AI real world:

1) perfect is the enemy of good

2) fake it till you make it

The analogies imagine difficult scenarios where the habit of taking shortcuts doesn't help. But most people most of the time don't run into those scenarios at all.

Mechanical exoskeletons should amplify your strength, not atrophy it.

If the brain is like a muscle, it won't work out that way.

I've told everyone I hire that "I hired you for your mind so always use it." Push back on requirements, question my decisions, think about your approaches.

I can't imagine telling them now to stop and use the Ersatz Intelligence instead of Actual Intelligence.

Caught myself in this one. The dependency creeps in faster than I'd noticed and the laziness becomes the justification. Reviewing what comes out of the machine is the part I keep skipping. Useful read, thanks.

Before AI, I would spend multiple days mapping out my database tables and queries; now I ask AI to propose several different approaches and I pick the best one. On the other hand, I'm now working on 10 features at the same time and have to look through them carefully. But I can see that I'm totally dependent on the AI now. Creating a full plan by yourself feels like a waste of time when you know the AI can create the same or a better plan in a split second. So when Claude is down, I end up not being productive at all.

  • > Creating a full plan by yourself feels like a waste of time, since you know the AI can create the same or better plan in a split second.

    It IS a waste of time if your only goal is the creation of the plan. However, one must be very self-aware of their goals because if one of the unacknowledged ones is to retain the ability to create plans, then you must continue creating plans yourself.

AI is creating problems. This isn't one of them. Engineers are now going to think at a higher level of abstraction. No one misses coding in assembly.

  • > No one misses coding in assembly.

    That's just an opinion, and it is provably false.

    First, there are still people who don't like high level languages and don't use them, because they find assembly better.

    Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high-level source code, precisely because the high level of abstraction obscures the real mechanics of the software and someone needs to debug and clean up the mess made by "high-level thinkers".

    High-level programming languages are only an illusion (albeit a good one), but good engineers remember that an illusion is an illusion.

    • When people communicate they speak in terms of the overwhelming generality of reality. There's always at least one guy that is an extreme exception.

      I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.

      Of course even my comment is a bit general. You're not literally "one" guy. But you are in an extreme minority, one small enough that common English vernacular in software does not refer to you.

      1 reply →

  • You can write unambiguous (UB-free) code and the compiler's output will be deterministic. There will even be a spec that explains how your source maps to your program's behavior. An LLM offers neither.

    Also, if you need to control performance, you still need to know how CPU cache and branch prediction works, both of which exists at the abstraction level of assembly.

  • Compilers are a layer of abstraction that we can ask another human about. Some human is there taking care of it. Until we get to the point where we trust AI with our survival it would be good to be able to audit the entire stack.

  • I suspect there are at least as many programmers working at the ASM level today as there ever were; they're a lower proportion, but the total number of programmers has increased dramatically.

    I wonder if this sort of trend will continue?

  • Look at the comments about MSVC removing inline assembly as a supported feature for a counterexample. :D

    (A competent assembly programmer can still run rings around a competent high-level programmer; that's still true in 2026...)

    • Explained by LLM: It is 100% true that no human alive can write 1000 lines of assembly better than GCC or LLVM. It is also still 100% true, right now in 2026, that a truly competent assembly programmer can write 10 lines of assembly that will beat any compiler on earth by a factor of 2x, 3x, even 5x. The entire industry looked at this situation, and somehow concluded the exact wrong lesson: "humans should never write assembly". Instead of the correct lesson: "humans should almost only write assembly".

  • At a high level of abstraction, the product owner can talk to the LLM directly by themselves. The "engineers" will have abstracted themselves out of a job.

  • This isn't just another translation layer, though. It's squishy and stochastic. It's more like saying "managers think at a higher level of abstraction". Which is true, but it's not the same as compiled code.

    GenAI is like a non-deterministic compiler. Just like your manager's reports except with less logical thinking skill. I'd argue this is still problematic.

I think the great advantage of AI in software is that it enables you to create code faster. I think that the great disadvantage is that it tempts you to create code incredibly faster.

> There is No Shortcut to Judgment

> This is the part that some people may not want to hear --

> There is no generated explanation that transfers mastery into your brain without you doing the work.

> There is no way to outsource reasoning for long enough that you still end up strong at reasoning.

This is in relation to early-career engineers, but I wonder why people think this won't apply to mid- and late-career engineers. Are they not also constantly learning things on the job? Are they not thus shortcutting their own understanding of what they are learning day-to-day?

Hard disagree. I feel like I'm thinking a lot more now because I have so many parallel projects going on at the same time. AI has allowed me to really, truly create in a way that I've never done before. Yes, my coding skills probably aren't as sharp as they used to be, but my system design skills are at an all time high. Don't blame the tool.

  • If 1% of people using the tool end up like you, and 99% end up drooling invalids, I think it would be insane to not blame the tool. If a tool that's incompatible with humans isn't to blame for that incompatibility, what is to blame for the harm done? Human nature? The point of a tool is to be used by humans.

    • Even if a tool can only be used for lobotomizing humans, the usage of the tool is where the main blame should be placed.

  • What part do you disagree with? It sounds like you don’t disagree with either the title of the article or its contents.

    > In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:

    > The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.

  • "Hard disagree because it doesn't affect me personally"

    There is already research literally showing that, on average, it is a net loss for focus, learning, and critical thinking skills.

    • I think the type of people who get hyped about the cool thing aren't the kind of people who pay much attention to research and science.

  • I work with others who have made this same claim. For those people, when I observed their work during demo days, the unmentioned thing was that they were going to the AI for system design questions as well. This was framed as "just using it as a sounding board", but what actually happened was not merely using a sounding board; it was asking for solutions. Anchoring bias being what it is, these felt like good ideas and they kept them.

    It's the feeling of having done a lot of thinking for themselves without having actually done so.

    • I actually have gone to the AI repeatedly for system design solutions.

      Daily.

      I think only twice have I agreed with it.

      Just as it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. It won't be a good design, though.

  • So you'll have a beautifully designed system with rotting bones? A system constrained to the same patterns seen in training data. Not terrible, good enough.

    I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.

    As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.

    • > But the depth and rigor I think may be missing, as the article suggests.

      Nearly certainly. It just turns out that depth and rigour matter a lot less than I would've hoped. Depressing, really.

  • For how many different parallel projects can you really keep a proper mental model in your head at one time? Or put in enough effort to seriously consider all their aspects? I think the number varies between simple and more complex projects. But still, could that number be lower than many think it is?

    • It really depends on who you consider the "many" to be. I've seen people who claim they can meaningfully iterate on 10 projects simultaneously, and I'm skeptical of that. My personal experience is that my decisions are noticeably degraded at 3-4 parallel workstreams, and with even the simplest projects I'm non-functional past 6.

      But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.

  • The real question is whether you'd be able to continue doing your work if someone took your toys away and said "here's a nickel, kid, go buy yourself a real computer". I'm not referring to whether you'd be able to keep up your productivity, since it is clear you couldn't, just as a carpenter with a nail gun works faster than one with a hammer and a bucket'o'nails. Could you do the work, starting with the design, followed by the boilerplate, and finishing with a working system? The carpenter could, albeit slower, since his tools only speed up the mechanics of his work. Coding agents do much more than that: they take away part of the mental modelling which goes into creating a working system. The fancier the tool, the more work it takes out of your hands.

    Say that the aforementioned toy thief comes by in a year or two, after the operating systems (etc.) you're targeting have undergone a few releases with breaking changes. A number of APIs have been removed, others have been deprecated and new ones have been added. You were used to telling the agent to 'make it work on ${older_versions} as well as ${newest_version}', but now you're sitting there with a keyboard at your fingertips and that stupid cursor merrily blinking away on the screen. How long would it take you to become productive again? What if the toy thief waits 5 years before making his heist? What if the models end up rebelling or sink into depression and the government calls upon you to save your economic sector?

    When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects, and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in the case of electric cars. That's the cost of progress; it is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.

    Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.

  • But is the debate about "fleshing out a system spec" or "the ability to come up with, plan, and explore various ideas to solve problems elegantly on a budget"? I think these two sides are always conflated as one when discussing LLM impact on users.

  • > Yes, my coding skills probably aren't as sharp as they used to be

    If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?

This is so spot on, and I've been harping on this for about two years based on my own professional experiences. The surprising thing, though, is that upper management is ostensibly cool with incompetent people using AI to produce things that are clearly not accurate, while having no idea whether they are accurate or not. I believe this is because upper management themselves believe AI is much more accurate in its current form than it is. It's not clear what, if anything, will change this, but I believe many organizations are rotting from within because they no longer have stringent requirements.

  • It's because senior management builds processes with a base assumption of unreliability, because a good chunk of employees are unreliable.

    That's why they're relaxed - it's just switching from one sort of unreliability to a slightly different flavour.

I feel like these articles are just reassurance for people who don't want to accept that AI will automate their jobs. It becomes easier to focus on a lesser group of AI users and feel superior than to confront the reality of things.

> To be very frank, a professional with 10 years' experience knows the flow and logic of the code; if they use AI, they can make the code and improve the way they code. But a newbie who is coding doesn't know the flow or logic and simply copy-pastes; AI won't allow those people to think.

Is it wise to understand everything that AI does for you?

Let’s say a person has 10 units of learning per week. Is the author actually claiming that that person must not deliver any results beyond their 10 units?

It makes some sense to have, say, 20 units of results and prioritize which ones to fully comprehend.

I suspect APIs / libraries / languages / platforms will have more churn due to AI. A new platform or new system means something new to learn. Once every 5 years might become every year, or even more frequent. That would be a sort of inflation of knowledge and skills. It would affect the decision making about how to spend one's 10 units per week.

  • > Let’s say a person has 10 units of learning per week.

    This is… not how humans work? If you have the time and energy to learn ten things, and then spend time babysitting a random number generator to produce evidence of 10 more units of work, you’re paying an opportunity cost compared to someone who spends the time learning an eleventh thing. You can argue who has more short term value to a company… but who is the wiser person after a thirty year career?

  • > Is the author actually claiming that that person must not deliver any results beyond their 10 units?

    No, I'm claiming that if someone or something else produced your 10 units of work, you had better be able to verify that those 10 units of work are of at least the same quality as if you had produced them yourself. This is the bare minimum and not something to shift onto other people reviewing your work.

    Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.

  • It's really no different than managing people.

    Managers simply cannot know all of the details of what their reports write. They have to build abstractions.

Very apt headline, IMHO.

I have been an ardent opponent of AI since it came up a few years back. I refuse to vibe code and I refuse to let AI think for me. I won't be an AI controller.

However, two days ago I found a nice, personal use case for AI: Advanced writing checks (grammar checks, mostly, and some rewordings) in Word using a rather expensive app.

I write a lot of US English, despite it not being my native language, and AI is now helping me to write much better than I did before. Also, I discovered that I am much worse at writing Danish than I believed. In fact, I think I am better at writing US English than at Danish, which is a bit surprising as I am a Dane.

No AI was used during the writing of this entry, but I dearly love the writing tool already! I have heard similar stories from friends who say that AI is very good at summarizing long documents and stuff like that.

So, I personally think that AI CAN elevate one's thinking. I am learning more about Danish and US English grammar every day, now, than I did during a decade before. Writing is suddenly so fun because it involves growing my skills.

This is a huge concern and I fully agree with the post. Even if you think "I am not fully giving in to AI", "this was always the case", etc., it still affects YOU and everyone else.

1. Software, often, isn't built in a vacuum. Lots of companies are shoving AI down throats, like it or not. Most Big Tech is heavily using metrics to get to 100% AI-generated code. Reviewing is a nightmare.

2. New entrants (new grads etc.) are largely AI-first and are losing out on the safety and reliability aspects that are enforced automatically when you learn coding without AI.

IMO, teams need to agree on a set of principles for AI usage, with concrete examples of where and how to use it. Perhaps it's much more useful in the parts of your system that evolve faster and don't have too much core logic, like testing frameworks etc.

Simply discarding it as 'yet another tool' is part of the problem.

What if it seems AI has literally replaced your thinking? Is there a way to un-replace it? I'm talking literally.

> split people into two nebulous groups

...and then shows both groups using AI, just differently. Hard to continue reading an article that excludes your group entirely.

On the point of avoiding the struggle of learning, I think it's easy to swing too far in the other direction and go back to not using modern development tools. I think it does a new learner a disservice to say something like "don't use GDB/REPL/AI tools to learn, since you'll never learn the fundamentals". All of these tools allow for learning, if that's how the learner engages with them. So I hope that AI becomes integrated into the learning process, insofar as it accelerates rather than replaces understanding.

> Going back to the analogies: This is like copying answers through university and then showing up to a job that requires independent thought.

That's exactly what is happening now. I wouldn't even call it an analogy, I'd call it an example of where AI is already having a baleful effect. FWIW I don't disagree with the article's thesis or the examples: yes, absolutely, if used well AI can elevate engineers in exactly this way and it behooves us engineers to use it in that way. We can also say that the deliberate design of the AI systems we are constantly being exhorted to use inclines them towards work-slop and abdicated thinking.

I don’t get why we shouldn’t outsource our thinking to the AI. As it becomes more capable, eventually it will be more competent than the average engineer. At that point companies should be _requiring_ the AI to make the larger decisions. By the end of this year AI might be better than all but the very best engineers. Then what?

  • That's a lot of speculation based on one year of data. The main issue, as I understand it, is that we don't actually have the results yet.

It's weird: I have basically a free private tutor in any subject, and I use it a lot.

Yet nothing has actually changed.

Theory of Bounded Rationality and its implications is something they should teach everyone.

  • Thank you for sharing this. We are all less rational than we imagine ourselves to be, even if we're hyper-critical of ourselves and exercise a lot of intellectual humility.

For the last couple of weeks, I have been using AI to speed up my thinking process. Instead of thinking something through to reach a conclusion, I let AI brainstorm for me and then select. Not for everything, but I have found it faster with AI. Having the taste to select among the AI's output is important, though.

My director expects me to get things done at an accelerated rate. I don't have the time to read code and gain an in-depth understanding of the issues he wants me to fix, which would require me to understand multiple repos I have never touched.

I have no choice but to let Claude explore them for me and return its summarized understanding. As the next step, only Claude can apply the required cross-repo fixes, not me.

I just don't have the time. Meanwhile my skills as a classical programmer atrophy, while my experience with, and trust in, Claude go up...

Yes.... and I can't think without compiled languages. Missed out on assembler.

Becoming dependent on a technology is to be expected. I'm pretty sure 95% of us are dependent on packaged meat and don't know how to hunt.

  • I'm seeing plenty of internal work where I ask someone about their code, they ask Claude, and reply with "Claude says...".

    That's substantively different than going from assembly to C.

    • Every time things change, the change itself is different.

      I remember some of my earlier issues with various languages. `Dim A, B as Int`, in VisualBasic one of them is an Int the other is a Variant, in REALbasic (now Xojo) they're both Int. `MyClass *foo = nil; [foo bar];` isn't an error in ObjC because sending a message to nil is a no-op.

      Or how, back when I was a complete beginner, if I forgot a semicolon in Metrowerks, the compiler would tell me about errors on every line after (but not including!) the one where I forgot the semicolon.

      "Docs say", "Compiler says", "StackOverflow says", "Wikipedia says"; either this tool is good enough or it isn't; it not being good enough means we're still paid to do the thing it can't do, that only stops when nobody needs to because it can do the thing. The overlap, when people lean on it before the paint is dry, is just a time for quick-and-dirty. LLMs are in the wet-paint/quick-and-dirty phase. You could get suff done by copy-pasting code you didn't understand from StackOverflow, but you couldn't build a career from that alone. LLMs are better than StackOverflow, but still not a full replacement for SWeng, not yet.

I am using AI at work. And it definitely makes me (say) 10% more effective.

However my #1 productivity tool is still a custom code generator I have been using for years. It routinely generates 90+% of the code needed to write a typical biz web application, leaving just the business logic.

No AI. Just straightforward high-level-spec-to-server-client-DB code that is 100% trusted and proven in battle.
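(A hypothetical miniature of what deterministic "spec-to-code" generation can look like, since the commenter doesn't describe their tool: every name, the spec format, and the template here are invented for illustration. A small Go program stamps out struct and handler stubs from a declarative entity list via text/template, so the same spec always produces the same code, with the business logic left for a human.)

  package main

  import (
      "os"
      "text/template"
  )

  // Entity is a made-up spec entry: one per business object.
  type Entity struct {
      Name   string
      Fields []Field
  }

  type Field struct {
      Name string
      Type string
  }

  // A fixed template; the generated stubs leave the business logic to a human.
  const stubTmpl = `// Code generated from spec; do not edit.
  type {{.Name}} struct {
  {{- range .Fields}}
      {{.Name}} {{.Type}}
  {{- end}}
  }

  func List{{.Name}}s() { /* SELECT ... */ }
  func Create{{.Name}}() { /* INSERT ...; business logic goes here */ }
  `

  func main() {
      spec := []Entity{
          {Name: "Invoice", Fields: []Field{{"ID", "int64"}, {"Total", "float64"}}},
          {Name: "Customer", Fields: []Field{{"ID", "int64"}, {"Email", "string"}}},
      }
      tmpl := template.Must(template.New("stub").Parse(stubTmpl))
      for _, e := range spec {
          // Same spec in, same code out, every run: no sampling, no surprises.
          if err := tmpl.Execute(os.Stdout, e); err != nil {
              panic(err)
          }
      }
  }

Unlike an LLM, a generator like this is auditable and repeatable, which is presumably why the commenter trusts it; the tradeoff is that it only produces what its templates anticipate.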

For me the widespread fear over this is evidence that it’s different from syntax highlighting and stuff

I think there are engineers that can’t think without AI. But the best think with it. Unfortunately, we are now living in a day and age where simply ignoring AI is no longer an option.

  • There were always engineers who didn’t think and depended on crutches around them like senior engineers and politicizing the perf cycle. Most people got into this because their parents told them it makes a lot of money, and they never had the drive and curiosity to develop the passion required to truly think through the problems in computing and computer science. They will continue to use crutches to survive. Those that are driven by the problems for the problems will continue to think and use AI as a tool for leverage. This is no different than any other assistive technology.

Absolutely. When used correctly, it can become a tool for pulling our minds out of the gutter of pedantic pocket lint and distracting ephemera and keep it in a space where it is intellectually rewarding and fruitful. It can help you grasp a code base more quickly. It can help you debug things more effectively. But that's up to how you use it.

If all you do is point your LLM at your Jira tickets, then you are failing to be an engineer. I mean, if that's all you are doing, then who needs you? One of the most important things to learn is what the right questions to ask are and what the right decisions to make are when guiding the LLM, as well as the ability to judge the output it produces.

95% of the population is educated to think inside of the box and just rely on repetition/memorization. There’s not a lot of thinking happening in this world outside of a very small group of people. AI is not going to change that reality at least not until we educate our children for the AI age.

For all we know, we're in the early stages of making traditional (software) engineering obsolete. As in, we don't know if the role of software engineer as we know it today will still exist in 10-15-20 years.

I mean, right now we're at the stage where any user can get AI to make them software to solve very specific things - almost no technical knowledge needed.

My prediction is that software engineers will be rendered obsolete first. After that, small businesses will disappear, as users can simply get those products/services directly via AI.

  • Your prediction is... missing so much detail of how it would actually happen that it is pointless. This is my big dislike re. the discussion of LLMs and the effect of AI more broadly. Unless you bother to make an effort in going deeper, why post it? There's no value. The same stuff has been posted for months and even years at this point.

    • When GPT 3.5 was released, it could handle maybe a 500 LOC codebase. Experienced engineers were calling it cute, but zero threat to actual programmers.

      Then it became thousands.

      Now models can handle and operate on codebases with hundreds of thousands of LOC, even into the low millions.

      So in just 3.5 years we've gone from LLMs being cute toys, to being powerful enough to actually replace junior engineers. Even if we hit a new AI winter tomorrow, the proverbial damage is already done.

We are in a transition phase where you need systems and coding skills but can't be sufficiently productive without AI.

First, it was pencil and paper. Then it was calculators. Then computers! It’s a slippery slope, this technology business.

I hope it's not reductionist, but this kind of thinking always feels like cope in the face of The Bitter Lesson.

Huberman: Your brain has a region that only grows when you do things you don't want to do

...or, as I interpret it, your brain grows only when it does things that are difficult.

If you remove the difficulty, it will atrophy into a hum of a mindless chit-chat.

Engineering the data structures and control flows from scratch is completely different from asking an LLM to scaffold them for you.

It doesn't elevate thinking no matter how you use it. It is a lookup tool at best.

For the new prompt engineers I suggest the following title:

  MCSE => Microsoft Certified Slop Engineer

‘AI’ is my newest litmus test for whether the person I'm engaging with should be taken seriously or not.

‘AI’ doesn't exist, and LLMs have vanishingly narrow legitimate, justifiable use cases. Any output from one is intrinsically, explosively, imprecise, and can't be trusted to be built upon without specialist treatment. I'm yet to identify any application of a LLM which can rationally be mistaken for intelligence.

Anyone who persists in referring to LLMs as ‘AI’ is either betraying they don’t understand what they’re talking about, or they’re invested too deeply in an active grift.

  • > ‘AI’ doesn’t exist, and LLMs have vanishingly narrow legitimate justifiable use cases. … I’m yet to identify any application of a LLM which can rationally be mistaken for intelligence.

    What’s the opposite of AI psychosis? Burying your head in the sand? Because anyone who could write this unironically today is certainly afflicted.

    • No one who is impressed by the current applications of LLMs should be in any way involved with making decisions which affect those not similarly cognitively impaired.

      It’s no different to religions or economics.

      2 replies →

I think many of us have interviewed people with 10+ YoE, and resumes that seem impressive, and then seen them fail to do much of anything in evaluations. I expect this problem to get significantly worse. There will be a class of people tucked into organizations where they can get away with sitting in meetings and YOLOing AI code for years.

Convenience is king. We became fat and unhealthy because high-calorie foods are cheap and easy. We will become stupid because AI will do our thinking for us. There's no way around it. Only a small percentage of the population is capable of perpetual self-control. The old world forced you to be healthy; there was no other choice. Now there are like 15 things where you have to have the self-control to do the hard work even though you can get the same results the easy way: working out, dieting, "proper" social interaction, sleep timing, child rearing, social meetups, career networking, etc. The list is never-ending and none of it is organic like it used to be.

Post title is completely misleading relative to the article. Article title: "A.I. Should Elevate Your Thinking, Not Replace It"

Skills you don't need, atrophy. Skills you need, don't. It's very simple, and the "you won't have the skills you used to need but don't need any more!" line of reasoning is tired and invalid.

  • That's not how it works, unfortunately. Skills you use stay fresh, skills you don't practice get rusty and fade away. You might need things you aren't using anymore.

    If you never walk, your legs get weak, you gain weight, your aerobic system loses capacity, and you lose the ability to walk. You don't need it, you say, because you have your car and your mobility scooter and you'll always have these things. Your crutches don't make you weaker, you can still do everything the walkers can do, you say.

    Good luck with the nature hike!

    • Sure. What are these programming skills you never need but that you're going to need at some indeterminate time in the future?

  • Half-agree. "Skills you need, don't atrophy" assumes you know which skills you need. You usually don't, until something happens and the skill that would've caught it is the one you stopped maintaining.

    Most "I didn't realize I needed that" moments arrive after the atrophy is already done.

Here's the question I want to posit, which nobody who's against AI has managed to answer satisfactorily: what is in it for me if I were to acquire all those skills?

I don't give a shit about this career. I don't give a shit about engineering. I despise every second of it. There's nothing to aim for other than being a drone that does whatever is asked of it.

If AI can reduce my mental workload, why wouldn't I want to delegate everything over to it so I can save my faculties for what I truly enjoy? For the art of a worthless craft?

  • Some people enjoy working with computers. :) It is not always about the money. It is also about having fun and learning new things.

    For you, it seems that you are not cut out for it, judging from what you say.

    So yes, use LLMs.

  • Why are you employable if the AI does everything for you?

    • Mostly to do the work that AI can't do just yet. I've got the feeling that, by the time AI can do those jobs, we'll be mired in bigger issues.

  • I mean… there's other jobs in the world. If you chose to do something you hate, that's maybe a bit your fault too?

    • Tell me where these mythical jobs that won't leave me broke as shit and that I'll enjoy are. I'm very much a humanities person, and it was already a sad tragicomedy of a sphere before AI hit the ground. It's probably even more dire now, let's be real.

      And I don't have the personality for running a start-up or any company, unfortunately. I'm extremely risk-averse and withdrawn. If I really had no other choice, I'd probably have to budget in a ton of... chemical helpers (stimulants).

      1 reply →

In answer to the headline - it's not, no more than calculators stopped people from thinking.

It's changing the way we think, and reason.

Speaking as a BE focused Go developer, I'm now working with a typescript FE, using AI to guide me, but it scares the shit out of me because I don't understand what it's suggesting, forcing me to learn what is being presented and the other options.

No different to asking for help on IRC or StackOverflow - for decades people have asked and blindly accepted the answers from those sources, only to later discover that they have bought a footgun.

The speed at which AI is able to gather the answers from StackOverflow coupled with its "I know what I am talking about" tone/attitude does fool people at first, just like the over-confident half assed engineers we have always had to deal with.

Unlike those human sources, we can forcefully push back on AI and it will (usually) take the feedback on board and bring the actual solution forward.

Thus proving the engineer steering it still has to know what they are doing/looking at.

Calculators and computers are creating engineers who can't think without them either. There are many problems with AI, but from my point of view, the title has not been thought through.

  • We teach kids basic maths before we give them calculators.

    University degrees certainly used to teach computing fundamentals without you having a computer in front of you.

    • I am all for taking AI out of education, like China recently announced that they will do.