
Comment by nunez

8 hours ago

> If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.

> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise

How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

The author tries to answer this:

> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.

but in a world where writing code by hand (the "struggle") is "artisanal" and "outdated", calling this process non-optional (which I agree it is) is contradictory.

How juniors and fresh grads build those reps with an AI that is designed to give you whatever answer you need in the moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.

I don't understand why software engineers insist on keeping the craftsmanship aspect of writing code. Compare it to other engineering disciplines, like civil engineering. Engineering was never about going into the field to build things with your own hands. You can become a great civil engineer without personally building bridges that fail. To me it doesn't matter whether the thing I design is built with a crane or with AI. I can design quality-control processes to ensure the thing is built up to standard; I don't have to build it myself to be sure. There is nothing wrong with artisanal code crafting, I appreciate it too, but professionally that's not engineering. It seems AI is just forcing us to clear up the confusion the hard way.

Being myopic is inevitable, to some extent. It's very hard to project this stuff.

Socrates warned about what was being lost as philosophy was becoming written rather than oral... and he was right.

We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.

He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.

It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will happen wherever the hard work is.

  • Socrates was history's first Luddite. He opened Pandora's box. I wish he and Plato would be radically rejected as the garbage they are (basically just a defense of hierarchy and dialectics)

    Quoting my boy Max Stirner, who also fking hated these guys:

    “This war is opened by Socrates, and not until the dying day of the old world does it end in peace.” - The Ego and Its Own, Max Stirner

  • I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle; connecting the dots results in trivial patterns when you have just a couple of dots.

    • It's true for all automation: we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.


    • A lot of parallels between your statement and Socrates' comments on the transition to writing.

      Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.


    • It just becomes more abstracted, but the thinking is still there. And who is to say we aren’t going to keep reading books, delving into hobbies, or watching movies? All those concepts will then be mixed into our brains, and who knows what new things we will think of to extract and build with AI.

    • > I'd say that by purging stuff from the brain we are losing thinking itself

      The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.

    • Fuck thinking!

      If I am free as “rational I,” then the rational in me, or reason, is free; and this freedom of reason, or freedom of the thought, was the ideal of the Christian world from of old. They wanted to make thinking – and, as aforesaid, faith is also thinking, as thinking is faith – free; the thinkers, the believers as well as the rational, were to be free; for the rest freedom was impossible. But the freedom of thinkers is the “freedom of the children of God,” and at the same time the most merciless – hierarchy or dominion of the thought; for I succumb to the thought. If thoughts are free, I am their slave; I have no power over them, and am dominated by them. But I want to have the thought, want to be full of thoughts, but at the same time I want to be thoughtless, and, instead of freedom of thought, I preserve for myself thoughtlessness. If the point is to have myself understood and to make communications, then assuredly I can make use only of human means, which are at my command because I am at the same time man. And really I have thoughts only as man; as I, I am at the same time thoughtless. He who cannot get rid of a thought is so far only man, is a thrall of language, this human institution, this treasury of human thoughts. Language or “the word” tyrannizes hardest over us, because it brings up against us a whole army of fixed ideas. Just observe yourself in the act of reflection, right now, and you will find how you make progress only by becoming thoughtless and speechless every moment. You are not thoughtless and speechless merely in (say) sleep, but even in the deepest reflection; yes, precisely then most so. And only by this thoughtlessness, this unrecognized “freedom of thought” or freedom from the thought, are you your own. Only from it do you arrive at putting language to use as your property.
      If thinking is not my thinking, it is merely a spun-out thought; it is slave work, or the work of a “servant obeying at the word.” For not a thought, but I, am the beginning for my thinking, and therefore I am its goal too, even as its whole course is only a course of my self-enjoyment; for absolute or free thinking, on the other hand, thinking itself is the beginning, and it plagues itself with propounding this beginning as the extremest “abstraction” (such as being). This very abstraction, or this thought, is then spun out further.

      - The Ego and Its Own, Max Stirner

    • I "purge" - or better yet choose not to retain - the data.

      BUT, BUT! I keep the index.

      My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good)

      > Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

      What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.

      Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.

      Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)

    • We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.

      And we don't need words to think; cognitive problem solving and language processing are separate processes [1]

      We will shift the problems we need to think about. Same as always: humanity isn't building stone pyramids anymore. Did we stop thinking? No, we just thought about a different todo list.

      [1] https://www.scientificamerican.com/article/you-dont-need-wor...


  • Yeah, but where the comparison with philosophy falls short is this: if we lost some ways of thinking, it was gradual and most didn't notice.

    Software code, on the other hand, is extremely formal: either it works perfectly as intended, it works crappily and keeps breaking in various edge cases, or it just doesn't work (the last two are variants of the same dysfunctionality; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe one in a trillion.

    Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades or generations.

    LLMs are getting impressive, but anybody claiming there is no massive long-term harm to reaching what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case working great for them long term that definitely can't be transferred to the whole industry, like one-man indie game dev.

    • I would argue it's virtually impossible going forward for a junior engineer to walk that harder path.

      Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.

      I've seen many juniors unable to walk that necessary path before LLMs were a thing.

You aren't thinking myopically; it's a fundamental contradiction, the root of which is in how human brains take in and understand new information. No amount of pontification or bollocks hedging, as this and all other "thinkpieces" on the issue engage in, will change that. It is beyond preference and perspective. There is only doing the very task that produces the skills pertaining to that task. Prompting, alone or even predominantly, is too far removed from that task. The model only writes the code.

> How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.

"Good judgement comes from experience. Experience comes from bad judgement."

AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.

Some do not learn from that experience. They were never going to learn without AI either.

  • > It's not by writing syntax that you get there.

    Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking: if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.

> How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

Well, this is true, but that doesn't mean there isn't any other way to acquire this knowledge. Until now, this way of gaining deeper understanding was simply the most practical one, since you needed to write lots of code when starting out as a software engineer.

But it's just as possible to gain knowledge about useful abstractions and clean code by using AI to do the work. You'll find out after a while which codebases get you stuck and which abstractions give your AI leverage because it needs fewer tokens to read and extend your codebase.

This has happened in other industries before; drafting, for example, when CAD arrived. Entry level wasn't "can draw, willing to learn" anymore but demanded high domain understanding. So the pathway became compressed: learning through study and field exposure.

Studying senior drafters' "red lines": what they changed in the initial drawing and why, RFI responses, etc. Reverse engineering good work. Failed design studies.

SWE equivalents: PRs, code review, studying high-quality codebases (guess what: LLMs are amazing at helping here), pair programming (learning why what the LLM did was wrong, how to improve it, etc.), customer support, debugging prod incidents, studying post-mortems, etc.

We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end. We give them specific tasks and studies that develop their domain understanding and taste, actively support and mentor them, and expect them to drive some LLMs on the side to solve simple issues that still need human eyes on it.

  • > We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end.

    Is that generally the case though? I'm about two years into my first job in the industry and that's exactly my experience, and certainly frustrating...

Almost none of my operational knowledge came from writing code, but a lot sure came from reading code in the debugging process.

you learn by struggling and slogging through; even as a senior, if your shit breaks it's on you to understand why. no LLM will shortcut that process for you (even asking an LLM why something is wrong requires you to actually understand it eventually, aka LEARNING). how that happens is up to the person.

i don't understand all this fear, projected as if people won't have agency over their learning just because LLMs make it easier to do certain things. i don't think it's contradictory at all. half the people here will never have to wrangle the bullshit i dealt with 20 years ago, and i'm sure when i was dealing with it there was another 20 years of bullshit before me lol.

if you vibe code your app with no regard for the underlying code you will pay the price for it at some point in the future, anybody worth their salt will slow down enough to figure it out the "artisanal" way.

  • I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource-constrained and, for example, would never use a 300 MB JavaScript library for a profile page.

      John Carmack did praise resource constraints, recalling his early days working as a lone contractor and as an employee of Softdisk, when he and the team had to push out games on a very tight schedule.

      I think this extends to other parts of life, too. I still remember fondly playing a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends. But when I went to university and had access to pretty much every game freely on the intranet, I rarely did that anymore. That's why I always think an abundance of X may not be the best option for me. That probably includes money, too.

    • As a percentage of good to mediocre, maybe. Engineers of 40 years ago were probably better than engineers of 20 years ago: there were fewer of them, and they had more constraints to deal with. Democratization of technology makes it easier for more people to use; that applies to programming as much as to just using a computer.

    • I never buy these examples. Being a good engineer is more than purely resource optimization. I can think of many times over my career where resource optimization mattered but it’s not always a valuable undertaking.

    • Twenty years ago we were complaining about Steam being bloated and unnecessary; we were six months off Vista being a bloated mess and the Office Ribbon debacle being in full swing. PC games were often half-baked console ports with atrocious performance, filled with game-breaking bugs. Software was super rigid: there was no real cross-platform support. We were just heading into the Core 2 Duo era, and it was a mess.

      Engineers sucked then as much as they suck now

  • Understanding something and learning something are not the same things.

    • nobody said they were; they are related. if you don't understand why something is behaving a certain way, you need to learn.

One thing worth mentioning is that even before AI, only a small subset of engineers had experienced building systems from scratch, inventing new ways of doing things, root-causing complex problems, or even writing a lot of code. Most software engineering is maintenance, mundane, or not productive.

Even in a world with a lot of AI-generated code, there can still be people who get enough exposure to doing hard things. Definitely at this point in time, when AI can't really do all those hard things anyway, but even after it can.

  • You don't need to build systems from scratch to acquire problem-solving skills. Even routine maintenance problems require you to dig into documentation, look at GitHub issues, and do root-cause analysis. These skills atrophy with reliance on AI, and there is no fallback if one never acquired them in the first place.

> I don't see how that's possible, but maybe I'm thinking too myopically.

you are thinking too myopically.

We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.

Juniors only need to train without using AI to gain the skills needed - that's called education. If they choose to rely solely on AI and gimp their own education, that's on them.

  • > We have people who can still do maths well after the introduction of the calculator.

    I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.

    > We have people who can still spell after the introduction of spell check.

    When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send"-button, where you never look at the output of the spell checker.

    • I would also argue that most school systems forbid the use of a calculator for the first couple of years (at least that's how it was in Germany a few decades ago). The same with writing by hand. You can spell-check by looking the word up and then manually correcting it.

      Both require manual "labor" which leads to learning.


    • No. These tools are very good at creating an illusion of learning, without any learning. When you watch them do stuff, you think: yeah, I got this. Once they are gone, you realize all your supposed skill is gone too. Getting a skill requires deliberate practice. You can use AI for that, but just using AI is not that.


  • Why is it always so consistently a comparison to a technology of a fundamentally different order? Perhaps what has been lost is the ability to recognise distinct and incommensurable categories.

  • Yes, but currently I don't know of a single company in my area that doesn't make you use AI daily because of the supposedly increased productivity. That means juniors also absolutely have to use AI, probably sabotaging their learning process in the long run.

  • > We have people who can still do maths well after the introduction of the calculator.

    Arithmetic is a very, very small subset of math.

AI has not yet fully aligned with human thinking, but some people whip up euphoria that it is already surpassing human thinking. Only after alignment and surpassing could AI think from an outside-in view; for now it is still inside-out.