Comment by ZaoLahma

6 hours ago

I'm firmly in the LLM fanbase. Not because I can't type code (I've been doing it for over 17 years, on everything from low-level hardware drivers in C to web frontends to hobby robot development at home - coding is fun!), but because in my profession it lets me focus more on the abstraction layer, where "it matters".

I'm not saying that I'm no longer dealing with code at all, though. The way I work is interactive with the LLM: I pretty much tell it exactly what to do and how to do it, sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other kind of programming, the only way to get valuable and correct results is to know exactly what you want and to express it exactly, without ambiguity.
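
For instance, that "deep copy, not reference" instruction maps onto something like this minimal Python sketch (the names here are made up, purely to illustrate the distinction):

    import copy

    config = {"retries": 3, "endpoints": ["a", "b"]}

    alias = config                    # copies the reference: both names point at the same object
    snapshot = copy.deepcopy(config)  # deep copy: a fully independent object, nested lists included

    alias["retries"] = 5
    print(config["retries"])    # 5 -> mutated through the alias
    print(snapshot["retries"])  # 3 -> the deep copy is unaffected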

But I no longer need to remember most of the syntax of whatever language I happen to be working in at the moment, and can instead spend my time thinking about the high-level architecture: making sure each component does one thing and does it well, with its complexity hidden behind clear interfaces.
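
Concretely, "one thing well, behind a clear interface" might look like this (a hypothetical Python sketch, not from any real project):

    from typing import Protocol

    class Storage(Protocol):
        # Callers see only save/load; how persistence actually works stays hidden.
        def save(self, key: str, data: bytes) -> None: ...
        def load(self, key: str) -> bytes: ...

    class LocalDiskStorage:
        # One component, one job: persistence on the local disk.
        def __init__(self, root: str) -> None:
            self.root = root

        def save(self, key: str, data: bytes) -> None:
            with open(f"{self.root}/{key}", "wb") as f:
                f.write(data)

        def load(self, key: str) -> bytes:
            with open(f"{self.root}/{key}", "rb") as f:
                return f.read()

Anything that needs persistence depends only on Storage, so an in-memory or cloud-backed implementation can be swapped in without callers changing.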

Engineers who can't, or simply won't, utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happen.

This mindset is fine (it's mine essentially too).

But it absolutely has to be combined with verification/testing at the same speed as code production.

  • I generally do have that mindset, but over the past year of using Claude Code I've noticed that I'm clearly losing my understanding of the internals of projects. I do review the LLM-generated code and understand it; no problem reading or following through. But then someone asks me a question, and I realize... wait, I actually don't know. I remember the instructions I gave and reviewing the code, but I don't have a fine-grained model of the actual implementation crystallized in my mind. I need to check: was that thing implemented the way I thought it was? Wait, it's actually wrong, not matching at all what I thought! It's definitely becoming uncomfortable, and it makes me reconsider my use of Claude Code pretty significantly.

    • I've had this issue too, and I feel it was an important lesson, kind of like getting your first hangover.

      On the other hand, LLM-generated code is better commented than code I write myself, so over a long enough time horizon it may be more understandable later than something I've written by hand (we've all had the experience of forgetting how our own code works).

    • Same experience. I've been writing code for many decades, but that experience doesn't mean I can remember what I read while reviewing generated code. I write small, focused commits, yet I have to take a day off each week to make changes by hand just to keep my mental model of my own codebase up to date, and I still find structures that surprise me. It's not necessarily that the code quality is poor, but it's not the code I (thought I) had designed. It's led to a weakening of my confidence when adding to or changing the existing architecture.

    • I do think this is natural. When you use LLM coding tools, you become a lot more like an architect/staff engineer/manager than a hands-on coder. You set out the spec, come up with the design, and lay out the high-level structure of the project.

      However, this comes at the cost of losing track of the minute details of the implementation, because you didn't write it yourself. I find it analogous to the difference between code I've reviewed and code I've written.

      That said, I've found that asking the AI to summarize the code structure and answer questions about it is a good way to work around this. I might forget faster, but I also pick it back up faster.

  • I've found that for non-trivial features I typically benefit from 3-4 rounds of: are you sure this isn't tech debt; are you sure this is thoroughly tested for (manually insert the applicable cases, because they aren't great at choosing these, even when explicitly asked); are you sure this isn't reinventing wheels, or adding unnecessary complexity by ignoring existing infrastructure it should use, or building something that other existing code would benefit from moving to; are you sure you can't find any bugs; and, in hindsight, are you sure this is the best design?

    Then, after it says yes, I'm sure this is production-ready and we're good to move on, I have Codex and Gemini both review it one last time and ask it to address whatever parts of their feedback are actually valuable.

    Only after all of this do I look at the code myself, review it, and make sure it's coherent.

    Until then, I assume it's garbage.

    I'd estimate this still improves velocity by 10x, and more importantly, allows me to operate at a pace I couldn't without burning out.

  • One-off tasks, and parts of the stack that already have lots of disposable code, don't need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and that was true before AI. Keeping this in mind, AIs can take on some of the verification and testing, too.

> Engineers who can't, or simply won't, utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happen.

Any examples of how you see engineers being left behind?

  • > Any examples of how you see engineers being left behind?

    I don't know where you live, but around where I live in Denmark you'd fail a senior interview in a lot of places for not using AI. Even places that aren't exactly AI fans use AI to some extent.

    The biggest challenge we face right now is figuring out how to develop engineers with enough experience to use the AI tools critically. Especially because you're typically handed agents for various tasks, already configured to know how we want things written.

    • Around here in your southern neighbour, everyone is supposed to be doing AI and is evaluated on it, yet on many projects, if the client doesn't sign off on the use of AI tools, there is no AI to use anyway.

      Additionally, there's a gap between the AI targets set by the C-suite, based on what everyone is saying on TV, and what we can actually deliver given the available data sets, integration points, and naturally those sign-offs for data governance and hallucination guardrails.

  • I'm starting to notice that those who don't use AI end up having to hand tasks over to people who can get them done more quickly.

    It's anecdotal for sure, but a pattern seems to be emerging around me: expectations of velocity are increasing, and those who don't use AI can't keep up.

  • Probably through cognitive surrender. I have one such colleague, and he is driving me crazy. "Claude said that ..."