Comment by davydm

6 hours ago

This post brings up a lot of (imo true) points that I honestly can't share with the ai-lovers at work because they will just get in a huff. But the OP is right - we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc).

It's extra hilarious to hear someone you _thought_ treated their code work as a craft refer to "producing 3 weeks worth of work in the last week" because (a) I don't believe it, not one bit, unless you are the slowest typist on earth and (b) it clearly positions them as a code _consumer_, not a code _creator_, and they're happy about it. I would not be.

Code is my tool for solving problems. I'd rather write code than _debug_ code - which is what code-gen-bound people are destined to do, all day long. I'd rather not waste the time on a spec sheet to convince the LLM to lean a little towards what I want.

Where I've found LLMs useful is in documentation queries, BUT (and it's quite a big BUT) they're only any good at this when the documentation is unchanging. Try asking it questions about nuances of the new extension syntax in C# between dotnet 8 and dotnet 10 - I just had to correct it twice in the same session, on the same topic, because it confidently told me stuff that would not compile. Or take Elasticsearch client documentation - the REST side has remained fairly constant, but if you want help with the latest C# library, you have to remind it of that fact all the time - not because it doesn't have any information on the latest stuff, but because it consistently conflates old docs with new libraries. An attempt to upgrade a project from webpack 4 to webpack 5 had the same problems - the LLM confidently telling me to do "X", which would not work in webpack 5. And the real kicker is that if you can prove the LLM wrong (e.g. respond with "you're wrong, that does not compile"), it will try again, and get closer - but, as in the case with C# extension methods, I had to push on this twice to get to the truth.
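
To make that concrete - a minimal sketch (names are mine, not from the session) of the pair of syntaxes it kept conflating: the classic this-parameter extension method that .NET 8 understands, next to the C# 14 extension block that ships with dotnet 10.

```csharp
using System;

public static class StringExtensions
{
    // Classic extension method (C# 3 onwards): a static method whose first
    // parameter is marked 'this'. Compiles on .NET 8 and .NET 10 alike.
    public static bool IsBlank(this string s) => string.IsNullOrWhiteSpace(s);

    // C# 14 extension members (.NET 10): an 'extension' block names the
    // receiver once and can declare extension *properties*, which the old
    // syntax cannot express. A .NET 8 compiler rejects this block outright -
    // which is exactly the kind of answer the LLM kept handing me anyway.
    extension(string s)
    {
        public int WordCount =>
            s.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
    }
}
```

On dotnet 10, "two words".WordCount resolves like any other property; paste the same file into a dotnet 8 project and it won't build.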

Now, if they can't reliably get the correct context when querying documentation, why would I think they could get it right when writing code? At the very best, I'll get a copy-pasta of someone else's trash, and learn nothing. At worst, I'll spin for days, unless I skill up past the level of the LLM and correct it. Not to mention that the bug rate in suggested code that I've seen is well over 80% (I've had a few positive results, but a lot of the time, if it builds, it has subtle (or flagrant!) bugs). And, as I say, I'd rather _write_ code than _debug_ someone else's shitty code. By far.

  > we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc)

Not necessarily. I sometimes have a very clear vision of what I want to build - all the architecture, design choices, etc. It's simply easier to formalize a detailed design/spec document and then code-review whether everything follows what I had in mind than to type everything myself.

It's like the "bucket" tool in Paint. You don't always need to click pixel by pixel if you already know what you want to fill.

  • I don’t think the analogy holds, because the result of a flood fill in Paint is deterministic.

    Whatever your design document/spec, there are generally a lot of ways and variations of how to implement it, and programmers like the OP do care about those.

    You don’t have Paint perform the flood fill five times and then pick the result you like the most (or dislike the least).

    •   > Whatever your design document/spec, there are generally a lot of ways and variations of how to implement it, and programmers like the OP do care about those.

      You could make the same argument about compilers: whatever code you write, your compiler may produce assembly instructions in a nondeterministic way.

      Of course, there are many ways to write the same thing, but the end performance is usually the same (assuming you know what you are doing).

      If your spec is strong enough to hold between different variations, you shouldn't need to worry about the small details.

  • Couldn't agree more. It's also like managing a team of engineers rather than doing the coding yourself. You don't necessarily value the work less, nor do you necessarily have less technical prowess. You're just operating at a higher level.

> This post brings up a lot of (imo true) points that I honestly can't share with the ai-lovers at work because they will just get in a huff. But the OP is right - we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc).

I haven't run into this type yet, thankfully. As an AI lover, I find the architecture of the code more important than before.

* It’s harder to understand code you didn’t write line by line, so readability is more important than it was before.

* Code is being produced faster and with lower bars; code collapsing under its own shitty weight becomes more of a problem than it was before.

* Tests/compiler feedback helps AI self-correct its code without you having to intervene; this is, again, more important than it was before.

All the problems I liked thinking about before AI are still how I spend my time. Do I remember specific ActiveRecord syntax anymore? No. But that was always a Google search away. Do I care about what those ORM calls actually generate SQL-wise and do with the planner? Yes, and in fact it’s easier to get at that information now.
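
For instance - a minimal sketch, not anything from this thread, with illustrative model and context names: ActiveRecord's to_sql has a direct C# counterpart in EF Core's ToQueryString() (EF Core 5+), which renders the SQL a LINQ query will produce without executing it.

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore; // assumes the EF Core + SQLite packages are referenced

public class User
{
    public int Id { get; set; }
    public bool Active { get; set; }
}

public class AppDb : DbContext
{
    public DbSet<User> Users => Set<User>();

    // Illustrative provider and connection string, not from the thread.
    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlite("Data Source=app.db");
}

public static class Program
{
    public static void Main()
    {
        using var db = new AppDb();

        // ToQueryString() (EF Core 5+) renders the SQL without running the
        // query, so you can see what the database planner will actually get.
        var query = db.Users.Where(u => u.Active);
        Console.WriteLine(query.ToQueryString());
    }
}
```

From there it's one EXPLAIN away to see what the planner does with it.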