
Comment by Nextgrid

20 days ago

If you’re repeatedly prompting, I will defer to my usual retort when it comes to LLM coding: programming is about translating unclear requirements from a verbose (English) language into a terse (programming) language. It’s generally much faster for me to write the terse language directly than play a game of telephone with an intermediary in the verbose language for it to (maybe) translate my intentions into the terse language.

In your example, you mention that you prompt the AI and if it outputs sub-par results you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and just don’t bother with the LLM for the stuff it’s not good at. Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you, that’s the LLM augmenting you.

Enjoy your sensible use of LLMs! But LLMs are not the silver bullet that billions of dollars of investment desperately want us to believe they are.

> as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with

Your use of the word "should" is pointing to some ideal that doesn't exist anymore.

In current actual reality, you do whatever your employer gives you to do, regardless of your job title.

If you have 40 years of broad development experience but your boss tells you to build more CRUD web apps or start looking for another job in the current ATS hell, then the choice whether to use coding agents seems obvious to me.

  • I think the point is that if you're building yet-another-CRUD web app, why aren't you abstracting more of it away already? It's not like we don't have the facilities for this in programming languages already.

    • The main issue with current LLM hypers is the completely unrealistic scenarios they come up with. When building a CRUD app, the most obvious solution is to use a framework that takes care of the common use cases. And such a framework will have loads of helpers and tools to speed up boilerplate.
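To make the abstraction point concrete: a minimal sketch in plain Python (the names `Repository` and `Post` are hypothetical, purely for illustration) of how ordinary language facilities like generics already let you write CRUD boilerplate once instead of per model — the kind of abstraction frameworks ship out of the box.

```python
# Hypothetical sketch: one generic repository covers create/read/update/delete
# for any model type, instead of re-writing the same four functions per model.
from dataclasses import dataclass
from typing import Dict, Generic, TypeVar

T = TypeVar("T")


class Repository(Generic[T]):
    """In-memory CRUD store; a real app would swap the dict for a database."""

    def __init__(self) -> None:
        self._items: Dict[int, T] = {}
        self._next_id = 1

    def create(self, item: T) -> int:
        item_id = self._next_id
        self._items[item_id] = item
        self._next_id += 1
        return item_id

    def read(self, item_id: int) -> T:
        return self._items[item_id]

    def update(self, item_id: int, item: T) -> None:
        if item_id not in self._items:
            raise KeyError(item_id)
        self._items[item_id] = item

    def delete(self, item_id: int) -> None:
        del self._items[item_id]


@dataclass
class Post:
    title: str
    body: str


posts: Repository[Post] = Repository()
pid = posts.create(Post("hello", "world"))
print(posts.read(pid).title)  # hello
```

A web framework's scaffolding does essentially this (plus routing, validation, and views), which is why hand-prompting an LLM through CRUD boilerplate solves a problem that was already solved.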

> programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language

Why are we uniquely capable of doing that, but an LLM isn't? In plan mode I've seen them ask for clarifications and gather further requirements.

Important business context can also be provided to them.

  • An LLM isn’t (yet?) capable of remembering a long-term representation of the codebase. Neither is it capable of remembering a long-term representation of the business domain. AGENTS.md files can help somewhat, but even those still need to be maintained by a human.

    But don’t take it from me - go compete with me! Can you do my job (which is 90% talking to people to flesh out their unclear business requirements, and only 10% actually writing code)? If so, go right ahead! But since the phone has yet to stop ringing, I assume LLMs are nowhere near there yet. Btw, I’m helping people who already use LLM-assisted programming and reach out to me because they’ve hit the limits of that approach and need an actual human to sanity-check their work.

  • We are uniquely capable of doing that because we invented that :) It’s a self-serving definition, a job description.

    This isn’t an argument against LLMs’ capability. But the burden of proof is on the LLMs’ side.

    • True. That capability might be reserved for AGI. The current implementation does feel like a party trick, and I don't enjoy working with it.

> Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with.

That doesn't seem realistic to me.