Comment by munksbeer
20 days ago
I guide the AI. If I see it produce stuff that I think can be done better, I either just do it myself or point it in the right direction.
It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that's our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love.
As far as I can tell, what's not to love about that?
If you’re repeatedly prompting, I will defer to my usual retort about LLM coding: programming is about translating unclear requirements from a verbose language (English) into a terse one (a programming language). It’s generally much faster for me to write the terse language directly than to play a game of telephone with an intermediary in the verbose language, hoping it (maybe) translates my intentions into the terse language.
In your example, you mention that you prompt the AI and if it outputs sub-par results you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and just don’t bother with the LLM for the stuff it’s not good at. Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you, that’s the LLM augmenting you.
Enjoy your sensible use of LLMs! But LLMs are not the silver bullet that billions of dollars of investment desperately want us to believe they are.
> as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with
Your use of the word "should" is pointing to some ideal that doesn't exist anymore.
In current actual reality, you do whatever your employer gives you to do, regardless of your job title.
If you have 40 years of broad development experience but your boss tells you to build more CRUD web apps or start looking for another job in the current ATS hell, then the choice whether to use coding agents seems obvious to me.
I think the point is that if you're building yet-another-CRUD web app, why aren't you abstracting more of it away already? It's not like we don't have the facilities for this in programming languages already.
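To make that point concrete, here is a minimal sketch of the kind of abstraction ordinary language facilities already give you; the names (`CrudRepo`, `User`) are illustrative, not from anyone's actual codebase, and a real app would back this with a database rather than a dict:

```python
from dataclasses import dataclass
from typing import Dict, Generic, TypeVar

T = TypeVar("T")

class CrudRepo(Generic[T]):
    """One generic implementation instead of N near-identical per-model CRUD classes."""

    def __init__(self) -> None:
        self._items: Dict[int, T] = {}
        self._next_id = 1

    def create(self, item: T) -> int:
        item_id = self._next_id
        self._items[item_id] = item
        self._next_id += 1
        return item_id

    def read(self, item_id: int) -> T:
        return self._items[item_id]

    def update(self, item_id: int, item: T) -> None:
        if item_id not in self._items:
            raise KeyError(item_id)
        self._items[item_id] = item

    def delete(self, item_id: int) -> None:
        del self._items[item_id]

@dataclass
class User:
    name: str

# Each new model reuses the same repository logic instead of re-prompting for it.
users = CrudRepo[User]()
uid = users.create(User(name="ada"))
print(users.read(uid).name)  # ada
```

Once the repetitive part is factored out like this, the remaining per-app work is exactly the part an LLM is least helpful with.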
> programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language
Why are we uniquely capable of doing that, but an LLM isn't? In plan mode I've seen them ask for clarifications and gather further requirements.
Important business context can also be provided to them.
An LLM isn’t (yet?) capable of remembering a long-term representation of the codebase. Neither is it capable of remembering a long-term representation of the business domain. AGENTS.md can help somewhat but even those still need to be maintained by a human.
But don’t take it from me - go compete with me! Can you do my job (which is 90% talking to people to flesh out their unclear business requirements, and only 10% actually writing code)? If so, go right ahead! But since the phone has yet to stop ringing, I assume LLMs are nowhere near there yet. Btw, I’m helping people who already use LLM-assisted programming and reach out to me because they’ve reached its limitations and need an actual human to sanity-check.
We are uniquely capable of doing that because we invented that :) It’s a self-serving definition, a job description.
This isn’t an argument against LLMs’ capabilities. But the burden of proof is on the LLMs’ side.
> Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with.
That doesn't seem realistic to me.