Comment by gritspants

10 days ago

My disillusionment comes from the feeling I am just cosplaying my job. There is nothing to distinguish one cosplayer from another. I am just doordashing software, at this point, and I'm not in control.

I don’t get this at all. I’m using LLMs all day and I’m constantly having to make smart architectural choices that less experienced devs won’t be making. Are you just prompting and going with whatever the initial output is, letting the LLM make the decisions? Every moderately sized task should start with a plan. I can spend hours planning, going off and thinking, coming back to the plan and adding or changing things; sometimes it will be days before I tell the LLM to “go”. I’m also constantly optimising the context available to the LLM and writing more specific skills to improve results. It’s very clear to me that knowledge and effort are still crucial to good long-term output… Not everyone will get the same results; in fact, everyone is NOT getting the same results, and you can see this by reading the wildly different feedback on HN. To some, LLMs are a force multiplier, while others claim they can’t get a single piece of decent output…

I think using these tools in a way that makes you feel like this is a choice. You’re choosing not to be in control and to do as little as possible.

  • Exactly.

    Once you start using it intelligently, the results can be really satisfying and helpful. People complaining about 1,000 lines of code being generated? Ask it to generate functions one at a time and keep implementations small. People complaining about having to run a linter? Ask it to run the linter automatically after each change. People complaining about losing track? Have it log every modification to a file.

    I think you get my point. You need to treat it as a super-powerful tool: it can do so many things that you have to guide it if you want a result that conforms to what you have in mind.

  • One challenge: are those decisions making a tangible difference?

    We won't know until the code being produced, especially greenfield code, reaches any kind of maturity. Five years at least?

    • It's not that challenging. The answer is: it depends.

      It's like a junior dev writing features for a product every day vs a principal engineer. The junior might add a feature with O(n^2) performance, while the principal has seen this before and writes it in O(log n).

      If the feature never reaches significant scale, the "better" solution doesn't matter. But it might!

      The principal may write it once and it is solid and never touched, but the junior's version might be good enough to never need revisiting. The same goes for an LLM with the right operator.
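      As a toy sketch of that O(n^2) vs O(log n) contrast (hypothetical function names, Python assumed): both functions below find which items of one list appear in another, but the first does a linear scan per lookup while the second sorts once and binary-searches.

      ```python
      from bisect import bisect_left

      def common_items_quadratic(a, b):
          # Nested scan: `x in b` walks the whole list, so O(n*m) overall.
          return [x for x in a if x in b]

      def common_items_sorted(a, b):
          # Sort once, then binary-search each lookup:
          # O((n + m) log m) instead of O(n*m).
          b_sorted = sorted(b)
          out = []
          for x in a:
              i = bisect_left(b_sorted, x)
              if i < len(b_sorted) and b_sorted[i] == x:
                  out.append(x)
          return out

      print(common_items_quadratic([3, 1, 4], [4, 5, 1]))  # [1, 4]
      print(common_items_sorted([3, 1, 4], [4, 5, 1]))     # [1, 4]
      ```

      For small inputs the two are indistinguishable, which is exactly the point above: the "better" version only pays off if the feature ever reaches real scale.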


    • What? Of course it makes a difference when I direct it away from a bad solution towards a good one. I know as soon as I review the output whether it has done what I asked; if it hasn't, I make a correction. Why would I need to wait 5 years? That makes no sense; I can see the output.

      If you're using LLMs and you don't know what good or bad output looks like, then of course you're going to have problems, but such a person would have the same problems without the LLM...


100% there... it's getting to the point where a project manager reports a bug AND also pastes a response from Claude (he ran Claude against our codebase) on how to fix it. I'm just copying what Claude said and making sure the thing compiles (.NET). What lets me sleep at night, for now, is the fact that Claude isn't supporting 9pm deployments and AWS infra... it's already writing code, but not supporting it yet.

What kind of software are you writing? Are you just a "code monkey" implementing perfectly described Jira tickets (no offense meant)? I cannot imagine feeling this way with what I'm working on. Writing code is just a small part of it; most of the time is spent trying to figure out how to integrate the various (undocumented and actively evolving) external services involved in a coherent, maintainable, and resilient way. LLMs absolutely cannot figure this out themselves. I have to figure it out myself and then write it all into their context, and even then they mostly come up with sub-par, unmaintainable solutions if I wasn't being precise enough.

They are amazing for side projects, but not for serious code with real-world impact, where most of the context lives in multiple people's heads.

  • No, I am not a code monkey. I have an odd role working directly for an exec in a highly regulated industry, managing their tech pursuits/projects. The work can range from exciting to boring depending on the business cycle. Currently it is quite boring, so I've leaned into using AI a bit more just to see how I like it. I don't think that I do.