Comment by whywhywhywhy

3 days ago

I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.

I'd say it's far more tiring working that way, though: you're breaking the satisfaction loop, so you never really get the dopamine you used to get coding by hand. When you had a problem, figuring it out was like solving a puzzle, and you felt satisfaction at the end of it. With AI, it feels like most of my day is spent being a QA rather than a puzzle solver, and it's exhausting. Even when it solves difficult problems for me, the LLM slot machine is far less satisfying than if I'd figured it out myself.

Agree with you for my day job (which is coding a corporate web app), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.

But for my personal projects, I work on games, and by offloading a lot of the coding work to A.I., my puzzle solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' but rather 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.

It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or frequently learn some new API or library. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it; I could write something on an index card in five seconds and shift mid-game most of the time.

A.I. just brought video games closer to that experience, which has actually made them more fun to work on again, because board games have the immense challenge of getting physical games published to worry about (financial/logistical if self-publishing, or social/networking if attempting to get published through a publisher).

  • The puzzle thought was mostly me trying to figure out why AI coding was more emotionally tiring when I'm literally doing less and creating more; maybe it's something else.

I find this interesting as someone who does primarily DevOps: my satisfaction has increased with AI, since for me the code isn't the puzzle but an annoying inconvenience in the way of completing the entire system. For me, QA is a big part of solving the puzzle.

  • DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.

    I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.

>I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.

This feels unfair to the people dealing with your (LLM’s) code. You don’t vet it at all? Or am I reading this wrong?

  • What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffers the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.

    The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.

    It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"

    The profit motive simply does not drive society in this direction.

    The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.

    Avoiding crisis over the longer term is simply not investable.

    "Fair" is not a relevant or useful conception in this context.

    • > What does "fair" have to do with anything?

      Not wasting other people’s time when they expect your work to at least pass a cursory check. It’s selfish and disrespectful. It reflects poorly on you. I don’t know about all that other stuff you wrote, but it’s not really what I’m talking about, so I’ll clarify.

      I don’t know what your high school/college was like, but we used to trade papers for editing. It was universally considered bad practice to send rough/first drafts. It’s disrespectful and wastes the time of people who are being generous with it. You’re offloading your work in a selfish way.

      Simply put: if I want an LLM’s raw results, I’ll prompt it myself. Why involve you, if not for your work, your expertise? Want to use an LLM? Then go for it, but don’t just wipe its muddy boots on my work. At least look at the results.

      Unfortunately, this is becoming even more common with LLMs. I have no problem confronting people about it, because 100% of the time they don’t want it done to them. It’s not even an argument; it’s catching them being selfish, and they know it.


  • My boss gets annoyed if I try to do things without AI, so eventually I caved, but I don't see the point in reading it if that's the culture being pushed at the company.

    Also, anyone else dealing with it is just gonna be dealing with it via AI, so it doesn't really matter.

    If I worked somewhere where the CEO cared about hand-written code, I would be writing it and reading it, but I don't.

    • Not all of us downstream of you are using AI to sort everything, nor is it always applicable.

      I do video editing primarily. Your unedited AI trash = more interpretation by me, which means more work.

  • What makes you think the people dealing with the LLMs' code won't also be using LLMs to "deal with it"?

    We're all now basically junior coders who have no idea what is in the codebase. Without LLMs, we won't be able to "deal" with any of it.

    And I don't like it one bit.

    • Because you can’t assume everyone else is as indifferent to wasting people’s time as you are. Some of us don’t want to actively make our colleagues/customers miserable. That decision forces me to decide whether I will be part of the problem, even if I generally do good work I can stand behind. You’re forcing me into a decision-making process purely out of your desire not to do the bare minimum when working. That’s not right.

      I may also be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.

      What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society who seriously need these features, what am I supposed to do? My knee-jerk options are 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands that clearly don’t see this as an issue, or 3) send it up the chain, where someone else has to ask these questions or - worse - it gets shipped and people who need this stuff are screwed. This is basic ethics.
