Comment by JeremyNT

6 days ago

> This stuff is relatively new, I don't think anyone has truly figured out how to best approach LLM assisted development yet. A lot of folks are on it, usually not exactly following the scientific method. We'll get evidence eventually.

I try to think about other truly revolutionary things.

Was there evidence that GUIs would dramatically increase productivity / accessibility at first? I guess probably not. But the first time you used one, you would understand its value on some kind of intuitive level.

Having the ability to start OpenCode, give it an issue, add a little extra context, and have the issue completed without writing a single line of code?

The confidence of being able to dive into an unknown codebase and becoming productive immediately?

It's obvious there's something to this even if we can't quantify it yet. The wildly optimistic takes end with developers completely eliminated, but the wildly pessimistic ones - if clear-eyed - should still acknowledge that this is a massive leap in capabilities and our field is changed forever.

> Having the ability to start OpenCode, give it an issue, add a little extra context, and have the issue completed without writing a single line of code?

Is this a good thing? I'm asking why you said it like this; I'm not asking you to defend anything. I'm genuinely curious about your rationale/reasoning/context for why you used those words specifically.

I ask because I wouldn't willingly phrase it like this. I enjoy writing code. The expression of the idea, while not even close to the value I assign to fixing the thing, still has meaning.

e.g. I would happily share code my friend wrote that fixed something. But I wouldn't take any pride in it. Is that difference irrelevant to you, or do you still feel that sense of significance when an LLM emits the code for you?

> should still acknowledge that this is a massive leap in capabilities and our field is changed forever.

Equally, I don't think I have to agree with this. Our field is likely changed, arguably for the worse if the default IDE now requires a monthly rent payment. But I have only found examples of AI generating boilerplate. If it's not able to copy the code from some other existing source, it's unable to emit anything functional. I wouldn't agree that's a massive leap. Boilerplate has always been the least significant portion of code, no?

  • We are paid to solve business problems and make money.

    People who enjoy writing code can still do so, just not in a business context if there's a more optimal way

    • > We are paid to solve business problems and make money.

      > People who enjoy writing code can still do so, just not in a business context if there's a more optimal way

      Do you mean optimal, or expedient?

      I hate working with people whose idea of solving problems is punting them down the road for the next person to deal with. While I do see people do this kinda thing often, I refuse to be someone who claims credit for "fixing" some problem knowing I'm only creating a worse, or different, problem for the next guy. If you're working on problems that require collaboration, creating more problems for the next guy is unlikely to give you an optimal outcome, because soon no one will willingly work with you. It's possible to fix business problems and maintain your ethics; it just feels easier to abandon them.

  • Cards on the table: this stuff saps the joy from something I loved doing, and turns me into a manager of robots.

    I feel like, narrowly, it's really bad for me. I won't get rich, and my field is becoming something far from what I signed up for. The skills I've spent years developing are being devalued by the second.

    I hate that using these tools increases wealth inequality and concentrates power with massive corporations.

    I wish it didn't exist. But it does. And these capabilities will be used to build software with far less labor.

    Is that trade-off worth the negatives to society and the art of programming? Hard to say really. But I don't get to put this genie back in the bottle.

    • > Cards on the table: this stuff saps the joy from something I loved doing, and turns me into a manager of robots.

      Pick two non-trivial tasks where you feel you can make a half-reasonable estimate of the time they should take, then time yourself. I'd be willing to bet that you don't complete them significantly faster with AI. And if you're not faster using AI, maybe ignore it, like I and many others do. If you enjoy writing code, keep writing code, and ignore the people lying because they need to spread FUD so they can sell something.

      > But I don't get to put this genie back in the bottle.

      Sounds like you've already bought into the meme that AI is actually magical and can do everything the hype train says. I'm unconvinced. Just because there's smoke coming from the bottle doesn't mean it's a genie. What's more likely: that magic is real, or that someone's lying to sell something?


You're absolutely right!

Sorry, couldn't resist :P But I do, in fact, agree based on my anecdotal evidence and feeling. And I'm bullish that even if we _haven't_ cracked how to use LLMs well in programming, we will, maybe in the form of quite different tools.

Point is, I don't believe anyone is at the local maximum yet; models have changed too much over the last few years to really settle on something stable.

And I'm also willing to allow that my impression/feeling might be off. Measuring short-term productivity is one thing. Measuring long-term effects on systems is much harder. We had a few software crises in the past. That's not because people back then were idiots; they just followed what seemed to work. Just like we do today. The feedback loop for this stuff is _long_. Short-term velocity gains are just one variable to watch.

Anyway, all my rambling aside, I absolutely agree that LLMs are both revolutionary and useful. I'm just careful not to prematurely form a strong opinion on where/how exactly.

> The confidence of being able to dive into an unknown codebase and becoming productive immediately?

I don't think there's any public evidence of this happening, except for the debacles with LLM-generated pull requests (which are evidence against it, not for it).

I could be wrong, feel free to cite anything.