Comment by fhd2

6 days ago

Bear in mind that there is a lot of money riding on LLMs leading to cost savings, and development (seen as expensive and a common bottleneck) is a huge opportunity. There are paid (micro) influencer campaigns going on and whatnot.

Also bear in mind that a lot of folks want to be seen as being on the bleeding edge, including famous people. They get money from people booking them for courses and consulting, buying their books, products and stuff. A "personal brand" can have a lot of value. They can't be seen as obsolete. They're likely to talk about what could or will be, more than about what currently is. Money isn't always the motive, for sure; people also want to be considered useful, and they want to genuinely play around, experiment, and see where things are going.

All that said, I think your approach is fine. If you don't inspect what the agent is doing, you're down to faith. Is it the fastest way to get _something_ working? Probably not. Is it the best way to build an understanding of the capabilities and pitfalls? I'd say so.

This stuff is relatively new, I don't think anyone has truly figured out how to best approach LLM assisted development yet. A lot of folks are on it, usually not exactly following the scientific method. We'll get evidence eventually.

> There are paid (micro) influencer campaigns going on and whatnot.

Extremely important to keep in mind when you read about LLMs, agents and whatnot, whether here, on Reddit or elsewhere.

Just the other day I got offered 200 USD to post about some new version of an "agentic coding platform" on HN, which obviously is too little for me to compromise my ethics and morals, but it makes it very clear how much of this must be going on if I, some random user, get offered money just to post about their platform. If I had been offered that 15-20 years ago when I was broke and cleaning hotels, I'd probably have taken them up on it.

  • > which obviously is too little for me to compromise my ethics and morals

    What would be enough to compromise your ethics and morals? I'm sure they can accommodate.

    • Hah, after submitting my comment, I actually thought about it, because I knew someone would eventually ask :)

      I'm fortunate enough to live a very comfortable life after working myself to death, so I think for 20,000,000 USD I'd do it, happily so. 2,000,000 would be too little. So somewhere between those sits the real price to purchase my morals and ethics :)

  • Why limit this to LLMs and agents?

    This is true about everything you see advertised to you.

    When I went to an open source conference (or whatever it was called) in San Diego ~8 years ago, there were so many Kubernetes people. But when you talked with them, nobody was actually using k8s in production; they were clearly devrel/paid people.

    Now it seems to be everywhere... so be careful with what you ignore, too.

    • Because they are pitching to replace 50% of white-collar jobs...

      Doesn't seem like it's that necessary to run marketing campaigns if that's the case... people would just do it if it were possible.

  • How can we be sure you aren't paid by bear cartel to push this post?

    • Bear cartel is a bunch of curmudgeons without tons of VC money. If they are being unfair, it's purely for love of the game.

> This stuff is relatively new, I don't think anyone has truly figured out how to best approach LLM assisted development yet.

Exactly. But as you say, there are so many people riding the hype wave that it is difficult to come to a sober discussion. LLMs are a new tool that is a quantum leap but they are not a silver bullet for fully autonomous development.

It can be a joy to work with LLMs if you have to write the umpteenth JavaScript CRUD boilerplate. And it can be deeply frustrating once projects are more complex.
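For concreteness, the kind of boilerplate in question might look something like this minimal Express-style CRUD resource (a sketch only; the `items` endpoint and in-memory store are hypothetical, not from any particular project):

```js
// Sketch of "the umpteenth CRUD boilerplate": an in-memory Express
// resource. All names here (items, nextId) are illustrative.
const express = require("express");
const app = express();
app.use(express.json());

let nextId = 1;
const items = new Map();

// Create
app.post("/items", (req, res) => {
  const item = { id: nextId++, ...req.body };
  items.set(item.id, item);
  res.status(201).json(item);
});

// Read
app.get("/items/:id", (req, res) => {
  const item = items.get(Number(req.params.id));
  item ? res.json(item) : res.sendStatus(404);
});

// Update (full replace)
app.put("/items/:id", (req, res) => {
  const id = Number(req.params.id);
  if (!items.has(id)) return res.sendStatus(404);
  const item = { id, ...req.body };
  items.set(id, item);
  res.json(item);
});

// Delete
app.delete("/items/:id", (req, res) => {
  res.sendStatus(items.delete(Number(req.params.id)) ? 204 : 404);
});

app.listen(3000);
```

Nothing here is hard, but multiplied across dozens of entities and endpoints it's exactly the repetitive, convention-bound work current models handle well.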

Unfortunately I think benchmaxxing and lm-arena are currently pushing in the wrong direction. But trillions in VC money are at stake, and leaning back, digesting, reflecting and discussing things is not an option right now.

  • > But as you say, there are so many people riding the hype wave that it is difficult to come to a sober discussion. LLMs are a new tool that is a quantum leap but they are not a silver bullet for fully autonomous development.

    While I agree with the latter, I actually think the former point - that hype is making sober discussion impossible - is directionally incorrect. Like a lot of people I speak to privately, I'm making a lot of money directly from software largely written by LLMs (roadmaps compressed from 1-2 years to months since Claude Code was released), but the company has never mentioned LLMs or AI in any marketing, client communications, or public releases. We are all very aware that we need to be able to retire before LLMs swamp or obsolete our niche, and we don't want to invite competition.

    Outside of tech companies, I think this is extremely common.

    > It can be a joy to work with LLMs if you have to write the umpteenth JavaScript CRUD boilerplate.

    There is so much latent demand for slightly customised enterprise CRUD apps. An enormous swathe of corporate jobs are humans performing CRUD and task management. Even if LLMs top out here, the economic disruption from this alone is going to be immense.

    • It is delusional to believe the current frontier models can only write CRUD apps.

      I would think someone would have to only write CRUD apps themselves to believe this.

      It doesn't matter anyway what a person "believes". If anything, I am having the opposite experience: conversing with people is becoming a bigger and bigger waste of time compared to just talking to Gemini. Between Gemini and the average person, it is not Gemini that is hallucinating all kinds of nonsense. It is the opposite.

  • Even for CRUD I'm finding it quite frustrating. The question is no longer whether AI can write the code you specify: it can.

    It just writes terrible code I'd never want to maintain. Can I refactor and have it cleaned up by the AI as well? Sure... but then I need to specify exactly how it should go about it, and, ugh, should I just be writing this code myself?

    It really excels when there are existing conventions within the app it can use as examples.

> This stuff is relatively new, I don't think anyone has truly figured out how to best approach LLM assisted development yet. A lot of folks are on it, usually not exactly following the scientific method. We'll get evidence eventually.

I try to think about other truly revolutionary things.

Was there evidence that GUIs would dramatically increase productivity/accessibility at first? I guess probably not. But the first time you used one, you would understand its value on some kind of intuitive level.

Having the ability to start OpenCode, give it an issue, add a little extra context, and have the issue completed without writing a single line of code?

The confidence of being able to dive into an unknown codebase and become productive immediately?

It's obvious there's something to this even if we can't quantify it yet. The wildly optimistic takes end with developers completely eliminated, but the wildly pessimistic ones - if clear-eyed - should still acknowledge that this is a massive leap in capabilities and our field is changed forever.

  • > Having the ability to start OpenCode, give it an issue, add a little extra context, and have the issue completed without writing a single line of code?

    Is this a good thing? I'm asking why you said it like this; I'm not asking you to defend anything. I'm genuinely curious about your rationale/reasoning/context for why you used those words specifically.

    I ask because I wouldn't willingly phrase it like this. I enjoy writing code. The expression of the idea, while not even close to the value I assign to fixing the thing, still has meaning.

    e.g. I would happily share code my friend wrote that fixed something. But I wouldn't take any pride in it. Is that difference irrelevant to you, or do you still feel that sense of significance when an LLM emits the code for you?

    > should still acknowledge that this is a massive leap in capabilities and our field is changed forever.

    Equally, I don't think I have to agree with this. Our field is likely changed, arguably for the worse if the default IDE now requires a monthly rent payment. But I have only found examples of AI generating boilerplate. If it's not able to copy the code from some other existing source, it's unable to emit anything functional. I wouldn't agree that's a massive leap. Boilerplate has always been the least significant portion of code, no?

    • We are paid to solve business problems and make money.

      People who enjoy writing code can still do so, just not in a business context if there's a more optimal way.

    • Cards on the table: this stuff saps the joy from something I loved doing, and turns me into a manager of robots.

      I feel like it's narrowly really bad for me. I won't get rich, and my field is becoming something far from what I signed up for. My long-developed skills are being devalued by the second.

      I hate that using these tools increases wealth inequality and concentrates power with massive corporations.

      I wish it didn't exist. But it does. And these capabilities will be used to build software with far less labor.

      Is that trade-off worth the negatives to society and the art of programming? Hard to say really. But I don't get to put this genie back in the bottle.

  • You're absolutely right!

    Sorry, couldn't resist :P But I do, in fact, agree, based on my anecdotal evidence and feeling. And I'm bullish that even if we _haven't_ cracked how to use LLMs in programming well, we will, maybe in the form of quite different tools.

    Point is, I don't believe anyone is at the local maximum yet; models have changed too much over the last few years to really get to something stable.

    And I'm also willing to leave some doubt that my impression/feeling might be off. Measuring short-term productivity is one thing. Measuring long-term effects on systems is much harder. We had a few software crises in the past. That's not because people back then were idiots; they just followed what seemed to work. Just like we do today. The feedback loop for this stuff is _long_. Short-term velocity gains are just one variable to watch.

    Anyway, all my rambling aside, I absolutely agree that LLMs are both revolutionary and useful. I'm just careful not to prematurely form a strong opinion on where/how exactly.

  • > The confidence of being able to dive into an unknown codebase and become productive immediately?

    I don't think there's any public evidence of this happening, except for the debacles with LLM-generated pull requests (which are evidence against this happening, not for it).

    I could be wrong, feel free to cite anything.