Comment by branko_d

6 hours ago

From https://kristoff.it/blog/contributor-poker-and-ai/:

"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."

Pretty much sums up the LLM fanbase.

  • I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that, and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot, and that's where the balance sheet goes into the red.

    • "They may speed up the good programmers a little, but those people were able to program anyway without LLMs."

      I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10-repo project I haven't worked on recently" to "for this next step I need a VPN multiplexer written in a language I don't use" to, yeah, "this 10k-line patch lets me see parts of the design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful; sometimes it's more like a lot of help proving a fact about one line of code.

      Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.


    • > However, there are lots of people in the world who live their whole life by vibing

      Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?

      I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.


    • Before LLMs we could already see a growing abundance of half-baked engineers who were only in it for the good pay, willing to work double time to pull things through.

      Management, unsurprisingly, deemed those precious. They could be emailed at any time and would work weekends to fix problems their own kind had caused. Sure, sir.

      They excel at communication. Perfecting the art.

      Now LLMs are there to accelerate the trend.

    • > It's hard to know if LLMs will end up being a net win for the industry.

      True. Regardless of that, with LLMs we are certainly taking on technical debt like never before.


    • > It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

      If you will forgive an appeal to authority:

      The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.

      - Fred Brooks, 1986

    • For at least the last three decades, programming was a field that rewarded utter mediocrity with (relative to other fields) massive remuneration. It has been filled with opportunists for as long as I can remember.


    • > there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason

      This response was 1000% crafted with input from an LLM, or the user spends too much time reading output from LLMs.


    • > Programming was a domain that filtered out those people because they found it hard to succeed at it.

      I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.

      I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.

  • Fanbase, maybe. Software engineers using these projects? Probably forking and updating them themselves.

    FWIW, I've opened a half dozen PRs from LLMs and had them approved. I have some prompts I use that make it very difficult to tell they are AI-generated.

    However if it is a big anti-llm project I just fork and have agents rebase my changes.

    • Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.

  • I'm firmly in the LLM fanbase. Not because I can't type code (I've been doing it for over 17 years, everywhere from low-level hardware drivers in C to web frontends to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".

    I'm not saying that I'm no longer dealing with code at all, though. The way I work is interactively with the LLM, and I pretty much tell it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and expressing it exactly and without ambiguity.
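
    That reference-vs-deep-copy instruction is the kind of detail worth being this explicit about. A minimal Python sketch of the difference (the `settings` dict here is purely illustrative, not from any real project):

```python
import copy

# Illustrative nested object; any mutable structure behaves the same way.
settings = {"retries": 3, "headers": {"Accept": "application/json"}}

shallow = copy.copy(settings)      # top level copied, nested dicts still shared
deep = copy.deepcopy(settings)     # nested dicts fully duplicated

settings["headers"]["Accept"] = "text/plain"

print(shallow["headers"]["Accept"])  # "text/plain" - the shallow copy saw the change
print(deep["headers"]["Accept"])     # "application/json" - the deep copy did not
```

    Telling an LLM "copy the object" without specifying which of these you mean is exactly the kind of ambiguity that produces subtle bugs.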

    But I no longer need to remember most of the syntax for whatever language I happen to be working with at the moment, and can instead spend the time thinking about the high-level architecture: making sure each component does one thing and does it well, with its complexities hidden behind clear interfaces.

    Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

    • This mindset is fine (it's mine essentially too).

      But it absolutely has to be combined with verification/testing at the same speed as code production.


    • > Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

      Any examples of how you see some engineers being left behind?


  • Not really - as with almost everything in life, I imagine there's a normal distribution, in this case of the quality with which people use AI tools.

Fake it 'till you make it. Seems like LLMs have caught on to that too.

You can coax an LLM into doing what you want. Unfortunately, many people don't have the patience or the skill.

  • People who have skill can do the same without LLMs, maybe slightly slower on average but on a more predictable schedule.

    • I wouldn’t say slightly slower; LLMs are massively useful for software engineering in the right hands.

      For some personal projects I still stick to the basics and write everything by hand though. It’s kinda nice and grounding; and almost feels like a detox.

      For any new software engineer, I'm a strong advocate of zero LLM use (except maybe as a Stack Overflow alternative) for your first few months.

  • The chat UX, with a fake human lying to you and framing things emotionally, really doesn't help. And it is pretty much impossible to get away from it, or at least I haven't yet found how.

    I would love to see a model trained to behave way more like a tool instead of auto-completing from Reddit language patterns…