Comment by feverzsj

9 hours ago

Pretty much sums up the LLM fanbase.

I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

  • "They may speed up the good programmers a little, but those people were able to program anyway without LLMs."

    I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10 repo project I haven't worked on recently" to "for this next step I need a vpn multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful, sometimes more like a lot of help proving a fact about one line of code.

    Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.

    • I use LLMs as a tutor. They tailor their answers exactly to the situation I am in, even when they hallucinate. I can correct them on the fly, and that also serves as training. I try not to copy and paste, and instead type every line of code by hand. That doesn't always happen, but I usually understand the code I am writing.

    • > I'm a good programmer, and it speeds up my work a lot

      The problem with this line of thinking is the same as with "I'm such a good C developer, my code is so safe!".

      And we see what reality tells us instead: yes, there exist people for whom these claims are true, but no, they are not even a decently sized minority.

    • yep. as an expert programmer there are things i did not have access to. for example, i have an embedded-lite hardware project that required a one-line patch to a linux kernel module.

      i know what a kernel module is and im reasonably certain that the patch is safe, but there is no way in hell i would have found that solution (i would have given up). in a world without llms, the project would have died.

    • It's great when I know how the code should look. Sometimes I just can't bring myself to write yet another http handler.

  • Tangential side story, but an interesting one nonetheless.

    I was a food delivery driver back in the mid 00's to the mid teens. Early on, GPS was rare and expensive, so to do deliveries and do them effectively, you had to be able to read a map and mentally plan out efficient routes from the stochastic flow of orders coming out.

    This acted as a natural filter, and "delivery driver" tended to be an interesting class of people, landing somewhere in the neighborhood of "lazy genius". Higher than average intelligence, lower than average motivation.

    Then when smartphones exploded in the early 10's, the bar for delivering fell through the floor, and the job became swamped with people who would be best identified as "lazy unintelligent". Anyone who had a smartphone and not much life motivation was now looking to drive around delivering food for easy money.

    Not saying the job was ever particularly glamorous, but it did have a natural mental barrier that tech tore down, and the result was exactly as one would predict. That being said, I'm not sure end users noticed much difference.

  • > However, there are lots of people in the world who live their whole life by vibing

    Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?

    I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.

    • > Why do they think they are "helping" with hallucinated rubbish that can't even build?

      Because they can't tell the difference between what the machine is outputting, and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and want that reward too.

    • "Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.

      AI is absolutely terrible for people like that, as it's the perfect enabler.

    • > Why do they think they are "helping"

      It's not about helping. It's about the feeling of clout. There are still plenty of people who look at GitHub profile activity to judge job candidates, etc. What gets measured gets repeated.

      I believe that most of the ills of social media would disappear, if we eliminated the "like" and "upvotes" buttons and the view counts. Most open source garbage pull requests may likewise go away if contributions were somehow anonymous.

    • I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.

      This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything which might have touched an LLM or any generative model at some point. Their slur and expletive filled outbursts make every critical response look bad by vague association.

      Having sensible explanations like in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any eternal september.

    • You're asking why oil doesn't act like water. It's not really an impossible task, it's just not one they agree with.

    • It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option

    • LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.

      This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.

  • Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things off.

    Management, unsurprisingly, deemed those precious: they could be emailed at any time and would work weekends to fix problems their own kind had caused. Sure, sir.

    They excel at communication, perfecting the art.

    Now LLMs are there to accelerate the trend.

  • > It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

    If you will forgive an appeal to authority:

    The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.

    - Fred Brooks, 1986

  • > It's hard to know if LLMs will end up being a net win for the industry.

    True. Regardless of that, with LLMs we are certainly taking on technical debt like never before.

  • wouldn't LLMs do all the tasks that deterministic programs are doing? like ChatGPT filing taxes for you instead of using TurboTax.

  • For at least the last 3 decades programming was a field that rewarded utter mediocrity with (relatively to other fields) massive remuneration. It has been filled with opportunists for as long as I remember.

    • You are talking about bad programmers who are at least able to fool their managers for several years. The people the OP is talking about could not even do that; most likely they would have dropped out in the first week of trying to program full time, since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.

    • I think it's worth noting that a more impactful, and maybe even bigger, proportion of those opportunists is in management.

      Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.

    • This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it was ever otherwise!). But maybe we'll get better at measuring integrity, and make hiring/collaboration decisions based on it.

  • > there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason

    This response 1000% was crafted with input from an LLM, or the user spends too much time reading output from LLMs.

    • I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.

    • I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:

      > There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.

      I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.

  • > Programming was a domain that filtered out those people because they found it hard to succeed at it.

    I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.

    I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.

I'm firmly in the LLM fanbase. Not because I can't type code (I've been doing it for over 17 years, everywhere from low-level hardware drivers in C to web frontend to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".

I'm not saying that I'm no longer dealing with code at all though. The way I work is interactively with the LLM, pretty much telling it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and expressing it exactly and without ambiguity.

But I no longer need to remember most of the syntax for the language I happen to work with at the moment, and can instead spend time thinking about the high level architecture. To make sure each involved component does one thing and one thing well, with its complexities hidden behind clear interfaces.

Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

  • This mindset is fine (it's mine essentially too).

    But it absolutely has to be combined with verification/testing at the same speed as code production.

    • I've found that for non-trivial features, I typically benefit from 3-4 rounds of questions: are you sure this isn't tech debt? Are you sure this is thoroughly tested (manually insert the applicable cases, because they aren't great at this even when explicitly asked)? Are you sure this isn't reinventing wheels, or adding unnecessary complexity by not using existing infrastructure it should, or that other existing code wouldn't benefit from moving to this? Are you sure you can't find any bugs? In hindsight, are you sure this is the best design?

      Then, after it says yes, I'm sure this is production ready and we're good to move on, you have Codex and Gemini both review it one last time, and ask it to address whatever feedback is valuable.

      After all this, it's the only time I'll look at the code and review it and make sure it's coherent.

      Until then, I assume it's garbage.

      I'd estimate this still improves velocity by 10x, and more importantly, allows me to operate at a pace I couldn't without burning out.

    • I generally do have that mindset, but over the past year of using Claude Code I've noticed that I'm clearly losing my understanding of the internals of projects. I do review LLM-generated code and understand it; no problem reading and following through. But then someone asks me a question, and I'm like… wait, I actually don't know. I remember the instructions I gave and reviewing the code, but I don't actually have a fine-details model of the implementation crystallized in my mind. I need to check: was that thing implemented the way I thought it was or not? Wait, it's actually wrong, not matching at all what I thought! It's definitely becoming uncomfortable and makes me reconsider my use of Claude Code pretty significantly.

    • One-off tasks and parts of the stack that already have lots of disposable code do not need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and this was the case before AI. Keeping this in mind, AIs can also do some verification and testing, too.

  • > Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

    Any examples how you see some engineers being left behind?

    • > Any examples how you see some engineers being left behind?

      I don't know where you live, but around where I live in Denmark you'd fail a senior interview in a lot of places for not using AI. Even places which aren't exactly AI fans use AI to some extent.

      The biggest challenge we face right now is figuring out how you create developers who have enough experience to use the AI tools in a critical manner. Especially because you're typically given agents for various tasks, which are already configured to know how we want things to be written.

    • I'm starting to notice how those who don't use AI end up having to hand tasks over to people who can get them done quicker.

      It is anecdotal for sure, but it's a pattern that seems to be emerging around me that expectations of velocity increases, and those who don't use AI can't keep up.

    • Probably in cognitive surrender. I have one such colleague and he is driving me crazy. "Claude said that ..."

Fanbase, maybe. Software engineers using these projects? Probably forking and updating themselves.

FWIW, I've opened a half dozen PRs from LLMs and had them approved. I have some prompts I use to make the changes very difficult to identify as AI-generated.

However, if it is a big anti-LLM project, I just fork and have agents rebase my changes.

  • Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.

Not really - I imagine as with almost everything in life there's a normal distribution, in this case of the quality with which people use AI tools.

  • The normal distribution doesn't account for things like "huge megacorporations pour billions of dollars into accelerating product adoption" or "other companies force their employees to use AI whether they want to or not" though.