Comment by MrGilbert

6 days ago

And what you will get in return is professional software developers looking at vibe-coded modules that already went into production, stating that "we will never ever touch this", as they don’t want to be responsible for something they would have never put into production in the first place.

Now, they see themselves challenged to defend against the non-technical departments, because all they see are some elitist developers, that deem something as "not good enough", which, from a user standpoint, "is working quite well".

However - it's unmaintainable. That whole situation is a mess, and it's becoming bigger and bigger.

Asking someone to maintain a "vibecoded" project isn't vibecoding anymore, by definition. I feel this whole thing is going the "AGI" way. Everyone is shouting above everyone else, using different definitions and biases, and there is 0 productive discussion going on.

Vibe coding - you don't care about the code. You don't look at the code. You just ask, test that what you received works, and go on with your life.

LLM-assisted coding - you care about the code. You will maintain that code. You take responsibility and treat it as any other software development job, with everything that's required.

Same same, but different.

  • From personal experience, I'd like to add "I don't know what I'm doing, but LLM helps me pretending that I do" coding. And yes, that code ended up in production and caused issues. It was coded outside of the development department.

    The productive discussion left the chat some shareholder rounds ago.

    • Is it much more different (in quality) to "I don't know what I'm doing, but [SO/learn x in 24h bootcamp] helps me pretend that I do"?

      I guess I see your point in quantity. I could see how this would be more widespread than c/p from SO. And the results would "look" better crafted at a glance, but might explode later on.

      1 reply →

  • It’s a spectrum.

    I care when it doesn’t just work.

    I hardly look when it does.

I ran into an AI coded bug recently the generated code had a hard coded path that resolved another bug. My assumption is the coder was too lazy to find the root cause of the bug and asked the LLM to "make it like this". The LLM basically set a flag to true so the business logic seems to work. It shouldn't have got past the test but whatever.

In another code base, all the code was written with this pattern. Its like the new code changed what the old code did. I think that 'coder' kept a big context window and didn't know how to properly ask for something. There was 150 line function that only needed to be 3 lines, a 300 line function that could be done in 10 etc. There were several a sections where the LLM moved the values of a list to another list and then looped through the new list to make sure the values were in the new list. It did this over and over again.