
Comment by gyomu

19 days ago

> it completely transformed my workflow, whether it’s personal or commercial projects

> This has truly freed up my productivity, letting me pursue so many ideas I couldn’t move forward on before

If you're writing in a blog post that AI has changed your life and let you build so many amazing projects, you should link to the projects. Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

Many more senior coders who actually try vibe coding a greenfield project find that it does work, but only for the first ~10kloc. After that the AI, no matter how well you prompt it, will start to accidentally destroy existing features, add unnecessarily convoluted logic, leave behind dead code, add random traces "for backwards compatibility", avoid doing the correct thing because "it is too big of a refactor", fail to understand that the dev database is not the prod database, and avoid migrations. And so forth.

I've got 10+ years of coding experience and I am an AI advocate, but not for vibe coding. AI is a great tool for the boring bits: initializing files, helping figure out various approaches, acting as a first-pass code reviewer, helping with configuration. Those things all work well.

But full-on replacing coders? It's not there yet. Will require an order of magnitude more improvement.

  • > only for the first ~10kloc. After that the AI, no matter how well you try to prompt it, will start to destroy existing features accidentally

    I am using them in projects with >100kloc, this is not my experience.

    At the moment I am babysitting it at any kloc, but I am sure they will get better and better.

    • It's fine at adding features on a non-vibecoded 100kloc codebase that you somewhat understand. It's when you're vibecoding from scratch that things tend to spin out at a certain point.

      I am sure there are ways to get around this sort of wall, but I do think it's currently a thing.


    • Meanwhile, in the grandparent comment:

      > Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

      You are in the 90%.


    • I’m using it in a >200kloc codebase successfully, too. I think a key is to work in a properly modular codebase so it can focus on the correct changes and ignore unrelated stuff.

      That said, I do catch it doing some of the stuff the OP mentioned— particularly leaving “backwards compatibility” stuff in place. But really, all of the stuff he mentions, I’ve experienced if I’ve given it an overly broad mandate.

    • Yes, this is my experience as well. I've found the key is having the AI create and maintain clear documentation from the beginning. It helps me understand what it's building, and it helps the model maintain context when it comes time to add or change something.

      You also need a reasonably modular architecture which isn't incredibly interdependent, because that's hard to reason about, even for humans.

      You also need lots and lots (and LOTS) of unit tests to prevent regressions.
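      The "lots of unit tests" point can be as simple as pinning current behavior before letting an agent touch a module. A minimal sketch in pytest style, using a hypothetical `slugify` helper as a stand-in for real project code:

      ```python
      # Hypothetical example: pin the existing behavior of a small helper
      # before letting an AI agent refactor the module it lives in.

      def slugify(title: str) -> str:
          """Toy helper standing in for real project code."""
          return "-".join(title.lower().split())

      # Regression tests: if an agent "simplifies" slugify and changes its
      # behavior, these fail loudly instead of shipping a silent break.
      def test_slugify_basic():
          assert slugify("Hello World") == "hello-world"

      def test_slugify_collapses_whitespace():
          assert slugify("  a   b ") == "a-b"
      ```

      The helper and its exact contract are made up for illustration; the point is that each behavior the agent must preserve gets a cheap, explicit assertion.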

  • Where are you getting the 10kloc threshold from? Nice round number...

    Surely it depends on the design. If you have ten modular 10kloc components with good abstractions, plus a 10kloc shell gluing them together, you could build much bigger things, no?

  • I agree with you in part, but I think the market is going to shift so that you won’t need so many “mega projects”. More and more, projects will be small and bespoke, built around what the team needs or answering a single question rather than forcing teams to work around an established, dominant solution.

  • I wonder if you can push past 10kloc if you have good static analysis for your tool (I vibecoded one in Python) and good tests. Sometimes good tests aren't possible because there are too many different cases, but with other kinds of code you can cover all the cases with 50 to 100 tests or so.

  • Don't you think it has gotten an order of magnitude better in the last 1-2 years? If it only requires another order of magnitude of improvement to full-on replace coders, how long do you think that will take?

    • Who is liable for the runtime behavior of the system, when handling users’ sensitive information?

      If the person who is liable for the system behavior cannot read/write code (as “all coders have been replaced”), does Anthropic et al become responsible for damages to end users for systems its tools/models build? I assume not.

      How do you reconcile this? We have tools that help engineers design and build bridges, but I still wouldn’t want to drive on an “autonomously-generated bridge may contain errors. Use at own risk” because all human structural engineering experts have been replaced.

      After asking this question many times in similar threads, I’ve received no substantial response except that “something” will probably resolve this, maybe AI will figure it out


  • You’re right, but on the other hand once you have a basic understanding of security, architecture, etc. you can prompt around these issues. You need a couple of years of experience, but that’s far less than the 10-15 years of experience you needed in the past.

    If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.

    • I find that security, architecture, etc is exactly the kind of skill that takes 10-15 years to hone. Every boot camp, training provider, educational foundation, etc has an incentive to find a shortcut and we're yet to see one.

      A "basic" understanding in critical domains is extremely dangerous and an LLM will often give you a false sense of security that things are going fine while overlooking potential massive security issues.


    • > If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.

      I don't feel like most providers keep a model for more than 2 years. GPT-4o got deprecated in 1.5 years. Are we expecting coding models to stay stable for longer time horizons?

If you look at his github you can see he is in the first week of giving in to the vibes. The first week always leads to the person making absurd claims about productivity.

>Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

Maybe they don't feel like sharing yet another half working Javascript Sudoku Solver or yet another half working AI tool no one will ever use?

Probably they are amazed at what they accomplished, but suspect the public won't feel the same.

  • Then, in my opinion, there's nothing revolutionary about it (unless you learned something, which... no one does when they use LLMs to code)

  • Qualsiasi scarafaggio è bello per sua mamma (Italian proverb: every cockroach is beautiful to its mother)

    That's the whole point of sharing with the rest of us. If they write for themselves, a private journal to track their progress, then there is no need to share what has actually been built. If, though, they make grand claims to everybody, then it would be more helpful for people who read the piece to actually be able to see what has been produced. Maybe it's wonderful for the author, but it's not the level of quality required for readers.

  • The article made it seem like the tool had turned them into the manager of a successful company, rather than the author of a half-finished pet project

Here’s mine

https://apps.apple.com/us/app/snortfolio/id6755617457

30kloc client and server combined. I built this as an experiment in building an app without reading any of the code. Even ops is done by claude code. It has some minor bugs but I’ve been using it for months and it gets the job done. It would not have existed at all if I had to write it by hand.

AI is great, harnesses don't matter (I just use codex). Use state-of-the-art models.

GPT-5.2 fixed my hanging WiFi driver: https://gist.github.com/lostmsu/a0cdd213676223fc7669726b3a24...

  • Fixing mediatek drivers is not the flex you think it is.

    • It is if it's something they couldn't do on their own before.

      It's a magical moment when someone is able to AI code a solution to a problem that they couldn't fix on their own before.

      It doesn't matter whether there are other people who could have fixed this without AI tools, what matters is they were able to get it fixed, and they didn't have to just accept it was broken until someone else fixed it.


    • It was an easy fix for someone who already knows how WiFi drivers work and the functions the Linux kernel provides to them. I am not one of those people, though. I could have fixed it myself, but it would have taken a week just to get accustomed to the necessary tools.

Grifters gotta grift. There is so much money on the line and everyone is trying to be an influencer/“thought leader” in the area.

Nobody is actually using AI for anything useful or THEY WOULDN'T BE TALKING ABOUT IT. They’d be disrupting everything and making billions of dollars.

Instead this whole AI grift reads like “how to be a millionaire in 10 days” grifts by people that aren’t, in fact, millionaires.