Comment by bdangubic
9 hours ago
> It’s shipping it. You can have every one vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold out until it’s right, etc. The more people you have to slower that goes in my experience. AI tools don’t make that part faster.
This type of comment is everything that is wrong with our industry. If "shipping it" is an issue, there is a colossal failure throughout the entire organization. My team "shipped" 11 times yesterday, 7 on Monday, 21 on Friday... "shipping" is a non-event if you know what the F you are doing. If you don't, you should learn. If adding more people to help you with the amazing shit you are doing makes you slower, you have a lot of work to do up and down your ladder.
Maybe it's just my luck, but most engineering teams I've worked with that were building some kind of network-facing service in the last 16-some-odd years have tried to implement continuous delivery of one kind or another. It usually started off well but ended up just as slow as the versioned-release system they used before.
It sounds like your team is the exception? Many folks I talk to have similar stories.
I've worked with teams to build out a well-oiled continuous delivery system. With code reviews, integration gating, feature flags, a blue-green deployment process, and all of the fancy o11y tools, we shipped several times a day. And people were still afraid to ship a critical feature on a Friday in case there had to be a rollback... still a pain.
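(For anyone unfamiliar: "feature flags" just means gating unfinished code behind a runtime toggle so it can ship to production dark. A minimal sketch of the idea, assuming an env-var store and a made-up flag name; real setups use a config service or a vendor SDK with per-user rollouts:)

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from the environment, e.g. FLAG_NEW_CHECKOUT=1.

    The env-var store and flag naming here are illustrative only;
    production systems typically back this with a config service.
    """
    value = os.environ.get(f"FLAG_{name.upper()}")
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "on", "yes")

def checkout():
    # The half-finished code path ships to production but stays dark
    # until the flag is flipped -- no release branch needed.
    if flag_enabled("new_checkout"):
        return "new checkout flow"   # hypothetical new path
    return "legacy checkout flow"    # existing behavior
```

The point is that deploys and releases decouple: code merges and deploys continuously, and flipping the flag is the actual release event.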
And all of that took far more time and effort than writing the code in the first place. You could get a feature done in an afternoon, and it would take days to clear the merge queue, get through reviews, make it through the integration pipeline, and see the light of production. All GenAI had done there was increase the input volume to the slowest part of the system.
People were still figuring out the best way to use LLM tools at the time, though. Maybe there are teams who have figured it out. Or else they've just stopped caring and don't mind sloppy, slow, bloated software that struggles to keep one nine of availability.