
Comment by agentultra

15 hours ago

If you’ve heard it a number of times and refuse to consider what people are saying then maybe I can’t help you.

I’m talking from personal experience of well over twenty years as both a developer, and for a while, a manager.

The slow part isn’t writing code.

It’s shipping it. You can have everyone vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold back until it’s right, etc. The more people you have, the slower that goes in my experience. AI tools don’t make that part faster.

> If you’ve heard it a number of times and refuse to consider what people are saying then maybe I can’t help you.

When someone says “I’ve heard this a thousand times, but…”, it could be that the person is just stupidly obstinate, but it could also mean that they have a considered opinion that it might benefit you to learn from.

“More people slow down projects” is an oversimplified version of the premise in The Mythical Man Month. If that simplistic viewpoint held, Google would employ a grand total of maybe a dozen engineers. What The Mythical Man Month actually says is that more engineers slow down a project that is already behind — i.e., you can’t fix a late project by adding more people.

This does not mean that the amount of code/features/whatever a team can produce or ship is unrelated to the size of the team or the speed at which they can write code. Those are not statements made in the book.

  • Sure, I’m not writing a whole critical analysis of TMMM here and am using an aphorism to make a point.

    Let’s imagine we’re going to make a new operating system to compete with Linux.

    If we have a team of 10 developers we’re probably not going to finish that project in a month.

    If we’re going to add 100 developers we’re not going to finish that project in a month.

    If we add a thousand developers we’re still not going to finish that project in a month.

    But which team should ship first? And keep shipping and release fastest?

    My bet would be on the smaller team. The exact number of developers might vary but I know that if you go over a certain threshold it will slow down.

    People trying to understand management of software projects like to use analogies to factory lines or building construction to understand the systems and processes that produce software.

    Yet producing more code per unit of time isn’t the same as adding value to the process.

    Even adding more people to a factory line has diminishing returns in efficiency.

    There’s a sweet spot, I find.

    As for Google… it’s not a paragon of efficiency from what I hear. Though I don’t work there. I’ve heard stories that it takes a long time to get small changes to production. Maybe someone who does work there could step in and tell us what it’s like.

    As a rule, though, I find that smaller teams, with the right support, can ship faster and deliver higher-quality results in the long run.

    My sample size isn’t large though. Maybe Windows is like the ultimate operating system that is fast, efficient, and of such high quality because they have so many engineers working on it.

> It’s shipping it. You can have everyone vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold back until it’s right, etc. The more people you have, the slower that goes in my experience. AI tools don’t make that part faster.

This type of comment is everything that's wrong with our industry. If "shipping it" is an issue, there is a colossal failure throughout the entire organization. My team "shipped" 11 times yesterday, 7 on Monday, 21 on Friday... "shipping" is a non-event if you know what the F you are doing. If you don't, you should learn. If adding more people to help you with the amazing shit you are doing makes you slower, you have a lot of work to do up and down your ladder.

  • Maybe it's just my luck, but most engineering teams I've worked with that were building some kind of network-facing service in the last 16-some-odd years have tried to implement continuous delivery of one kind or another. It usually starts off well but ends up being just as slow as the versioned-release system they used before.

    It sounds like your team is the exception? Many folks I talk to have similar stories.

    I've worked with teams to build out a well-oiled continuous delivery system. With code reviews, integration gating, feature flags, a blue-green deployment process, and all of the fancy o11y tools... we shipped several times a day. And people were still afraid to ship a critical feature on a Friday in case there had to be a roll-back... still a pain.
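    The feature-flag piece of that pipeline is the part that decouples deploying from releasing. A minimal sketch of the idea, with a hypothetical in-memory flag store standing in for a real flag service (LaunchDarkly, Unleash, a config server, etc. — the names here are invented for illustration):

    ```python
    # Hypothetical in-memory flag store; a real team would call out to
    # a flag service. The point: code ships to production "dark" and is
    # turned on later with a config flip, not a redeploy.

    class FlagStore:
        def __init__(self):
            self._flags = {}

        def set(self, name: str, enabled: bool) -> None:
            self._flags[name] = enabled

        def is_enabled(self, name: str) -> bool:
            # Default to off: an unknown or unshipped flag stays dark.
            return self._flags.get(name, False)

    flags = FlagStore()

    def checkout_total(cart: list) -> float:
        if flags.is_enabled("new-discount-engine"):
            # New code path: deployed, but dark until the flag flips.
            return sum(cart) * 0.9
        return sum(cart)

    # Deployed with the flag off: old behavior, safe to ship on a Friday.
    assert checkout_total([10.0, 20.0]) == 30.0
    # "Released" by flipping the flag — no merge queue, no pipeline run.
    flags.set("new-discount-engine", True)
    assert checkout_total([10.0, 20.0]) == 27.0
    ```

    The rollback story is the same flip in reverse, which is why flags take some of the fear out of Friday deploys — though, as above, they don't make the review and integration gates any faster.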

    And all of that took way more time and effort than writing the code in the first place. You could get a feature done in an afternoon and it would take days to get through the merge queue, get through reviews, make it through the integration pipeline and see the light of production. All GenAI had done there was increase the input volume to the slowest part of the system.

    People were still figuring out the best way to use LLM tools at that time though. Maybe there are teams who have figured it out. Or else they just stop caring and don't mind sloppy, slow, bloated software that struggles to keep one nine of availability.