When I was at Amazon last year, the bragging (from the AI pooh-bah in my section of Amazon, note) about AI included "look at the total line count of commits from the heaviest AI users!"
So if AI screws something up and re-writes it and then screws it up again, needing another re-write, that counted as more positive than if it was done correctly, and simply, the first time.
I don't know where you're working, but LLM-enhanced development has skyrocketed our rate of feature development. As an example, a project roadmapped to take 7 months was delivered in only 4.5 because of CC/Codex.
I'm confused how anyone could believe it isn't an enhancer, unless they have refused to use any of the technologies.
Yeah, I’ve experienced much the same as you. It’s overwhelmingly clear from everything it’s enabled for us that we’re going far, far faster than we ever have, and the guardrails we have in place have helped guard the architecture and make it even harder to commit a bad PR. Sometimes, reading these comments, I’m left wondering what sorts of experiences people are having elsewhere that have left them this soured on its usage in business.
You're measuring success by time to delivery, and that's a reasonable metric. Same with volume of features shipped; also good. LoC or tokens burned... not so much.
This is like when the Pointy Haired Boss offers a bounty for fixing bugs and Wally pumps his fist and says “I’m gonna go code myself a Porsche!”
It is almost as if Dilbert was a documentary.
It’s honestly 10x worse than LOC. At least in the human era, LOC had some correlation to shipping features.
It’s more like bragging about compiler cycles spent.
Obligatory:
Negative 2000 Lines of Code
https://news.ycombinator.com/item?id=44381252
Versus my sibling comment to yours, I actually sent that to some internal folks after the bit about AI+total lines committed was said.
I’m surprised that lines removed isn’t something your bosses at the time were also advocating for, TBH. I don’t blame you for looking around.
was there any kind of response or reaction to that? it’s something i would have done, and it probably wouldn’t have gone well. xD
Timeless classic.