Comment by andrethegiant

18 hours ago

Bragging about token usage is like bragging about LoC written.

When I was at Amazon last year, the bragging about AI (from the AI poo-bah in my section of Amazon, no less) included "look at the total line count of commits from the heaviest AI users!"

So if the AI screwed something up, rewrote it, and then screwed it up again, needing yet another rewrite, that counted as more positive than doing it correctly, and simply, the first time.

It’s honestly 10x worse than LOC. At least in the human era, LOC had some correlation with shipping features.

It’s more like bragging about compiler cycles spent.

  • I don't know where you're working, but LLM-enhanced development has skyrocketed our rate of feature development. As an example, a project roadmapped at 7 months was delivered in only 4.5 because of CC/Codex.

    I'm confused how anyone could believe it isn't an enhancer, unless they have refused to use any of the technologies.

    • Yeah, I’ve experienced much the same as you. It’s overwhelmingly clear from everything it’s enabled for us that we’re going far, far faster than we ever have, and the guardrails we have in place have helped protect the architecture and made it even harder to commit a bad PR. Reading these comments, I’m sometimes left wondering what experiences people are having elsewhere that have left them this soured on its use in business.

    • You're measuring success by time to delivery, which is a reasonable metric. Same with volume of features shipped. Also good. LoC or tokens burned? Not so much.

Obligatory:

Negative 2000 Lines of Code

https://news.ycombinator.com/item?id=44381252