
Comment by AstralStorm

4 days ago

> Productivity was tracked through metrics such as completed tasks (pull requests), code commits, and successful builds.

Making untested garbage faster so tasks can be checked off quicker. Reopen rate, please? New-bug-task rate? Nobody looked...
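
Not that a reopen rate is even hard to measure. A minimal sketch against the GitHub REST API is all it takes; the repo name, the sample size, and the lack of auth and pagination are all placeholders here:

```python
# Sketch of the metric the study never reported: of a sample of closed
# pull requests, how many were later reopened? Real use needs a token
# and pagination; this is just the shape of the measurement.
import requests

REPO = "example-org/example-repo"  # hypothetical repository
API = f"https://api.github.com/repos/{REPO}"

def reopen_rate(sample_size: int = 100) -> float:
    """Fraction of recently closed PRs that have a 'reopened' event."""
    prs = requests.get(
        f"{API}/pulls",
        params={"state": "closed", "per_page": sample_size},
        timeout=30,
    ).json()
    reopened = 0
    for pr in prs:
        # PRs are issues under the hood, so the issue events API applies.
        events = requests.get(
            f"{API}/issues/{pr['number']}/events", timeout=30
        ).json()
        if any(e["event"] == "reopened" for e in events):
            reopened += 1
    return reopened / len(prs) if prs else 0.0

print(f"reopen rate: {reopen_rate():.1%}")
```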

> The study also monitored code quality via build success rates. Importantly, increased productivity did not come at the cost of more errors, showing that Copilot helped developers code faster and more accurately.

It builds, therefore it works. And was the test suite also AI-generated?

Feels like we're back in primary school learning programming.

Classic case of gaming the metrics.

You sound like you haven't given Copilot and friends a thorough evaluation. If you had, you'd know that...

- people don't outsource to these tools; they pair-program with them, because the tools cannot do complex tasks on their own

- they are quite good at running tests with coverage, inspecting the results, and fixing both the tests and the code (see the sketch after this list)

- people make mistakes too; expecting AI to be perfect is unreasonable. They are tools, not replacements
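
To make the second point concrete, here is roughly the loop these tools automate, sketched with pytest and the pytest-cov plugin; `propose_fix` is a placeholder for whatever the assistant actually does with the failure output:

```python
# Rough sketch of the run-inspect-fix loop, assuming pytest with the
# pytest-cov plugin installed. propose_fix() stands in for the
# assistant editing tests and/or code between attempts.
import subprocess

def run_tests() -> subprocess.CompletedProcess:
    """Run the suite with coverage and capture the full report."""
    return subprocess.run(
        ["pytest", "--cov", "--cov-report=term-missing"],
        capture_output=True,
        text=True,
    )

def propose_fix(failure_output: str) -> None:
    """Placeholder: the assistant reads the failures and edits the
    tests and/or the code under test before the next attempt."""
    print(failure_output)

for attempt in range(3):        # bounded retries, not blind looping
    result = run_tests()
    if result.returncode == 0:  # tests pass; coverage gaps remain
        print(result.stdout)    # visible in the term-missing report
        break
    propose_fix(result.stdout + result.stderr)
```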

> Making untested garbage faster so tasks can be checked off quicker

> Feels like we're back in primary school learning programming.

> Classic case of gaming the metrics.

Bias this plainly on display causes others to discount your opinion.

  • > people make mistakes too; expecting AI to be perfect is unreasonable. They are tools, not replacements

    This is the key. These tools are an improvement for many people, but others pooh-pooh them for not being perfect. Working in a team with other programmers (or looking back at my own older code), I often see mistakes that are obvious to me now.