Comment by risyachka
2 months ago
Closed PRs, commits, LoC, etc. are useless vanity metrics.
With AI code you have more LoC and NEED more PRs to fix all its slop.
In the end you have increased numbers with a net negative effect.
Most of those studies call this out and try to control for it where possible (edit: "it" here being the usual limitations of LoC and PRs as measures of productivity). But to your point: no, there is still a strong net positive effect:
> https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest scale with actual commits from 100K developers across 600+ companies, and *tries to account for reworking AI output*. Same guys behind the "ghost engineers" story.)
Emphasis added. They modeled a way to detect when AI output is being reworked and still found a 15-20% increase in throughput. Specific timestamp: https://youtu.be/tbDDYKRFjhk?t=590&si=63qBzP6jc7OLtGyk
Could you avoid uncertainties like this by measuring something like revenue growth before and after AI adoption, given enough data?
Hmm, I'm not an economist, but I have seen other studies that look at things at the firm level, so it should definitely be possible. A quick search on Google and SSRN turned up some studies, but they seem to focus on productivity rather than revenue; not sure why. Maybe it's because such studies depend on the available data, and a lot of key information may be hidden, e.g. the revenues of privately held companies, which make up a large part of the economy.