Comment by embedding-shape
5 days ago
So there seems to be a shared understanding of how difficult "measure your results" would be in this case, so could we also agree that asking someone:
> I wonder if they have measured their results? [...] Can you provide data that objects this view, based on these (celebrity) developers or otherwise?
isn't really fair? Because not even you or I really know how to do so in a fair and reasonable manner, unless we start to involve trials with multiple developers and so on.
> isn't really fair? Because not even you or I really know how to do so in a fair and reasonable manner, unless we start to involve trials with multiple developers and so on.
I think in a small conversation like this, it's probably not entirely fair, no.
However, we're hearing similar things from much larger organisations who definitely have the resources to do studies like this, and yet there's very little decent work available.
In fact, a lot of the time they are deliberately misleading people (e.g. "25% of our code is generated by AI", when that's really Copilot or other autocomplete). That 25% stat was probably true historically with JetBrains products and any form of code generation (for protobufs et al.), so it's wildly deceptive.