Comment by trashb

18 days ago

To wrap this up, what I was trying to say is that the feeling of being faster may not align with reality. Even for people who have a good understanding of the matter it may be difficult to estimate. So I would say: be skeptical of claims like this and try to quantify the effect in a way that matters for the tasks you do. This is something managers of software projects have been trying to tackle for a while now.

There is no exact measurement in this case, but you could get an idea by testing yourself on certain types of implementation tasks. For example, check whether you finish similar tasks on average 25% faster over a longer testing period with and without AI. Just the act of timing yourself doing tasks with and without AI may already give a crude indication of the difference.
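As a crude illustration, here is a minimal sketch in Python of what such a self-timed comparison could look like. The task durations are made-up placeholders, not real measurements:

```python
# Minimal sketch: compare self-timed task durations with and without AI.
# The numbers below are placeholder data, not real measurements.
from statistics import mean

durations_min = {
    "with_ai":    [42, 55, 38, 61, 47],   # minutes per comparable task
    "without_ai": [58, 64, 49, 72, 60],
}

avg_with = mean(durations_min["with_ai"])
avg_without = mean(durations_min["without_ai"])
time_saved_pct = (avg_without - avg_with) / avg_without * 100

print(f"avg with AI:     {avg_with:.1f} min")
print(f"avg without AI:  {avg_without:.1f} min")
print(f"avg time saved:  {time_saved_pct:.0f}%")
```

This is obviously crude (it ignores task difficulty, learning effects, and sample size), but it is the kind of quantification that is within reach for an individual.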

You could also run a trial implementing coding tasks like LeetCode problems; however, you will introduce some bias from having solved them previously, and such tasks may not align with your daily activities.

A trial with multiple developers working on the same task pool with or without AI could lead to more substantial results, but you won't be able to do that by yourself.
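For what it's worth, here is a minimal sketch of how such a between-group comparison might be analyzed, using a simple permutation test. The group assignments and timings are entirely hypothetical:

```python
# Minimal sketch: crude between-group comparison for a hypothetical trial
# where developers work the same task pool with or without AI.
# All numbers are made up for illustration.
import random
from statistics import mean

with_ai    = [41, 53, 39, 60, 45, 50]   # minutes per task, AI group
without_ai = [57, 66, 48, 70, 62, 59]   # minutes per task, control group

observed = mean(without_ai) - mean(with_ai)

# Permutation test: how often does a random relabelling of the pooled
# data produce a difference at least as large as the one observed?
pooled = with_ai + without_ai
n = len(with_ai)
trials, hits = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[n:]) - mean(pooled[:n]) >= observed:
        hits += 1

print(f"observed difference: {observed:.1f} min")
print(f"approx. p-value:     {hits / trials:.3f}")
```

Even a small trial like this gives you something to argue about beyond gut feeling, though running it properly requires more developers and tasks than any one person has access to.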

So there seems to be a shared understanding of how difficult "measure your results" would be in this case. Could we also agree, then, that asking someone:

> I wonder if they have measured their results? [...] Can you provide data that objects this view, based on these (celebrity) developers or otherwise?

isn't really fair? Because not even you or I really know how to do so in a fair and reasonable manner, unless we start to involve trials with multiple developers and so on.

  • > isn't really fair?

    We are talking about hearsay, anecdotal evidence from some influential people in the industry. The people mentioned in the comment I responded to have the influence to organize such research. Some measurements (even if not ideal) could at least distinguish a 20x speedup from a 0.1x one.

    I indicated that there is at least some research suggesting that developers (experienced or not) often overestimate the gains of using AI. There are also plenty of other things that may prompt people to make claims about emerging industries, for example investments in the AI industry.

    I am interested in whether the claims are real or perhaps overstated, so I asked what kind of information they are based on. This is how science works, as opposed to marketing: hypotheses lead to experiments that produce measurements, which lead to a conclusion.

    But as of now I still haven't even gotten a link to the statements supposedly made by these influential developers; this is typical of the rhetoric around a lot of AI claims especially. So I remain skeptical of such claims until I see some concrete evidence.

    So I would say yes, it is fair to ask whether they measured their results to back up their claims, especially if they are influential developers.

  • > isn't really fair? Because not even you or I really know how to do so in a fair and reasonable manner, unless we start to involve trials with multiple developers and so on.

    I think in a small conversation like this, it's probably not entirely fair.

    However, we're hearing similar things from much larger organisations that definitely have the resources to do studies like this, and yet there's very little decent work available.

    In fact, a lot of the time they are deliberately misleading people (e.g. "25% of our code is generated by AI", where the "AI" is Copilot or other autocomplete). That 25% stat was probably already true historically with JetBrains products and any form of code generation (for protobufs and the like), so it's wildly deceptive.