Comment by latexr
2 days ago
> There are always tar pits of time where you are no better off with AI, but sometimes it's 20x.
This is an absurd way to measure. You can’t in good faith claim a 20x improvement if it only happens “sometimes” and other times the tool is a time sink.
The more detail you provide in this thread, the clearer it becomes that your assessment lands somewhere between disingenuous and delusional.
How do you measure a 20x improvement when someone claims it?
Does it mean delivering the same amount of code in the same time with 20x fewer bugs?
Or the same quality of code in 20x less time?
Or 10x fewer bugs in 2x less time?
An honest measurement considers the aggregate, not a single data point.
If you had a hammer which could drive a nail through a plank 20x faster, but which took 60x longer to prepare before each strike, claiming a 20x gain would be disingenuous.
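The hammer analogy is just arithmetic. A quick sketch with made-up numbers (1 s of prep and 2 s per strike for the ordinary hammer, all figures hypothetical) shows how a 20x faster strike paired with 60x longer prep is a net loss:

```python
# Hypothetical illustration of the hammer analogy: a 20x faster strike
# can still be a net slowdown if per-strike preparation balloons 60x.
# All numbers below are made-up assumptions for the sake of arithmetic.

def total_time(prep_s: float, strike_s: float, nails: int) -> float:
    """Total seconds to drive `nails` nails, paying prep before each strike."""
    return nails * (prep_s + strike_s)

nails = 100
ordinary = total_time(prep_s=1.0, strike_s=2.0, nails=nails)        # 100 * 3.0  = 300 s
magic = total_time(prep_s=60.0, strike_s=2.0 / 20, nails=nails)     # 100 * 60.1 = 6010 s

speedup = ordinary / magic  # ~0.05x: the "20x hammer" is 20x slower overall
print(f"ordinary: {ordinary:.0f}s, magic: {magic:.0f}s, net speedup: {speedup:.2f}x")
```

With these numbers the headline "20x" per strike collapses to roughly a 0.05x aggregate, i.e. a 20x slowdown.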
The problem is that AI leads to extremely bimodal distribution of improvement.
Sometimes it doesn't help at all. Other times it spits out several hours of work in seconds.
It's like asking for the weighted average of 1 and infinity. Even if you can count how many 1s and how many infinities there are, the answer will always be nonsensical.
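Even setting the "infinity" rhetoric aside, a time-weighted aggregate behaves very differently from a naive average of per-task speedups. A sketch with invented numbers (nine tasks that see no help, one five-hour task done "in seconds", which we call a 1000x speedup):

```python
# Hypothetical numbers: a bimodal tool that does nothing for most tasks
# but occasionally produces hours of work in seconds. The honest measure
# is total-time-saved, which the slow (unhelped) tasks dominate.

baseline_hours = [1.0] * 9 + [5.0]      # time per task without the tool
speedups = [1.0] * 9 + [1000.0]         # per-task speedup with the tool

with_tool = sum(t / s for t, s in zip(baseline_hours, speedups))
aggregate = sum(baseline_hours) / with_tool          # time-weighted speedup

naive_avg = sum(speedups) / len(speedups)            # misleading "average of speedups"
print(f"aggregate: {aggregate:.2f}x, naive average of speedups: {naive_avg:.1f}x")
```

With these assumptions the naive average of speedups is about 100.9x, while the honest time-weighted aggregate is only about 1.55x: the rare huge wins barely move the total, because the unhelped tasks still take as long as they ever did.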