Your productivity has not increased by a few times unless you're measuring purely by lines of code written, which has been firmly established over the decades as a largely meaningless metric.
I needed to track the growth of "tx_ucast_packets" in each queue on a network interface earlier.
I asked my friendly LLM for a script that runs every second and dumps the delta for each queue into a CSV: 10 seconds to describe what I wanted, 5 seconds to run it, then another 10 seconds to have it reformatted after looking at the output.
It had hardcoded the interface, which is what I told it to do, but once I was happy with it and wanted to change the interface, another 5 seconds of typing had it using argparse to take in a bunch of variables.
That task would have taken me far longer than 30 seconds to do 5 years ago.
Now if only AI could reproduce the intermittent problem with packet ordering I've been chasing down today.
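For anyone curious, here's a minimal sketch of what that kind of script can look like. It assumes the per-queue counters show up in `ethtool -S <iface>` output with names like `[0]: tx_ucast_packets: 12345` (the exact naming is driver-specific), and the flag names and CSV layout are my own guesses, not what the LLM actually produced:

```python
#!/usr/bin/env python3
"""Sketch: dump per-queue tx_ucast_packets deltas to a CSV every interval."""
import argparse
import csv
import re
import subprocess
import sys
import time

# Matches per-queue counters such as "[3]: tx_ucast_packets: 12345".
# The stat naming is driver-specific; this pattern is an assumption.
QUEUE_RE = re.compile(r"\[(\d+)\]:\s*tx_ucast_packets:\s*(\d+)")

def read_counters(iface):
    """Return {queue_index: tx_ucast_packets} parsed from `ethtool -S`."""
    out = subprocess.run(["ethtool", "-S", iface],
                         capture_output=True, text=True, check=True).stdout
    return {int(q): int(v) for q, v in QUEUE_RE.findall(out)}

def main():
    ap = argparse.ArgumentParser(
        description="Track per-queue tx_ucast_packets growth on an interface")
    ap.add_argument("interface")
    ap.add_argument("--interval", type=float, default=1.0,
                    help="seconds between samples")
    ap.add_argument("--output", default="tx_deltas.csv")
    args = ap.parse_args()

    prev = read_counters(args.interface)
    queues = sorted(prev)
    with open(args.output, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + [f"queue_{q}" for q in queues])
        while True:
            time.sleep(args.interval)
            cur = read_counters(args.interface)
            # Delta since the previous sample, per queue.
            writer.writerow([time.time()] +
                            [cur.get(q, prev[q]) - prev[q] for q in queues])
            f.flush()
            prev = cur

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        sys.exit(0)
```

Run it as `sudo ./tx_deltas.py eth0 --interval 1` and stop it with Ctrl-C; the point is that it's the kind of throwaway glue an LLM gets right on the first or second try.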
I'm measuring by the amount of time it takes me to write a piece of code that does something I want, like make a plot or calculate some quantity of interest.
Or even the fact that I was able to start coding in an entirely new ML framework right away without reading any documentation beforehand.
I'm puzzled by the denialism about AI-driven productivity gains in coding. They're blindingly obvious to anyone using AI to code nowadays.
> like make a plot or calculate some quantity of interest.
This great comment I saw on another post earlier feels relevant: https://news.ycombinator.com/item?id=46850233