Comment by bayindirh
6 days ago
You are right, but with a good optimizing compiler and out-of-order execution, your code will not execute the way you expect most of the time, even though it accomplishes what you want.
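
For example, here is a minimal sketch (mine, not from the original comment; the exact transformations depend on the compiler, flags, and target): at -O2 or -O3, GCC and Clang will typically unroll and vectorize this loop, and the out-of-order core will overlap the loads and additions at runtime, so the machine-level execution bears little resemblance to the statement-by-statement C source, yet the result is still what you asked for.

    #include <stddef.h>
    #include <stdint.h>

    /* Naive summation, written as "add the elements one by one, in order".
     * An optimizing compiler is free to unroll and vectorize this, and an
     * out-of-order CPU will overlap the loads and adds anyway, so what the
     * machine actually does is quite different from the C statement order. */
    uint64_t sum(const uint32_t *a, size_t n) {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

Comparing the -O0 and -O3 assembly output (for instance on Compiler Explorer) makes the gap between "what you wrote" and "what actually runs" quite obvious.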
On the other hand, while doing high-performance computing, the processor will try to act smart to keep everything saturated. As a result, you still need to look at the cache miss ratio, IPC, instruction retirement rate, etc. to see whether you are using the system at its peak performance. The CPU does its part to keep those numbers high, but that's not enough, of course; you have to do your own part and write good code.
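
As an illustration of why those counters matter (a sketch I'm adding, not part of the original comment): the two functions below compute the same sum, but the column-major version walks memory with a large stride, so a hardware-counter profile (for instance with Linux perf) will typically show a much higher cache miss ratio and lower IPC for it, even though the compiler and the out-of-order core are doing their best in both cases.

    #include <stddef.h>
    #include <stdio.h>

    #define N 2048
    static double m[N][N];

    /* Row-major traversal: consecutive accesses stay within the same cache lines. */
    static double sum_rows(void) {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += m[i][j];
        return s;
    }

    /* Column-major traversal: each access strides N * sizeof(double) bytes,
     * so the cache miss ratio (and usually the wall-clock time) is far worse. */
    static double sum_cols(void) {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += m[i][j];
        return s;
    }

    int main(void) {
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                m[i][j] = 1.0;
        printf("%f %f\n", sum_rows(), sum_cols());
        return 0;
    }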
In cases where you share the machine (which can be a cluster node or a mobile phone), maximizing this performance is again beneficial, since it allows smoother operation for both your code and other users' code in general. Trying to saturate the system with your process is a completely different thing, but you don't have to do that to write nice, performant code.
GPU computation is nice, and you can do big things fast, but it's not suitable for optimizing and offloading every kind of task. Even when a task is suitable for the GPU, the scale of the computation still matters, because a competent programmer can fit billions of computations on the CPU in the time it takes a GPU to even start running your kernel. The launch and transfer overhead is just too big.
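
To make that overhead argument concrete, here is a back-of-envelope sketch; the launch latency, transfer size, PCIe bandwidth, and CPU throughput are assumed, order-of-magnitude numbers, not measurements:

    /* Back-of-envelope: how much work can the CPU do in the time it takes
     * just to get a kernel and its data onto the GPU? All numbers below
     * are rough, illustrative assumptions, not measurements. */
    #include <stdio.h>

    int main(void) {
        const double launch_latency_s = 10e-6;  /* ~10 us kernel launch (assumed) */
        const double bytes_to_copy    = 256e6;  /* 256 MB of input data (assumed) */
        const double pcie_bw_bytes_s  = 12e9;   /* ~12 GB/s host-to-device transfer (assumed) */
        const double cpu_flops_per_s  = 100e9;  /* ~100 GFLOP/s for a vectorized CPU loop (assumed) */

        double overhead_s = launch_latency_s + bytes_to_copy / pcie_bw_bytes_s;
        double cpu_ops_in_overhead = overhead_s * cpu_flops_per_s;

        printf("Offload overhead: %.3f ms\n", overhead_s * 1e3);
        printf("CPU operations that fit in that window: %.2e\n", cpu_ops_in_overhead);
        return 0;
    }

With these assumptions the offload costs roughly 20 ms before the kernel does any useful work, which is on the order of a few billion CPU operations; the balance obviously shifts in the GPU's favor for large, compute-dense kernels that amortize the transfer.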
Knuth's full quote (including the part about not passing up our opportunities in that critical 3%) doesn't actually invalidate my point, because that's how I operate while writing code, whether it's designed for high performance or not.