Comment by tuckerman
19 hours ago
I agree that there is a lot of hyperbole thrown around here, and it's possible to keep using some hardware for a long time or to sell it and recover some of the cost, but my experience planning compute at large companies is that spending money on new hardware and upgrading can often save money in the long term.
Even assuming your compute demands stay fixed, it's possible that a future generation of accelerator will be sufficiently more power- and cooling-efficient for your workload that upgrading is a positive return on investment, more so when you take into account that you can start depreciating the hardware again.
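As a rough sketch of that break-even math (every number below is an assumption I'm making up for illustration, not a real price or spec, and it leaves out depreciation and the freed rack/power capacity, which can dominate):

  # Hypothetical figures only: suppose one new accelerator replaces
  # four older cards for the same (fixed) throughput.
  old_cards = 4
  old_watts, new_watts = 400.0, 700.0   # per-card draw, assumed
  kwh_cost = 0.15                       # assumed all-in power + cooling $/kWh
  new_card_price = 25_000.0             # assumed purchase price
  resale_per_old_card = 2_000.0         # assumed resale value per retired card
  hours_per_year = 24 * 365

  annual_savings = (old_cards * old_watts - new_watts) / 1000 * hours_per_year * kwh_cost
  net_upfront = new_card_price - old_cards * resale_per_old_card
  print(f"annual power/cooling savings: ${annual_savings:,.0f}")
  print(f"net upfront cost after resale: ${net_upfront:,.0f}")
  print(f"simple payback on power alone: {net_upfront / annual_savings:.1f} years")

Whether it pencils out depends entirely on those assumed numbers plus the depreciation and capacity points, but that's the shape of the calculation.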
If your compute demands aren't fixed, you have to work around limited floor space/electricity/cooling capacity/network capacity/backup generators/etc., so moving to the next generation is often required to meet demand without extremely expensive (and often slow) infrastructure projects.
Sure, but I don't think most people here are objecting to the obvious "3 years is enough for enterprise GPUs to become totally obsolete for cutting-edge workloads" point. They're just objecting to the rather bizarre notion that the hardware itself might physically break in that timeframe. Now, it would be one thing if that notion were supported by actual reliability studies drawn from that same environment, like we see in the Backblaze HDD lifecycle analyses. But instead we're just getting these weird rumors.
I agree that's a strange notion that would require some evidence, and I do see it in some other threads, but looking up through the parent comments, it seems people here are discussing economic usefulness, so that's what I'm responding to.