Comment by TeMPOraL
25 days ago
Don't let the anti-AI propaganda get to you too much. Inference is cheap on the margin.
Consider: there are models capable (if barely) of doing this job that you can run locally, on an upper-mid-range PC with a high-end consumer GPU. Take that as a baseline, assume it takes a day instead of an hour because of inference speed, and tally up the total electricity cost. It's not much. It won't boil oceans any more than people playing AAA video games all day will.
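A minimal sketch of that tally. All the numbers here are assumptions for illustration (system power draw, runtime, electricity price), not measurements:

```python
# Back-of-envelope cost of a full day of local inference.
# Assumed figures, not measured: a gaming-class PC with a
# high-end consumer GPU drawing ~600 W at the wall under load.
gpu_box_watts = 600    # assumed whole-system draw, watts
hours = 24             # a full day, per the comment's worst case
price_per_kwh = 0.30   # assumed electricity price, USD/kWh

energy_kwh = gpu_box_watts * hours / 1000  # 14.4 kWh
cost_usd = energy_kwh * price_per_kwh      # ~4.3 USD

print(f"{energy_kwh:.1f} kWh ≈ ${cost_usd:.2f}")  # → 14.4 kWh ≈ $4.32
```

Even with pessimistic assumptions (full load all day, a fairly high power price), the result is a few dollars, i.e. comparable to a day of heavy gaming on the same hardware.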
Sure, the big LLMs from SOTA vendors use more GPUs/TPUs for inference, but that also means they finish much faster. Plus, commercial vendors have lots of optimizations (batch processing, large caches, etc.), and data centers are much more power-efficient than your local machine, so "how much it'd add to my power bill if I did it locally" is a good starting estimate.