Comment by fancyfredbot

1 day ago

The article is overly dismissive of the idea that you'd cut clock frequency to boost LLM inference performance by 46%. Yes, it's workload-specific, but the industry is spending tens of billions on running that workload, so it's actually quite smart to focus on it. People will certainly take that trade-off if offered.

Still, it's a good article, and it's nice to see the old AnandTech crew together. The random grammatical errors are still there, but these days they're a reassuring sign that the article was written by hand.