Comment by kenforthewin
2 months ago
I find the cost discussion increasingly tedious. It would be a more compelling line of argument if we didn't have highly effective open-weight models like qwen3-coder, glm 4.7, etc., which let us directly measure the cost of running inference with large models without confounding factors like VC money. Regardless of the cost of training, the models that exist right now are cheap and effective enough to push the conversation right back to "quibbling about the exact degree of utility LLMs provide".