Comment by throwaway314155
17 days ago
Google had a pretty rough start compared to ChatGPT and Claude. I suspect that left a bad taste in many people's mouths, particularly because evaluating so many LLMs is a lot of effort on its own.
Llama and DeepSeek are no-brainers; the weights are public.
A no-brainer if you're sitting on a >$100k inference server.
Sure, that's fair if you're aiming for state-of-the-art performance. Otherwise, you can get close on reasonably priced hardware by using smaller distilled and/or quantized variants of Llama/R1.
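A rough back-of-envelope sketch of why quantized/distilled variants change the hardware picture: weight memory scales as parameters × bits per weight. The figures below are illustrative assumptions (weight storage only; KV cache and activation overhead ignored), not measured requirements.

```python
# Back-of-envelope estimate of GPU memory needed just to hold model weights.
# Assumption: memory = parameter count * bits per weight; runtime overhead
# (KV cache, activations) is deliberately ignored for this sketch.
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Full DeepSeek-R1 (~671B params) served at FP16 vs. a distilled 70B
# variant quantized to 4 bits:
full_r1 = weight_memory_gb(671, 16)    # ~1342 GB -> multi-GPU server territory
distilled = weight_memory_gb(70, 4)    # ~35 GB  -> one or two consumer GPUs
```

The gap (roughly 40x less memory for the distilled, quantized variant) is what makes "reasonably priced hardware" plausible, at the cost of some quality.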
Really though I just meant "it's a no-brainer that they are popular here on HN".
I pay 78 cents an hour to host Llama.
Vast? Specs?