Comment by htrp
1 year ago
I'd argue that their own foundation models are being outperformed by the Llama finetunes on HF, and at this point they're shifting cost structures (getting rid of training clusters in favor of hosted inference).