Comment by summarity
3 years ago
It’s what the data from current research indicates: train more and train better; current models are oversized and undertrained. A good foundation model can exhibit massive quality differences with just a tiny bit of quality fine-tuning (e.g. Alpaca vs. Koala).
Personal opinion, not OAI/GH/MSFT’s
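
A minimal sketch (not from the comment itself) of the arithmetic behind the "oversized and undertrained" claim, assuming the Chinchilla heuristic from Hoffmann et al. (2022): compute-optimal training uses roughly 20 tokens per parameter, and transformer training cost is approximately C ≈ 6·N·D FLOPs. The GPT-3 figures below (175B parameters, ~300B training tokens) are public numbers used purely for illustration:

    # Chinchilla rule of thumb: ~20 training tokens per parameter
    # (an approximation, not an exact law).
    TOKENS_PER_PARAM = 20

    def compute_optimal_tokens(n_params: float) -> float:
        """Approximate compute-optimal token count for a model of n_params."""
        return TOKENS_PER_PARAM * n_params

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Standard estimate: ~6 FLOPs per parameter per training token."""
        return 6 * n_params * n_tokens

    # GPT-3: 175B params trained on ~300B tokens -- far below the
    # ~3.5T tokens the heuristic suggests, hence "undertrained".
    gpt3_params = 175e9
    gpt3_tokens = 300e9
    print(f"suggested tokens: {compute_optimal_tokens(gpt3_params):.2e}")  # ~3.5e12
    print(f"training FLOPs:   {training_flops(gpt3_params, gpt3_tokens):.2e}")

Under these assumptions, the same compute budget spent on a smaller model and more tokens yields a better model, which is the argument the comment is gesturing at.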