
Comment by version_five

3 years ago

They could clean up the training data, I bet. That's where I'd focus next.

Is there any indication from OpenAI people that there is low-hanging fruit to be picked in this direction?

  • It’s the direction current research points to: train longer on more and better data; current models are oversized and undertrained. A good foundation model can exhibit massive quality differences with just a tiny bit of quality fine-tuning (e.g. Alpaca vs. Koala). See the rough sketch below for what “undertrained” means in practice.

    Personal opinion, not OAI/GH/MSFT’s
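
    For a rough sense of the “oversized and undertrained” claim, here is a minimal sketch assuming the Chinchilla-style rule of thumb of roughly 20 training tokens per parameter; the 175B-parameter / 300B-token figures are GPT-3-scale numbers used purely for illustration, not anything specific to OpenAI’s current training runs:

    ```python
    # Rough illustration of "oversized and undertrained", assuming the
    # Chinchilla-style heuristic of ~20 training tokens per parameter.
    # The GPT-3-scale numbers below are for illustration only.

    def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
        """Approximate compute-optimal number of training tokens for a given model size."""
        return n_params * tokens_per_param

    n_params = 175e9        # hypothetical 175B-parameter model
    trained_tokens = 300e9  # trained on ~300B tokens (GPT-3-scale)

    optimal = chinchilla_optimal_tokens(n_params)
    print(f"Heuristic-optimal tokens: ~{optimal / 1e12:.1f}T")
    print(f"Actually trained on:      ~{trained_tokens / 1e9:.0f}B "
          f"(~{trained_tokens / optimal:.0%} of that)")
    ```

    Under that heuristic, a model of that size would want several trillion tokens, so training it on a few hundred billion leaves a lot of headroom from simply training longer on more (and better-curated) data.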