Comment by version_five, 3 years ago: They could clean up the training data, I bet. That would be where I'd focus next.
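To make the "clean up the training data" idea concrete, here is a hypothetical sketch of the kind of corpus hygiene usually meant by it: exact/near-duplicate removal plus crude quality filters over a pile of source files. The function names, thresholds, and heuristics are invented for illustration and say nothing about OpenAI's or GitHub's actual pipeline.

    # Hypothetical training-data cleanup: dedup + simple quality filters.
    # Heuristics and thresholds are made up for illustration only.
    import hashlib
    import re

    def normalize(source: str) -> str:
        """Collapse whitespace so trivial formatting differences don't defeat dedup."""
        return re.sub(r"\s+", " ", source).strip()

    def is_low_quality(source: str) -> bool:
        """Drop files that look minified, binary-ish, or auto-generated."""
        lines = source.splitlines()
        if not lines:
            return True
        max_line_len = max(len(line) for line in lines)
        alpha_ratio = sum(c.isalnum() or c.isspace() for c in source) / max(len(source), 1)
        return max_line_len > 1000 or alpha_ratio < 0.6 or "auto-generated" in source.lower()

    def clean_corpus(files: list[str]) -> list[str]:
        """Keep one copy of each normalized file that passes the quality filters."""
        seen: set[str] = set()
        kept: list[str] = []
        for source in files:
            if is_low_quality(source):
                continue
            digest = hashlib.sha256(normalize(source).encode()).hexdigest()
            if digest in seen:
                continue
            seen.add(digest)
            kept.append(source)
        return kept

    if __name__ == "__main__":
        corpus = [
            "def add(a, b):\n    return a + b\n",
            "def add(a, b):\n        return a + b\n",  # near-duplicate (whitespace only)
            "x" * 5000,                                 # junk: one enormous line
        ]
        print(len(clean_corpus(corpus)))  # -> 1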
Reply by mathematicaster, 3 years ago: Is there any indication from OpenAI people that there is low-hanging fruit to be picked in this direction?
Reply by summarity, 3 years ago: That's the indication from current research: train more and better; current models are oversized and undertrained. A good foundation model can exhibit massive quality differences with just a tiny bit of quality fine-tuning (e.g. Alpaca vs. Koala).
Personal opinion, not OAI/GH/MSFT’s
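For readers unfamiliar with the reference: Alpaca (Stanford) and Koala (Berkeley) are both instruction-tuned variants of the LLaMA foundation model, and the claim above is that a modest supervised fine-tuning pass on good instruction data can shift quality substantially. Below is a minimal, hypothetical sketch of that kind of Alpaca-style supervised fine-tuning with Hugging Face transformers; the model name, toy dataset, and hyperparameters are placeholders, not what either project actually used.

    # Minimal supervised fine-tuning sketch (Alpaca-style). Placeholder model,
    # toy data, and hyperparameters; real runs use a large base model and
    # tens of thousands of instruction/response pairs.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in for a real foundation model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Toy "instruction" dataset in the usual prompt/response format.
    examples = [
        "### Instruction: Reverse the string 'abc'.\n### Response: 'cba'",
        "### Instruction: Add 2 and 3.\n### Response: 5",
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for epoch in range(3):
        for text in examples:
            batch = tokenizer(text, return_tensors="pt", padding=True)
            # Standard causal-LM objective: labels are the inputs themselves.
            outputs = model(**batch, labels=batch["input_ids"])
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()

    model.save_pretrained("tiny-finetuned-model")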