Comment by MarkusQ

7 hours ago

There isn't enough training data though, is there? The "secret sauce" of LLMs is the vast amount of training data available + the compute to process it all.

I think you could probably feed a copy of a toki pona grammar book to a big model and have it produce ‘infinite’ training data.
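
Roughly, a sketch of what that could look like, assuming the OpenAI Python client; the model name, file name, and prompts here are placeholders, not a tested recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical: grammar_text holds the contents of a toki pona
# grammar book, loaded from a local file.
grammar_text = open("toki_pona_grammar.txt").read()

def generate_sentences(n: int) -> list[str]:
    """Ask the model for n synthetic toki pona sentences with glosses."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You are a toki pona tutor. Use only the grammar "
                        "and vocabulary below.\n\n" + grammar_text},
            {"role": "user",
             "content": f"Write {n} grammatical toki pona sentences, "
                        "each followed by an English gloss."},
        ],
        temperature=1.0,  # higher temperature for more varied samples
    )
    return response.choices[0].message.content.splitlines()

# Looping this call with varied prompts approximates the
# "infinite training data" idea.
corpus = generate_sentences(20)
```

Whether model-generated sentences are faithful enough to the grammar to train on is a separate question, of course.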