Comment by layer8
2 days ago
> The model will be fully open: source code and weights will be publicly available, and the training data will be transparent and reproducible
This leads me to believe that the training data won’t be made publicly available in full, but will merely be “reproducible”. That might mean they’ll provide references, like a list of URLs of the pages they trained on, but not the contents themselves.
Well, when the actual content runs to hundreds of terabytes, providing URLs may be more practical, both for them and for anyone re-downloading it.
The distinction between content they are allowed to train on and content they are allowed to redistribute copies of is likely at least as relevant.
No problem, we have 25 Gbit/s home internet here. [1]
[1] https://www.init7.net/en/internet/fiber7/
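For rough scale (assuming a hypothetical 300 TB dataset, since the thread doesn't give an actual size), the transfer time at that line rate is on the order of a day:

    # Back-of-the-envelope transfer time. 300 TB is a hypothetical
    # dataset size, not a figure from the thread.
    dataset_bytes = 300e12          # 300 TB
    link_bits_per_sec = 25e9        # 25 Gbit/s
    seconds = dataset_bytes * 8 / link_bits_per_sec
    print(f"{seconds / 3600:.1f} hours")  # ~26.7 hours at full line rate

That's the best case with the link saturated end to end; real throughput from thousands of origin servers would be far lower.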
That wouldn't seem reproducible if the content at those URLs changes. (Er, unless it was all web.archive.org URLs or something.)
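One minimal sketch of a way around that, assuming the release shipped a manifest of (URL, SHA-256) pairs alongside the URL list (the manifest format and the entries here are hypothetical):

    import hashlib
    import urllib.request

    # Hypothetical manifest: each training page recorded as
    # (url, sha256 of its bytes at training time). The hash below
    # is a placeholder, not a real recorded value.
    manifest = [
        ("https://example.com/page",
         "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"),
    ]

    def verify(url: str, expected_sha256: str) -> bool:
        # Re-fetch the page and check it still matches the recorded hash.
        with urllib.request.urlopen(url) as resp:
            digest = hashlib.sha256(resp.read()).hexdigest()
        return digest == expected_sha256

    for url, expected in manifest:
        status = "ok" if verify(url, expected) else "CHANGED OR GONE"
        print(url, status)

Any page whose current bytes no longer hash to the recorded value gets flagged, which is exactly the failure mode described above; archive.org snapshot URLs sidestep it by pinning a timestamp instead.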
This is a problem with the Web in general. It should be easier to download content the way you'd pull updates to a git repo.
Yeah, I suspect you're right. Still, even a list of URLs for a frontier model (assuming it does turn out to be at that level) would be an improvement over the current situation.