Cautious for what? Unchecked doomerism? Just release the damn models. Do it in phases, roll it out slowly if they are so damn worried about "safety".
The real reason they aren't releasing it yet is probably that it eats TPUs for breakfast, lunch, and dinner, and everything in between.
> Cautious for what?
How about "bad agents acquiring dozens of new zero-days and using them to compromise any company or nation they want"? It's not exactly hard to see why you wouldn't want public access to a model significantly better than Opus in cybersecurity.
Bad agents already have dozens of zero-days they can use.
Being cautious is fine. Farming hype around something that may as well not exist for us should be discouraged. I do appreciate the research outputs.
Don't worry, in 6-8 months the open models will catch up. Or I guess _do_ worry? ;)
Open models still haven't caught up to ChatGPT's initial release in 2022. Now that the training data is so contaminated (the internet is now mostly LLM slop), they may never.
Also, OpenAI's only real moat used to be the quality of their training data from scraping the pre-GPT-3.5 internet, but it looks like even they've lost that edge too.