Comment by ilaksh

1 day ago

I think that basically they trained a new model but haven't finished optimizing it or updating their guardrails yet. So they can feasibly give access to some privileged organizations, but they don't have the compute for a wide release until they distill, quantize, bring more hardware online, incorporate new optimization techniques, etc. It just happens to make sense to focus on cybersecurity in the preview phase, especially for public relations purposes.

It would be nice if one of those privileged companies could use their access to start building out a next-level programming dataset for training open models. But I wonder whether they could get away with it. Anthropic is probably monitoring.

I think what they're saying makes a lot of sense. If this can find thousands of vulnerabilities in browsers and OSes, then previewing it gives those companies time to fix the bugs before the model is widely released, if it ever is.