Comment by impulser_

1 day ago

So they are only giving corporations access to their smartest model.

You think these AI companies are really going to give AGI access to everyone. Think again.

We better fucking hope open source wins, because we aren't getting access if it doesn't.

This story has been played out numerous times already. Anthropic (or any frontier lab) has a new model with SOTA results. It pretends like it's Christ incarnate and represents the end of the world as we know it. Gates its release to drum up excitement and mystique.

Then the next lab catches up and releases it more broadly.

Then later the open weights model is released.

The only way this type of technology stays gated "to only corporations" is if the exponential scaling trend continues, so that the SOTA model is always out of reach.

  • I don't know how you can read the report and the companies involved and dismiss this as hot air. What incentive does the Linux Foundation have to hype up Mythos? What about Apple?

    How can you read the description of the exploits and be like "yeah that's nbd?"

    And the only reason OSS has ever caught up is because they simply distill Claude or GPT. The day the big players make it hard to distill (like Anthropic is doing here), OSS is cooked.

    And that's a good thing, why would you want random skiddie hackers to have access to a cyber super weapon?

    • No, that’s a terrible thing and random skiddie hackers absolutely should. This is only a temporary state of insecurity as these vulnerability scanners come online.

      If this stuff is open source and not gatekept, it will become standard practice to run LLM security analysis on every commit, and software will no longer be vulnerable to these classes of attack.


It also took many years to put capable computers in the hands of the general public, but it eventually happened. I believe the same will happen here, we're just in the Mainframe era of AI.

  • Yeah, but computers don't replace you. They are building AI to replace you. You think that if these companies eventually achieve AGI, they are going to give you access to it? They are already gatekeeping an LLM because they don't trust you with it.

Would you rather it were released today, so that evil actors could invest a few million to search for 0-days across popular open-source repos?

Of course they're not giving access to everyone.

Better for them to make billions directly from corporations than to give it to average people who might get a chance out of poverty (but also to bad actors who would use it to do even worse things).

  • Anthropic's definition of "safe AI" precludes open-source AI. This is clear if you listen to what its CEO says in interviews; I think he might even prefer OpenAI's closed-source models winning over having open-source AI (because at least then it's not a free-for-all).