Comment by kmacdough

2 years ago

I'm seeing a lot of criticism suggesting that one company understanding safety won't change what other companies or countries do. This is very wrong.

Throughout history, measurement has been the key to enforcement. The only reason the 1963 Partial Test Ban Treaty didn't cover underground tests was that they couldn't be reliably monitored at the time.

In the current landscape there is no formal understanding of what safety means or how it is achieved, and no benchmark against which to evaluate ambitious orgs like OpenAI. If anything goes wrong, the defense is always the same: no one could've known better.

The mere existence of a formal understanding would enable governments and third parties to evaluate the safety of corporate and government AI programs.

It remains to be seen whether SSI can provide such a benchmark. But dismissing the effort outright ignores how enforcement works in the real world.

> In the current landscape there is no formal understanding of what safety means or how it is achieved, and no benchmark against which to evaluate ambitious orgs like OpenAI. If anything goes wrong, the defense is always the same: no one could've known better.

We establish this regularly in the legal sphere, where people seek redress for harms caused by systems they neither control nor bear liability for.