Comment by ks2048

2 days ago

You could be right about this being an excuse for some other reason, but lots of software has “safety tests” beyond life-or-death situations.

Most companies, for better or worse (I say for better), don’t want their new chatbot to be a RoboHitler, for example.

It is possible to turn any open-weight model into that with fine-tuning. It is likely possible to do the same with closed-weight models, even when there is no creator-provided sandbox for fine-tuning them, through clever prompting and repeated attempts. It is unfortunate, but there really is no avoiding that.

That said, I am happy to accept the term safety as used in other places, but here it just seems like a marketing term. From my recollection, OpenAI pushed for regulation that would stifle competition by framing these systems as dangerous and in need of safety measures. They then backtracked somewhat when they found that the proposed regulations would restrict them rather than just their competitors. However, they are still pushing a safety narrative that was never really appropriate. They already have a term for this: alignment. What they are actually doing are tests to verify alignment in areas they deem sensitive, so that they have a rough idea of the extent to which the outputs might contain things they do not like in those areas.