
Comment by azath92

4 hours ago

I am continually surprised by the reference to "voluntary actions taken by companies" being brought up in discussions of the risks of AI without any nuance about why companies would take those actions. The paragraph on surgical action goes into about 5-10 times more detail on the potential issues with gov't regulation, implying to me that voluntary action is better. Even from someone at Anthropic, I would hope for more discussion of that.

I am genuinely curious to understand the incentives for companies who have the power to mitigate risk to actually do so. Are there good past examples of companies taking action that hurt their bottom line in order to mitigate the harm their products pose to society? My premise is that their primary motive is profit/growth, which is dictated by revenue for mature companies and by investment for growth companies (collectively, the "bottom line").

I'm only in my mid-30s, so I don't have much perspective on past examples of voluntary action of this sort by tech or pre-tech corporations where there was concern of harm. Probably too late in this thread for replies, but I'll think about it for the next time this comes up.

The major incentives currently in play are "PR fuckups are bad" and "if we don't curb our shit, regulators will", which often leads to things like "AI safety is when our AI doesn't generate porn when asked and refuses to say anything the media could latch on to".

The rest is up to the companies themselves.

Anthropic seems to walk the talk and has supported some AI regulation in the past. OpenAI and xAI don't want regulation to exist and aren't shy about it. OpenAI tunes very aggressively against PR risks, xAI barely cares, and Google and Anthropic are much more balanced, though they lean heavy-handed and loose respectively.

China is its own basket case of "alignment is when what the AI says is aligned to the party line", which is somehow even worse than the US side of things.