Comment by ACCount37
5 hours ago
Major incentives currently in play are "PR fuckups are bad" and "if we don't curb our shit, regulators will". Which often leads to things like "AI safety is when our AI doesn't generate porn when asked, and refuses to say anything the media would be able to latch on to".
The rest is up to the companies themselves.
Anthropic seems to walk the talk, and has supported some AI regulation in the past. OpenAI and xAI don't want regulation to exist, and aren't shy about it. OpenAI tunes very aggressively against PR risks, xAI barely cares; Google and Anthropic are much more balanced, although they lean towards heavy-handed and loose respectively.
China is its own basket case of "alignment is when what AI says is aligned to the party line", which is somehow even worse than the US side of things.