Comment by xpe

1 day ago

I think you are probably confused about the general characteristics of the AI safety community. It is uncharitable to reduce their work to a demeaning catchphrase.

I’m sorry if this sounds paternalistic, but your comment strikes me as incredibly naïve. I suggest reading up on nuclear nonproliferation treaties, biotechnology agreements, and so on to get some grounding in how civilization-impacting technological developments can be handled in collaborative ways.

I have no doubt the "AI safety community" likes to present itself as noble people heroically fighting civilizational threats, which is a common trope (as is the rogue-AI hypothesis, which increasingly looks like a huge stretch at best). But the reality is that they are becoming the main threat much faster than the AI is. They decide how to gatekeep a technology that is starting to define the lives of people and entire societies, and they use that position to push narratives. This can definitely be viewed as censorship and the manufacturing of consent.

Who are they? In what exact ways do they represent the interests of anyone other than themselves? How are they held accountable? Is there a feedback loop keeping them in line with people's values rather than their own? How is it enforced?