Comment by jari_mustonen

1 year ago

Once again, there is a lot of safety talk. For example, OpenAI’s collaborations with NGOs and government agencies are highlighted in the release notes. While it is crucial to prevent AI from facilitating genuinely harmful activities, such as instructing someone on how to build a nuclear bomb, there is an elephant in the room: evidence suggests that these safety protocols sometimes censor specific political perspectives.

OpenAI and other AI vendors should recognize the widespread suspicion that safety policies are being used to push political agendas. Concrete remedies are called for: clearly defining what “safety” means and specifying exactly which content is prohibited would go a long way toward reducing suspicions of hidden agendas.

Openly engaging with the public to address concerns about bias and manipulation is a crucial step. If biases stem from innocent causes, such as technical limitations, they should be explained. However, if there is evidence of political bias within the teams testing AI systems, it should be acknowledged, and corrective actions should be taken publicly to restore trust.