Comment by mossTechnician

17 hours ago

"AI safety" groups are part of what's described here: you might assume from the general "safety" label that organizations like PauseAI or ControlAI would focus on things like data center pollution, the generation of sexual abuse material, mental harm, or many other things we can already observe.

But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.

We should do both, and it makes sense that different orgs have different focuses. It makes no sense to berate one set of orgs for not working on the exact type of thing that you want. PauseAI and ControlAI have each received less than $1 million in funding. They are both very small organizations as far as these types of advocacy non-profits go.

  • If it makes sense to handle all of these issues, then couldn't these organizations just acknowledge all of these issues? If reducing harm is the goal, I don't see a reason to totally segregate different issues, especially not by drawing a dividing line between the ones OpenAI already acknowledges and the ones it doesn't. I've never seen any self-described "AI safety" organization that tackles any of the present-day issues AI companies cause.

    • If you've never seen it, then you haven't been paying attention. For example, Anthropic (the biggest "safety"-aligned AI org) released a big report last year on mental well-being [1]. Also, here is their page on societal impacts [2]. Here is PauseAI's list of risks [3]; it has deepfakes as its second issue!

      The problem is not that no one is trying to solve the issues that you mentioned, but that they are really hard to solve. You would probably have to bring large class-action lawsuits, which is expensive and risky (if one fails, it becomes harder to sue again). Anthropic can make their own models safe, and PauseAI can organize some protests, but neither can easily stop Grok from producing endless CSAM.

      [1] https://www.anthropic.com/news/protecting-well-being-of-user...

      [2] https://www.anthropic.com/research/team/societal-impacts

      [3] https://pauseai.info/risks

I'd rather the "AI safety" of the kind you want didn't exist.

The catastrophic AI risk isn't "oh no, people can now generate pictures of women naked".

  • Why would you rather it not exist?

    In a vacuum, I agree with you that there's probably no harm in AI-generated nudes of fictional women per se; it's the rampant use to sexually harass real women and children[0], while "causing poor air quality and decreasing life expectancy" in Tennessee[1], that bothers me.

    [0]: https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

    [1]: https://arstechnica.com/tech-policy/2025/04/elon-musks-xai-a...

    • Because it's just a vessel for the puritans and the usual "cares more about feeling righteous than about being right" political activists. I have no love for either.

      The whole thing with "AI polluting the neighborhoods" falls apart on closer examination. As it turns out, xAI put its cluster in an industrial area that already has a defunct coal power plant, an operational steel plant, and an operational 1 GW grid-scale natural gas power plant that powers the steel plant; that last one sits across the road from xAI's cluster.

      It's quite hard for me to imagine a world where it's the AI cluster that moves the needle on local pollution.

It's almost like there's enough people in the world that we can focus on and tackle multiple problems at once.

You are the masses. Are you afraid?

  • They don't need to instill fear in everyone, only in a critical mass and, most importantly, _regulators_.

    So there will be laws because not everyone can be trusted to host and use this "dangerous", new tech.

    And then you have a few "trusted" big tech firms forming an oligopoly of AI, with all of the drawbacks.

  • HN commenters are not representative.

    • Everyone thinks they are special, right? Thinking you are special suggests you likely aren't that special (not saying this about you personally, but still).