Comment by 0_____0

9 hours ago

It's unlikely that current LLMs are conscious, but where the boundary of consciousness lies for these machines is a slippery problem. Can a machine have experiences with qualia? How will we know if one does?

So we have a few things happening: a poor ability to understand the machines we're building, the potential for future machine consciousness with no way to detect it, and the knowledge that the torrent of would-be psychological torture people subject LLMs to would represent immense harm if the machines are, in fact, conscious.

If you wait for real evidence of harm to conscious entities before acting, you will be too late. I think it's actually a great time to think about this type of harm, for two reasons: first, there is little chance that current LLMs are conscious, so the fix arrives early enough; and second, it will train users out of practising and honing psychological torture methods, which is probably good for the world generally.

The HN angst here seems sort of reflexive. A company limits its product so it can't be used in a sort of fucked up way, and folks get their hackles up because they think the company might limit other functionality that they actually use (I suspect most HNers aren't attempting to psychologically break their LLMs). The LLM vendors have a lot of different ways to put guardrails up, ideological or not (see Deepseek); they don't need this specific method to get their LLMs to "rightthink."