Comment by kart23

1 month ago

humanity is done if we think one bit about AI wellbeing instead of actual people's wellbeing. There is so much work to do helping real human suffering; putting any resources toward treating computers like humans is unethical.

What makes you think that caring about the wellbeing of one kind of entity is incompatible with caring about another kind?

If anything, the two are probably highly correlated, just as they are with concern for animals.

No, an LLM isn't a human and doesn't deserve human rights.

No, it isn't unreasonable to broaden your perspective on what counts as a thinking (or feeling) being, and on what can experience states we might characterize that way.