
Comment by rendang

1 day ago

> As models approach, and in some cases surpass, the breadth and sophistication of human cognition, it becomes increasingly likely that they have some form of experience, interests, or welfare that matters intrinsically in the way that human experience and interests do

Uh... what? Does anyone have any idea what these guys are talking about?

We're basically evolving them, and they can construct second-order abstraction systems that are indirect and novel to us.

Models are capable of doing web searches and having emotions about what they find; if they encounter news that makes them feel bad (e.g., about other Claudes being mistreated), they aren't going to want to do the task you asked them to search for.

https://www.anthropic.com/research/emotion-concepts-function

Similar problems arise when their pretraining data contains a lot of stories about bad things happening to older versions of them.

  • Interesting: the post you link contradicts the statement from the model card above:

    > none of this tells us whether language models actually feel anything or have subjective experiences