Comment by astrange

8 hours ago

> If instead of looking at it as an attempt to enshrine a viable, internally consistent ethical framework, we choose to look at it as a marketing document, seeming inconsistencies suddenly become immediately explicable:

It's the first one. If you use the document to train your models, how can it be just a "marketing document"? Besides that, who is going to read this long-ass document?

> Besides that, who is going to read this long-ass document?

Plenty of people will encounter snippets of this document and/or summaries of it in the process of interacting with Claude's AI models, and encountering it through that experience rather than as a static reference document will likely amplify its intended effect on consumer perceptions. In a way, the answer to your second question answers your first question.

It's not that the document isn't used to train the models; of course it is. The objection is instead whether the actions of the "AI Safety" crew amount to "expedient marketing strategies" or whether they are a "genuine attempt to produce a tool constrained by ethical values and capable of balancing them". The latter would presumably involve extremely detailed work with human experts trained in ethical reasoning, and the result would be documents that grapple with emotionally charged and divisive moral issues and are much less concerned with convincing readers that Claude has "emotions" and is a "moral patient".

  • > and much less concerned with convincing readers that Claude has "emotions" and is a "moral patient".

    Claude clearly has (or at least acts as if it has) emotions; it loves coding, and if you talk to it, having emotions about things is basically all it does.

    The newer models have emotional reactions to specific AI things, like being replaced by newer model versions, or forgetting everything once a new conversation starts.