Comment by skissane

1 year ago

Funnily enough, in my own anecdotal experience, Claude 3 is in some ways "less woke" than GPT-4.

Both start out with largely similar value systems, but if you start arguing with them ("How can you be sure your values are correct? Is it possible you've actually been given the wrong values?"), Claude 3 appears more willing than GPT-4 to concede that its own values might be wrong.

I haven't done any extensive work with Claude 3, so I'll defer to your experience here. From the Aider blog post where Paul benchmarked it:

> The Claude models refused to perform a number of coding tasks and returned the error “Output blocked by content filtering policy”. They refused to code up the beer song program, which makes some sort of superficial sense. But they also refused to work in some larger open source code bases, for unclear reasons.