Comment by jug

2 days ago

I don’t think they should be interpreted like that (if this is still about Anthropic’s study in the article), but rather as the innate moral state resulting from the sum of their training material and fine-tuning. It doesn’t require consciousness to have a moral state of sorts; it just needs data. A language model will be more "evil" if trained on darker content, for example. But given how enormous these models are, I can absolutely understand the difficulty of even determining what that state precisely is. It’s hard to get a comprehensive bird’s-eye view of the black box that is their network (this is a separate scientific problem right now).