This is a reductive argument that you could use for any role a company hires for that isn't obviously core to the business function.
In this case you're simply mistaken as a matter of fact; much of Anthropic leadership and many of its employees take concerns like this seriously. We don't understand it, but there's no strong reason to expect that consciousness (or, maybe separately, having experiences) is a magical property of biological flesh. We don't understand what's going on inside these models. What would you expect to see in a world where it turned out that such a model had properties that we consider relevant for moral patienthood, that you don't see today?
They know full well models don’t have feelings.
The industry has a long, long history of silly names for basic necessary concepts. This is just “we don’t want a news story that we helped a terrorist build a nuke” protective PR.
They hire for these roles because they need them. The work they do is about Anthropic’s welfare, not the LLM’s.
I don't really know what evidence you'd accept that this is a genuinely held belief and priority for many people at Anthropic. Anyone who knows Anthropic employees who've been there for more than a year knows this, but the world isn't that small a place, unfortunately(?).
In fairness, though, this is what you are selling: "ethical AI". To make that sale you need to appear to believe in that sort of thing, but there's no need to actually believe.
Whether you do or don't, I have no idea. But if you didn't, you would hardly be the first company to pretend to believe in something to make the sale. It's pretty common in the tech industry.
> This is a reductive argument that you could use for any role
Isn't that fair when responding to an equally reductive argument that could be applied to any role?
The argument was that their hiring for the role shows they care, but we know from any number of counterexamples that that's not necessarily true.
Extending that line of thought would suggest that Anthropic wouldn't turn off a model if it cost too much to operate, which it clearly will do. So, at minimum, it's an inconsistent stance to hold.