Comment by Timwi
6 months ago
If you read the literature on AI safety carefully (which uses the word “goal”), you'll find they're not talking about LLMs either.
I think the Anthropic "omg blackmail" article clearly talks about both LLMs and their "goals".