Comment by concordDance
1 year ago
It is utterly mad that there's conflation between "let's make sure AI doesn't kill us all" and "let's make sure AI doesn't say anything that embarrasses corporate".
The head of every major AI research group except Meta's believes that, whenever we finally make AGI, it is vital that it shares our goals and values at a deep, even out-of-training-domain level, and that failing at this could lead to human extinction.
And yet "AI safety" is often bandied about to mean "ensure GPT can't tell you anything about IQ distributions".