Comment by bhouston
19 hours ago
This guy from Effective Altruism pivoted away from helping the poor to trying to stop AI from becoming a Terminator-type entity, and then pivoted to, ah, it's okay for it to be a Terminator-type entity.
> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:
> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”
> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.
https://www.currentaffairs.org/news/2022/09/defective-altrui...
He is just giving everyone permission to do bad things by saying a lot of words around it.
> then pivoted to, ah, it's okay for it to be a Terminator-type entity.
Isn’t that the opposite of what he’s saying? He’s saying it could become that powerful, and given that possibility it’s incredibly important that we do whatever we can to gain more control over that scenario.
> Isn’t that the opposite of what he’s saying?
The quote is from 2022 and covers the first pivot, into AI safety, to prevent AI from becoming a Terminator-style entity. The latest pivot was not in the quote but is the topic of this current Hacker News post, where he takes credit for dropping the safety pledge:
"That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance."
I expect the next pivot will be that we need to allow the US military to use Anthropic to kill people, because otherwise they will use a less pure AI to kill people, and our Anthropic is better at only killing the bad guys, thus it is the lesser evil.
I think the poster here has an axe to grind, considering they quoted something that directly contradicted their point and didn't even notice.
The quote was only about the 2022 pivot to AI safety; the 2026 pivot away from AI safety is the topic of this Hacker News post.
Effective Altruism is such a beautiful term for a pretentious Karen who needs to wrap their selfish actions in moral superiority.
It's that perfect blend of "I'm doing what everyone else is doing" and "I'm better than everyone else."
Chef's kiss.
Getting SBF vibes from this. "Earn to give" is an inherently flawed philosophy.
Effective altruism came from the "rationalist" movement.
It was never about helping poor people.
For some reason, the rationalist movement and its offshoots are really pervasive in Silicon Valley. I don't see them much in other tech cities.