Comment by hiAndrewQuinn

10 hours ago

Anthropic cares first and foremost about extinction risk. Not everyone who professes to care about human welfare agrees that this belongs at the top of the priority list. See e.g. the Voluntary Human Extinction Movement for an example of a humanistic approach that favors letting humanity die off with no replacement.

One of the most challenging problems in AI safety with respect to x-risk is that even if you can get one country to do the right thing, getting multiple countries on board is an entirely different ballgame. Some amount of intentional coercion is inevitable.

On the low end, you could pay bounties to international bounty hunters to extract foreign AI researchers, in a manner similar to the FBI's Most Wanted list, and let AI researchers quickly do the math and realize there are a million other well-paid jobs that don't come with this flight risk. On the high end, you can go to war and kill everyone. Whatever gets the job done.

Either way, if you want to win at enforcing a new kind of international coercion, you need to be at the top of the pack militarily and economically. That is the true goal here, and I don't think one can make coherent sense of what Anthropic is doing without keeping it in the back of one's mind at all times.