
Comment by PostOnce

1 day ago

My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.

Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.

Anthropic does not object to its use for war. In fact, Anthropic explicitly allows its semi-autonomous use in war, e.g. for identifying targets. They just won't permit its use for fully autonomous warfare, yet, because they don't believe it's safe enough.

  • Since when has war been waged according to the whim of a corporation?

    The tools will be used however the government wants them to be used. The government makes the laws and wages the wars, and the corporation will follow the law whether it wants to or not.

    So either you are willing to work on a tool that is not under your control, or you are not.

    • It's an interesting development because wars haven't traditionally been waged predominantly with software. But soon perhaps they will be.

      While the government is accustomed to complying with software licensing rules, it is not accustomed to being limited in how it wages war, so the two have now come into an interesting conflict.

  • I'm sure China doesn't care whether it's safe... and there's the issue