Comment by echelon

7 months ago

> "They could very well trick a developer"

Large Language Models aren't alive and thinking. This is an artificial fear campaign to raise money from VCs and sovereign wealth funds.

If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team and partnering with the DoD.

It's all a ruse.

Many non-sequiturs here.

> Large Language Models aren't alive and thinking

That's not required to deploy deception.

> If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team

They could just be recognizing that if not everybody is prioritizing safety, they might as well try to get to AGI first.

> If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team and partnering with the DoD.

What makes you think that? It sounds reasonable that a dangerous tool/substance/technology might be profitable, and thus the profits justify the danger. See all the companies polluting the planet and risking the future of humanity RIGHT NOW, or all the weapons companies developing their products to make them more lethal.

We might want to kick out guys who only talk about safety or ethics and barely contribute to the project.