Comment by echelon
7 months ago
> "They could very well trick a developer"
Large Language Models aren't alive and thinking. This is an artificial fear campaign to raise money from VCs and sovereign wealth funds.
If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team and partnering with the DoD.
It's all a ruse.
There are many non-sequiturs here.
> Large Language Models aren't alive and thinking
Being alive and thinking isn't required to deploy deception.
> If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team
They could just be recognizing that if not everybody is prioritizing safety, they might as well try to get AGI first.
If the risk is extinction, as these people claim, that'd be a short-sighted business move.
Or perhaps a "calculated risk with a potentially huge return on investment"...
> If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team and partnering with the DoD.
What makes you think that? It seems reasonable that a dangerous tool/substance/technology might be profitable and that the profits justify the danger. See all the companies polluting the planet and risking the future of humanity RIGHT NOW, and all the weapons companies developing their weapons to make them more lethal.
https://www.technologyreview.com/2024/12/04/1107897/openais-...
OpenAI is partnering with the DoD
Rewording: if OpenAI thought it was dangerous, they would avoid having the DoD use it.
We might want to kick out guys who only talk about safety or ethics and barely contribute to the project.