Comment by ralusek
7 months ago
Many non-sequiturs:
> Large Language Models aren't alive and thinking
Being alive and thinking isn't required to deploy deception.
> If OpenAI was so afraid of AI misuse, they wouldn't be firing their safety team
They could just be recognizing that if not everybody is prioritizing safety, they might as well try to get to AGI first.
If the risk is extinction, as these people claim, that'd be a short-sighted business move.
Or perhaps a "calculated risk with a potentially huge return on investment"...