
Comment by bossyTeacher

13 hours ago

> I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try - LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.

Indeed. Between this fundamental unsecurability and the unsolved alignment problem, I struggle to see how OpenAI/Anthropic/etc. will manage to give their investors enough ROI to justify the investment.