Comment by TeMPOraL
2 days ago
> The problem is, so much of what people want from these things involves having all three.
Pretty much. Also there's no way of "securing" LLMs without destroying the quality that makes them interesting and useful in the first place.
I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try - LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.
> I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try - LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.
Indeed. Between this fundamental unsecurability and the unsolved alignment problem, I struggle to see how OpenAI/Anthropic/etc. will manage to give their investors enough ROI to justify the investment.