
Comment by verdverm

2 years ago

More important than SLAs is who takes on the liability for these mistakes? With the changing laws and regulations for breaches, do I want to rely on an LLM that isn't going to own that liability?

Yeah, I think it's a curious decision to have launched this thing with the LLM alone, as using an LLM by itself for something this potentially disastrous is a non-starter for me. If they can get a formal simulator running with it, on the other hand, then I'd imagine they may feel more comfortable putting out a guarantee and taking on some kind of liability themselves.

Perhaps I should have emphasized better that the LLMs indeed aren't trustworthy by themselves and require several extra checks. These would be policy simulators, connecting to cloud environments, and running checks in Dev/Staging.

Again, I understand the skepticism about using LLMs, but currently everything is done manually, and it's clear that doesn't work well. So using LLMs is a quick way to improve the current situation, and hopefully we can further complement them with checks and balances.
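To make "checks and balances" concrete, here's a minimal sketch of the kind of automated guard one could run on an LLM-generated IAM policy before it ever reaches staging. This is purely illustrative (the function name and the specific rule, rejecting wildcard Allow grants, are my own assumptions, not anyone's actual product):

```python
import json

def find_wildcard_grants(policy_json: str) -> list[str]:
    """Hypothetical pre-deployment lint: flag Allow statements in a
    generated IAM policy that grant wildcard actions or resources."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]

    findings = []
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources

        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"Statement {i}: wildcard action in {actions}")
        if "*" in resources:
            findings.append(f"Statement {i}: wildcard resource")
    return findings

# Example: an overly broad policy an LLM might emit
generated = """{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]
}"""
print(find_wildcard_grants(generated))
```

A check like this is the cheapest layer; the policy simulators and Dev/Staging runs mentioned above would catch the subtler semantic mistakes a static lint can't.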

  • > but currently everything is done manually and it shows that doesn't work well

    If it is all done manually, and there are both good and bad IAM setups, can you really extrapolate to "manual" being the root cause? How can you even get an LLM to produce secure policies without having existing secure policies to train on? The entire premise seems off and misleading to me.

    I would expect a hands-off approach to have worse outcomes