
Comment by DanielSlauth

2 years ago

Perhaps I should have emphasized better that the LLMs are indeed not trustworthy by themselves and require several extra checks. These would include policy simulators, connecting to cloud environments, and running checks in Dev/Staging.
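One of those extra checks could be as simple as linting the generated policy JSON before it ever reaches a simulator or a staging environment. A minimal sketch (the `lint_policy` helper and its rules are illustrative, not part of any product discussed here):

```python
import json

# Hypothetical gate for LLM-generated IAM policies: flag Allow
# statements that grant wildcard actions or resources, so a human
# (or a later simulator run) reviews them before deployment.
def lint_policy(policy: dict) -> list[str]:
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement shorthand
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        for key in ("Action", "Resource"):
            values = stmt.get(key, [])
            if isinstance(values, str):
                values = [values]
            if any(v == "*" or v.endswith(":*") for v in values):
                findings.append(f"statement {i}: wildcard {key}")
    return findings

generated = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*",
     "Resource": "arn:aws:s3:::my-bucket/*"}
  ]
}""")
print(lint_policy(generated))  # → ['statement 0: wildcard Action']
```

A check like this is cheap and deterministic, which is exactly what you want layered on top of a nondeterministic generator; real deployments would add a policy simulator pass on top.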

Again, I understand the skepticism about using LLMs, but currently everything is done manually, and it shows that this doesn't work well. So using LLMs is a quick way to improve the current situation, and hopefully we can further complement it with checks and balances.

> but currently everything is done manually and it shows that doesn't work well

If it is all done manually, and there are both good and bad IAM setups, can you really extrapolate to "manual" being the root cause? How can you even get an LLM to produce secure policies without existing secure policies to train on? The entire premise seems off and misleading to me.

I would expect a hands-off approach to have worse outcomes