Comment by DanielSlauth
2 years ago
>I'd like to challenge you on what seems to be the main claim behind why Slauth is a necessary product: "the amount of money that is being spent on tooling to scan for IAM misconfigurations in the cloud."
That quote prompted me to research the market further and speak to users of those tools. From those conversations it was clear that the number of misconfigurations being deployed wasn't going down.
I imagine users of cloud scanning tools would also adopt a proactive tool like Slauth, or any other shift-left tool that aims to prevent misconfigurations rather than react to them.
I suspect you will have a better time selling the tool as a double-checker than as an author. As others have pointed out, LLMs cannot be trusted to create security policies, but they may be accepted as something that can catch mistakes, because reviewers are busy and the attention needed is not always there.
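To make "catch mistakes" concrete, the mistakes worth flagging are usually at the level of an over-broad Allow statement. A minimal sketch (the policy JSON and the helper name are made up for illustration) of the kind of check a second pair of eyes, human or model, should get right:

```python
import json

# Hypothetical example: the level of "simple mistake" an extra reviewer
# can be expected to catch -- a wildcard Allow statement.
POLICY = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}
""")

def flag_wildcards(policy: dict) -> list[str]:
    """Warn about Allow statements with wildcard actions or resources."""
    warnings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            warnings.append(f"wildcard action {actions} in statement {stmt}")
        if any(r == "*" for r in resources):
            warnings.append(f"wildcard resource {resources} in statement {stmt}")
    return warnings

print("\n".join(flag_wildcards(POLICY)))
```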
At the same time, it will create noise in a PR and it will be reliably wrong, so it is not really about saving time; it is more about always having an extra (junior) reviewer, and it is only going to catch the simple things. You will have to work hard to improve the signal-to-noise ratio. Your current examples are all very simple and do not reflect real-world complexity. I very much doubt ChatGPT has enough context length to devise real-world IaC from source code, or even to check it. How often will the code changed in a PR require knowledge of code that was not changed in the PR?
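To illustrate that last question with a made-up example (the file layout, function names, and DynamoDB table are all hypothetical): the code visible in a diff can hide which permissions the role actually needs, because the call it adds is implemented in a file the PR never touches.

```python
import boto3  # real AWS SDK; the resources below are invented for the example

# --- billing.py: not touched by the PR, so potentially outside the model's context ---
def create_invoice(order_id: str) -> dict:
    # The permission the role actually needs (dynamodb:PutItem on "invoices")
    # is determined here, not in the diff.
    table = boto3.resource("dynamodb").Table("invoices")
    table.put_item(Item={"id": order_id})
    return {"id": order_id}

# --- handlers.py: the only code that appears in the PR diff ---
def handle_order(event: dict) -> dict:
    # From the diff alone this reads as plain business logic; nothing here
    # tells a reviewer (or an LLM) that the IAM policy must change.
    invoice = create_invoice(event["order_id"])
    return {"statusCode": 200, "invoice_id": invoice["id"]}
```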