Comment by lijok
2 years ago
I'd like to challenge you on what seems to be the main claim behind why Slauth is a necessary product: "the amount of money that is being spent on tooling to scan for IAM misconfigurations in the cloud".
1. The tools you're citing specifically, wiz.io and ermetic.com, do a great deal more than just "scan for IAM misconfigurations". In fact, I understand that to be one of their most insignificant features. Yet it sounds, from the numbers being quoted (I saw the "millions" figure being thrown around), that you are equating a company purchasing wiz.io with them purchasing "tooling to scan for IAM misconfigurations" exclusively. How much does the IAM scanning tooling actually cost, and what is the material cost of delayed remediation of over-permissioned entities?
2. Were a company to introduce Slauth into their stack, are you under the impression that they would then not need to scan their IAM for misconfigurations and would therefore be able to save "millions"? Would it not be fair to say that the presence of Slauth would not remove the need for IAM scanning tools, since IAM deployments could happen out of bounds, which is not something that Slauth removes from a company's threat model? (There is a sketch of what I mean by "out of bounds" just below.)
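To make that concrete, here is a minimal sketch (using boto3; the role name is hypothetical) of the kind of IAM change that never passes through a PR, IaC, or any policy-generation step, which is exactly what the scanners exist to catch:

```python
# Minimal sketch of an "out of bounds" IAM change: an engineer (or an
# attacker holding credentials) attaches an over-broad policy directly
# via the AWS API, bypassing IaC, code review, and any PR-time tooling.
# The role name below is made up for illustration.
import boto3

iam = boto3.client("iam")

# One API call, no pull request, no Terraform plan, no policy generator
# in the loop. Only a scanner auditing the live account will notice it.
iam.attach_role_policy(
    RoleName="payments-service-role",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```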
>I'd like to challenge you on what seems to be the main claim behind why Slauth is a necessary product: "the amount of money that is being spent on tooling to scan for IAM misconfigurations in the cloud".
That quote prompted me to research the market further and speak to users of those tools. From those conversations it was evident that the number of misconfigurations being deployed wasn't being reduced.
I imagine users of cloud scanning tools would also use a proactive, shift-left tool like Slauth, one that aims to prevent misconfigurations rather than react to them.
I suspect you will have a better time selling the tool as a double-checker than as an author. As others have pointed out, LLMs cannot be trusted to create security policies, but they may be accepted as something that can catch mistakes, because we are busy and the attention needed is not always there.
At the same time, it will create noise in a PR and it will be reliably wrong, so it is not really about saving time; it is more about always having an extra (junior) reviewer that only catches the simple things (an example of what I mean is sketched below). You will have to work hard to improve the signal-to-noise ratio. Your current examples are all very simple and do not reflect real-world complexity. I very much doubt ChatGPT has enough context length to devise real-world IaC from source code, or even to check it. How often will the code changed in a PR require knowledge of code not changed in that PR?
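To illustrate the "simple things" point: a wildcard-heavy statement like the first one below is easy for any reviewer, human or LLM, to flag; knowing that the scoped version is still wrong because some other module outside the PR diff also touches a second bucket requires context the model never sees. The policy documents and resource names here are made up for illustration:

```python
# Hypothetical IAM policy documents, expressed as Python dicts for illustration.

# The "simple thing": obviously over-permissioned, trivially flagged in a PR.
over_permissioned = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}

# The scoped version looks fine in isolation...
scoped = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::orders-bucket/*"],
        },
    ],
}

# ...but whether it is actually correct depends on code that is NOT in the
# PR diff, e.g. a shared module elsewhere in the repo that also writes to
# a second bucket. That is the context-length / cross-file problem.
```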