
Comment by DanielSlauth

2 years ago

Do you think humans are doing a better job? Research shows that 95% of the permissions granted to users aren't used, which creates huge problems and is a reason companies spend millions on security tools. Why not use Slauth, along with other checks such as policy simulators, to get policies tightened before they're deployed?

I'm not your target user, I don't feel the priority on this problem even though our permissions are more permissive than we'd like. Thing is, to rein them in typically requires application changes. You cannot just sprinkle magic LLM dust on IAM and make things better.

My concern is for those who blindly trust LLMs. Your security posture is not the place to be an early adopter of AI tools. You have to understand both IAM and system architecture to know whether what the LLM is saying is correct, so where does that leave us?

I think they can be an extra pair of eyes, but not the driver. Still, a signal-to-noise problem remains, due to the inherent hallucinations.

  • Absolutely not. Anywhere accuracy, precision, and safety matter, throwing LLMs into the mix is irresponsible IMHO, or too optimistic, or possibly a sign of not understanding how these giant arrays of floating point numbers work, or just hoping for the best.

    Similarly, LLM-generated SQL for business analytics is another critical area: if the numbers are wrong, it might lead to a business going bankrupt.

    For a prototype or a fun exercise, sure, go all in.

  • First of all, it's pretty awesome that your permissions are very tight. You are definitely on the other side of the spectrum compared to the rest. I get that there is a lot of skepticism because of people hyping LLMs, so indeed, for now we use it as a copilot and not the driver. Hopefully you can agree, though, that it's pretty random that we are still manually creating IAM policies and need to get accustomed to the thousands of different permissions :)

    • To add a plus one here: as soon as I learned there are LLMs involved, this became a non-starter for me. I'd rather have less granular policies than risk some LLM doing something crazy.

      I can justify to management that we have limited time for IAM and that something was missed which we can fix, or create tests and scans for, after an incident. It's harder to explain that we chose a vendor that uses a non-deterministic tool that can hallucinate for one of the most core security pieces of the puzzle.

    • We are actively working on reining in permissions; I would not call them "tight". It's just not a top-3 priority, though that is likely changing with the upcoming SOC 2 efforts. I still don't see us reaching for LLMs to help us here.

      I'm not saying don't use them; just use them as an extra pair of eyes, mostly to catch errors rather than to drive and architect.

      > I get that there is a lot of skepticism because of people hyping LLMs

      The skepticism is not from the hype; it's from experiencing LLM output personally. They are fine if the output can be fuzzy, like a blog post or a function signature, but not so much if there is a specific and fragile target.

  • What kind of application changes are you thinking it would require?

    My policies are definitely too broad, but it feels like I should be able to tighten them up without changing code (just potentially breaking things if I get it wrong and go too tight).

    • Some scenarios

      1. The application has to start using credentials for the first time, or consume them in a different way. For example, stop consuming an environment variable and rely on a service account instead.

      2. You have to change ops to support new workflows. Often you have to put approval workflows in place because fewer people can do things and you want only the machines touching production.

      3. You have to change human behaviors and habits (this is the really hard one). I've had to revert changes because the increased security blocked developers and they didn't have time to adapt before the next deadline.

      4. Getting parity in local development workflows is also challenging. How and where do you match production IAM locally, and where do you make exceptions?

      5. Should I give the current server access to a particular cloud service/resource, or break that particular function out into a lambda and minimize the permissions there? You have to think through the implications of a breach and how/where you want to limit the blast radius.

      6. This is probably obvious, but implementing application-level controls, like API endpoint permissioning. IAM is not limited to cloud infra.
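To make scenario 1 concrete, here's a minimal sketch (all names are illustrative, not any vendor's API) of why dropping env-var credentials is an application change: the code path that reads static keys has to be removed before the service role can be the only credential source.

```python
def credential_source(env: dict) -> str:
    """Decide where credentials come from (sketch of scenario 1:
    migrating off static env-var keys; names are illustrative)."""
    # Legacy path: long-lived keys injected as environment variables.
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return "static-env-keys"
    # Desired path: the SDK's default chain resolves the attached
    # service account / instance role at runtime; no keys in the app.
    return "service-role"

# Tightening IAM means ops stops injecting the env vars AND the
# application stops expecting them: a code plus deployment change.
print(credential_source({"AWS_ACCESS_KEY_ID": "k", "AWS_SECRET_ACCESS_KEY": "s"}))
# prints: static-env-keys
print(credential_source({}))
# prints: service-role
```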


> Research shows that 95% of the permissions granted to users aren't used

These would be the "s3:*" and "Resources: *" scoped permissions, I assume? I can't imagine users are explicitly typing out permissions, 95% of which aren't relevant to the task.
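That's likely where the statistic comes from: one wildcard grant expands to every action in a service's catalog, so "unused permission" counts are dominated by actions nobody ever typed. A toy sketch of that expansion, using a small illustrative subset of S3 actions (the real catalog is far larger):

```python
from fnmatch import fnmatch

# Small illustrative subset; the real S3 action catalog has far more.
S3_ACTIONS = [
    "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
    "s3:ListBucket", "s3:PutBucketPolicy", "s3:DeleteBucket",
]

def expand(granted_patterns, catalog):
    """Expand wildcard grants like "s3:*" against an action catalog."""
    return {a for a in catalog for p in granted_patterns if fnmatch(a, p)}

granted = expand(["s3:*"], S3_ACTIONS)    # one wildcard grants all six
used = {"s3:GetObject", "s3:ListBucket"}  # what audit logs actually show
unused = granted - used
print(f"{len(unused)}/{len(granted)} granted actions unused")
# prints: 4/6 granted actions unused
```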

> which creates huge problems

Such as? What is the material impact of a workflow or a user having too many permissions?

> and is a reason for spending millions in security tools

Are you claiming that overscoped IAM permissions alone are responsible for 1M+ security tooling bills in companies? Would you be willing to share information on which tools these are?

  • > Such as? What is the material impact of a workflow or a user having too many permissions?

    Security, obviously: https://en.wikipedia.org/wiki/Principle_of_least_privilege

    • That is the "theoretical" problem

      How many times have excess permissions "actually" been the problem... versus something like correct permissions with compromised credentials?


    • If you're trying to sell a tool, you don't justify its cost by saying it addresses "huge problems" such as "security". Let's talk material impact: how will this tool pay for itself?


  • It's the constant tug of war between the idealized security state, where users have just enough access to do their jobs, and the fact that it's hard to know the precise access you need until you get the task. At that point, the idealized process of reviewing and granting access takes too long and really drags down your development pace.

    At my job, for example, we don't have a separate support team for the ETL work we do, so I have a lot of access I don't use unless things are breaking, and then I can't wait for the access approval process to be added to database XXX or bucket YYY to diagnose what data has broken our processes.

> Research shows that 95% of the permissions granted to users aren't used which creates huge problems and is a reason for spending millions in security tools.

It'd potentially cost millions more to recover from a GPT-4 disaster.

One challenge will be similar to self driving cars. The error / fatality rates need to be several orders of magnitude lower than for human operators for it to be acceptable.

AWS and GCP already provide tools to show excess permissions (IAM Access Analyzer and Access Advisor on AWS, IAM Recommender on GCP)...

  • The pain there is that, often, a pre-configured role with a slew of permissions was used, and you actually need to craft a new role with just the right permissions.

    I wrote some code once to fetch all those pre-configured role permissions and then present them in a more digestible way.
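A hedged sketch of what that digesting step might look like: grouping every allowed action by service prefix, assuming policy documents shaped like AWS's policy JSON (the `summarize` helper and demo data here are hypothetical, not the commenter's actual code).

```python
from collections import defaultdict

def summarize(policy_documents):
    """Group every allowed action by service prefix so a broad
    pre-configured role becomes readable at a glance."""
    by_service = defaultdict(set)
    for doc in policy_documents:
        for stmt in doc.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            if isinstance(actions, str):  # "Action" may be a bare string
                actions = [actions]
            for action in actions:
                service, _, verb = action.partition(":")
                by_service[service].add(verb)
    return {svc: sorted(verbs) for svc, verbs in by_service.items()}

demo = [{"Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Allow", "Action": "ec2:DescribeInstances"},
]}]
print(summarize(demo))
# prints: {'s3': ['GetObject', 'PutObject'], 'ec2': ['DescribeInstances']}
```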