Comment by progbits
3 days ago
Friendly reminder that just because someone is building security software doesn't mean they're competent or that their product won't cause more harm than good.
Every month the security team wants me to give full code or cloud access to some new scanner they want to trial. They love the fancy dashboards and lengthy reports but if I allowed just 10% of what they wanted we would be pwned on the regular...
I audited Trivy's GitHub Actions a while back and found some worrying things. The most worrying bit was in the setup-trivy Action, which cloned the main branch of the trivy repo and executed a shell script from it. There was no ref pinning until somebody raised a PR a few months ago. So a security company gave themselves arbitrary code execution in everyone's CI workflows.
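For anyone unfamiliar, the difference is visible in the workflow file itself. A rough sketch (the tag and SHA below are placeholders, not the actual versions involved):

```yaml
# Risky: a moving tag; whoever controls the tag (or the branch behind it)
# controls what runs in your CI.
- uses: aquasecurity/setup-trivy@v0.2.0   # placeholder tag

# Safer: pin to a full commit SHA, with the tag as a comment for humans.
- uses: aquasecurity/setup-trivy@0123456789abcdef0123456789abcdef01234567 # v0.2.0 (placeholder SHA)
```

Even then, the pin only covers the action's own code, not anything it fetches at runtime.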
Aqua were breached earlier this month, failed to contain it, got breached again last week, failed to contain it again, and now the attackers have breached their Docker Hub account. Shit happens but they're clearly not capable of handling this and should be enlisting outside help.
The ref pinning part is almost worse than no pinning. You can pin the action itself to a commit SHA, sure. But half the actions out there clone other repos, curl binaries, or run install scripts internally. Basically none of that is covered by your pin. You're trusting that the action author didn't stick a `curl | bash` somewhere in their own infra.
Audited our CI a few months back and found two actions doing exactly that. Pinned to SHA on our end, completely unpinned fetches happening inside.
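A quick-and-dirty way to spot this class of problem. This is a heuristic sketch, assuming third-party actions are vendored under `.github/actions`; the path and regexes are my assumptions and it only catches the obvious patterns:

```shell
#!/bin/sh
# Heuristic only: flag network fetches inside action code that an external
# SHA pin does not cover. False negatives are easy (install scripts, etc.).
set -eu

scan_dir="${1:-.github/actions}"   # assumed location of vendored actions

# curl/wget piped straight into a shell
grep -rnE '(curl|wget)[^|]*\|[[:space:]]*(ba|z)?sh' "$scan_dir" || echo "no piped installs found"

# git clones, worth eyeballing for a missing commit pin
grep -rn 'git clone' "$scan_dir" || echo "no clones found"
```

Anything it flags still needs a human to read the surrounding code; it's a starting point, not a verdict.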
In this case I'm talking about what the Action did internally. The git clone inside was not pinned, but is now.
It seems they did end up contracting with Sygnia.
Granting broad access to "security" tools so some vendor can take another shot at your prod keys is not risk reduction. Most of these things are just report printers that make more noise than a legacy SIEM, and once an attacker is inside they don't do much besides dump findings into a dashboard nobody will read.
If you want less self-inflicted damage, stick new scanners in a tight sandbox, feed them read-only mirror data, and keep them away from prod perms until they have earned trust with a boring review of exactly what they touch and where the data goes. Otherwise you may as well wire your secrets to a public pastebin and call it testing.
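One concrete shape for that, sketched as a Compose file; the service name, image, and mount path are placeholders, not a real product:

```yaml
services:
  scanner-trial:
    image: example/new-scanner:1.2.3   # placeholder; pin a tag or, better, a digest
    read_only: true                    # container filesystem is read-only
    network_mode: "none"               # no egress until the tool has earned it
    volumes:
      - ./mirror-data:/data:ro         # read-only mirror copy, never prod credentials
```

The scanner sees a copy of the data and can't phone home or write anywhere; promoting it beyond that is a deliberate decision after the review, not the default.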
Couldn't agree more.
Yet many of these tools have a setup like: create a service account, give it about a thousand permissions (if not outright full ownership), and send us the JSON private key.
At least they make the red flag nice and obvious.
Most of corporate security nowadays involves "endpoint security solutions" installed on all devices, servers and VMs, piping everything into an AI-powered dashboard so we can move fast and break everything.
My hypothesis is that generally, there's no quality floor at which security departments are "allowed" to say "actually, none of the options on the market in this category are good enough; we're not going to use any of them". The norm is to reflexively accept extreme invasiveness and always say yes to adding more software to the pile. When these norms run deeply enough in a department, it's effectively institutionally incapable of avoiding shitty security software.
Fwiw, w/r/t Trivy in particular, I don't think Trivy is bad software and I use it at work. We're unaffected by this breach because we use Nix to provide our code scanning tools and we write our own Actions workflows. Our Trivy version is pinned by Nix and periodically updated manually, so we've skipped these bad releases.
From having worked at and consulted with security software producing companies as well as security software consuming ones, I would say the security companies are worse than average at security.
And their security teams more cynical.
Sometimes they deliberately hire lower aptitude candidates to run internal security to prevent them from getting distracted by the product.
In other cases they are getting high on their own supply, more or less.
Jack Welch style management seems to take a deeper toll in this sector.
It doesn't help that a lot of security software is pretty niche. It's unreasonable to expect most candidates to know it or have experience.
In one case I was one of exactly two people out of 500 that had used the product as a paying customer. Neither of us was in management.
After a year or two the CISO drifted over and asked me to show him how to use the product, but he was more interested in soundbites than actually using the system.
It became a powerpoint exercise and I collected my attaboy.