Comment by maeil

4 months ago

A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd. They're putting out a huge sign saying "When you find a vuln, definitely contact all our clients because we won't be giving you a penny!".

Incredible. This must be some kind of "damaged ego" or ass-covering, as it's clearly not a rational decision.

Edit: Another user here has pointed out the reasoning

> It's owned by private equity. Slowly cutting costs and bleeding the brand dry

It all makes sense if you consider bug bounties are largely:

1) created for the purpose of either PR/marketing or a checklist ("auditing"), or

2) seen as a cheaper alternative to hiring someone who knows anything about security - "why hire someone who actually knows security when we can just pay a pittance to strangers and believe every word they say?"

The amusing and ironic thing about the second point is that by doing so, you waste time on a constant spam of people begging for bounties and reporting things that are not even bugs, let alone security issues, and your attention is drawn away from real security improvements that talented staff members could be making elsewhere.

  • I don’t agree. Bug bounties are taken seriously by at least some companies. Where I have worked, we received very useful reports, some very severe, via HackerOne.

    The company even ran special sessions where engineers and hackers were brought together to try to maximize the number of bugs found in a few week period.

    It resulted in more secure software at the end and a community of excited researchers trying to make some money and fame on our behalf.

    The root cause in this case seems to be that they couldn’t get past HackerOne’s triage process, because Zendesk had excluded email from scope. This seems more like incompetence than malice on both of their parts. Good that the researcher showed how foolish they were.

    • This feels like a case in the gray area. On the one hand, companies need to declare certain things out of scope - whether they know about them and plan to work on them, or consider them acceptable risk - since the point for the company is to improve its security posture within the resources it has to run the bug bounty program. What's weird here is that the blog author found an email problem that wasn't really in that DKIM/SPF area, and Zendesk claimed that exemption covered it.

      Without a broader PoC showing how it could be weaponized, it's hard to say Zendesk was egregiously wrong here - the person triaging just lacked the imagination to see that it would be a real problem. Later in the write-up we learn Zendesk does spam-filter inbound emails, so it's not crazy to think a security engineer reading the report assumed that filtering would cover their butts, when it failed miserably here. (A good engineer would check that assumption, though.)

      That said, putting my security hat on, I have to ask: who thought sequential ticket IDs in the reply-to email address were a good idea? They really ought to be using long random nonces, at which point the "guess the right ID to become part of the support thread" attack falls apart. Classic enumeration + IDOR. So it sounds like there's still potential for abuse here if you can sneak stuff past their filters.
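A minimal sketch of the nonce scheme suggested above (a hypothetical design, not Zendesk's actual implementation - the address format and domain are made up for illustration):

```python
import hmac
import secrets

def make_reply_address(ticket_id: int, domain: str = "support.example.com") -> str:
    """Build a reply-to address whose token can't be enumerated,
    unlike a bare sequential ticket ID (hypothetical scheme)."""
    token = secrets.token_hex(16)  # 128 bits of entropy; store it alongside the ticket
    return f"support+{ticket_id}-{token}@{domain}"

def token_matches(stored_token: str, received_token: str) -> bool:
    """Compare in constant time so timing doesn't leak token bytes."""
    return hmac.compare_digest(stored_token, received_token)
```

With 128 bits of entropy per ticket, guessing a valid reply address is infeasible even if the ticket IDs themselves stay sequential.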

      6 replies →

    • I use a competitor to HackerOne. I view all submissions pre-triage and would have taken it seriously, even if I made a mistake in program scope. I have paid researchers for bugs out of scope before because they were right.

      2 replies →

    • HackerOne is an awful company with a terrible product. Not the first time I’ve heard of their triage process or software getting in the way of actual bug bounty.

      3 replies →

  • It's incredibly hard and resource-intensive to run a bounty program, so anyone doing it as a shortcut or for virtue signaling will quickly find out whether they're mature enough to run one.

  • Our company has a bug bounty program:

    - reports are handled with priority, though a more definitive fix sometimes takes a couple of weeks

    - it's handled by the security department within the company (which forwards reports to the relevant POs and follows up)

    The unfortunate thing about bug bounties is that you will be hammered with crawlers, sometimes to the point of resembling a DDoS

    • >The unfortunate thing about bug bounties is that you will be hammered with crawlers

      You mean your product will be hammered by people probing for holes to garner the bounty? Or for some other reason?

      2 replies →

  • “2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"”

    It doesn’t make sense, companies with less revenue aren’t the ones doing this. It’s usually the richer tech companies.

    • >It doesn’t make sense, companies with less revenue aren’t the ones doing this. It’s usually the richer tech companies.

      Because for some reason, it's larger tech companies that love to bean-count their way through security.

      1 reply →

> A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd.

I'll give an "another side" perspective. My company was much smaller. Of the 10+ "I found a vulnerability" emails I got last year, all were mass-produced emails generated by an automated vulnerability scanning tool.

Investigating all of those for "is it really an issue" is more work than it seems. For many companies looking to improve security, there are higher ROI things to do than investigating all of those emails.

  • https://www.sqlite.org/cves.html provides an interesting perspective. While they thankfully already have a pretty low attack surface from their overall design and purpose, you can see a decent number of reported vulns that are either 'not their fault' (i.e. in wrappers/consumers) or are close enough to the other side of the airtight hatchway (oh, you had access to the database file to modify it in a malicious way, and you modified it in a malicious way)[0]

    [0] - https://sqlite.org/forum/forumpost/53de8864ba114bf6

  • We also had this problem at my previous company a few years ago - a 20-person company, but somehow we attracted much more attention.

    In one specific instance, we got 20 emails in a single month about a specific WordPress PHP endpoint with a known vulnerability, on a separate marketing site on another domain. The thing is, our WordPress contractor had already replaced that part of the default install, but the URL was still returning 200.

    But its being a static page didn't stop the people running scanners from asking us for money, even after they were informed of the above.

    The solution? Delete it altogether so it returns 404.
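If deleting the file isn't convenient, the same effect can be had at the web server. A minimal nginx sketch (the endpoint path is hypothetical, standing in for whatever URL the scanners keep hitting):

```nginx
# Hypothetical: force a retired endpoint to return 404 so scanners stop flagging it
location = /old-endpoint.php {
    return 404;
}
```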

  • We have a policy of never acknowledging unsolicited emails like that unless they follow the simple instructions set out in our /.well-known/security.txt file (see https://en.wikipedia.org/wiki/Security.txt) - honestly, all they have to do is put “I put a banana in my fridge” as the message subject (or use PGP/GPG/S/MIME) and it’ll be instantly prioritised.

    The logic being that any actual security researcher with even minimal competency will know to check the security.txt file and can follow basic instructions; meanwhile, if any of our actual (paying) users find a security issue, they’ll go through our internal ticket site rather than public e-mail anyway. So all that’s left are low-effort vuln-scanner reports - and it’s always the same non-issues, like clickjacking (but only when using IE9, for some reason, even though it’s 2024 now…) or people who think their browser’s Web Inspector is a “hacking tool” that lets anyone edit any data in our system…

    And FWIW, in the 18 months we’ve had a security.txt file, I’ve never received a genuine security issue report with an admission of kitchen refrigeration of fruit - it’s almost as if qualified, competent professionals don’t operate like an embarrassingly pathetic shakedown.
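For reference, the security.txt format is defined by RFC 9116 and served from /.well-known/security.txt. A file along the lines described above might look like this (all values illustrative, not the commenter's actual file):

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Policy: https://example.com/security-policy
# Put the exact phrase from our policy page in your subject line
# (or sign your mail) and your report skips the spam queue.
```

`Contact` and `Expires` are the two fields RFC 9116 requires; the rest are optional.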

    • I saw a great presentation from Finn.no on their bug bounty program. They had had great success, despite the amount of work it took. Much more so than the three different security companies they hired each year to find vulnerabilities.

      They also had a security.txt file and had received several emails through that, but all of it was spam. Ironically they had received more real security vulnerabilities through people contacting them on LinkedIn than through their security.txt file.

      Your mileage may vary, but it didn’t seem like the security.txt file was read by the people one would hope would read it.

  • I understand - this is exactly why I noted "even on their 2nd chance". The initial lack of payout and meaningful response was incompetence, a failure to understand the severity of the vuln. Fine, happens.

    But after the PoC that showed the severity in a way that anyone could understand, they still didn't pay. That's the issue. The whole investigation was done for them.

If the bounty is big enough, you basically need to retain a lawyer so the whole thing is done right and you don't get scammed.

It never made sense to me why these white-hat hackers don't require payment before disclosing the vulnerability.

  • Bug bounty people do this all the time. It's almost always a sign that your bug is something silly, like DKIM.

    Later

    I wrote this comment before rereading the original post and realizing that they had literally submitted a DKIM report (albeit a rare instance of a meaningful one). Just to be clear: in my original comment, I did not mean to suggest this bug was silly; only that in the world of security bug bounties, DKIM reports are universally viewed as silly.

    • What does it mean to say a bug is silly?

      The only thing that matters is the severity and what it allows attackers to do.