Comment by danielvf
6 months ago
I handle reports for a one million dollar bug bounty program.
AI spam is bad. We've also never had a valid report from an LLM (that we could tell).
People using them will take any explanation of why a bug report is invalid, any questions, or any requests for clarification, and run them back through the same confused LLM. The second pass generates even deeper nonsense.
It's making responding with anything other than "closed as spam" not worth the time.
I believe that one day there will be great code examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.
>It's the people that concern me. They cannot tell the difference between truth and garbage.
Suffice it to say, this statement is an accurate assessment of the current state of many more domains than merely software security.
This has been going on for years, since before AI - they say we live in a "post-truth society". The generation and non-immediate rejection of AI slop reports could be another manifestation of post-truth rather than a cause of it.
> I believe that one day there will be great code examining security tools.
As for programming, I think that we will simply continue to have incrementally better tools based on sane and appropriate technologies, as we have had forever.
What I'm sure about is that no such tool can come out of anything based on natural language, because it's simply the worst possible interface to interact with a computer.
people have been trying various iterations of "natural language programming" since programming languages were a thing. Even COBOL was supposed to be more natural than other languages of the era.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
This sounds more like an influx of scammers than security researchers leaning too hard on AI tools. The main problem is the bounty structure. And I don't think this influx of low-quality reports will go away, or even get any less aggressive, as long as there is money to attract the scammers. Perhaps these bug bounty programs need an automatic pass/fail tester for all submitted bug code, to ensure the reporter really found a bug, before the report is sent to the vendor.
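Such a gate could be as simple as refusing to forward any report whose proof of concept doesn't actually reproduce against a test target. A toy sketch of the idea; the function names and the triage policy here are all invented for illustration:

```python
def run_poc(poc, target):
    """Run a submitted proof-of-concept callable against a target.
    A PoC that errors out proves nothing, so it counts as a failure."""
    try:
        return bool(poc(target))
    except Exception:
        return False

def triage(report):
    """Gate a submission: forward it only if its PoC reproduces the claim."""
    if run_poc(report["poc"], report["target"]):
        return "forwarded to vendor"
    return "rejected: PoC did not reproduce"

# Toy "vulnerable" function: crashes on empty input.
def buggy_parse(s):
    return s[0]

def crash_poc(target):
    """A real PoC: it demonstrates the crash and claims success only then."""
    try:
        target("")
        return False  # no crash, no bug
    except IndexError:
        return True   # reproduced the claimed crash

valid_report = {"poc": crash_poc, "target": buggy_parse}
spam_report = {"poc": lambda t: False, "target": buggy_parse}

print(triage(valid_report))  # forwarded to vendor
print(triage(spam_report))   # rejected: PoC did not reproduce
```

Spam that never reproduces never reaches a human; the open question is sandboxing untrusted PoC code safely, which this sketch ignores.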
It's unfortunately widespread. We don't offer bug bounties, but we still get obviously LLM-generated "security reports" which are just nonsense and waste our time. I think the motivation may be trying to get credit for contributing to open source projects.
Simply charge a fee to submit a report. At 1% of the payout it's perfectly reasonable for low bounties. Maybe progressively scale that percentage down as the bounty goes up. But even for a $50k bounty you know is correct, it's only $500.
No need to make it a percentage; charge $1 and the spammers will stop extremely quickly, since none of their reports are valid.
But I do think established individuals and institutions should have free access; leave a choice between going through an identification process and paying the fee. That's if it's such a big problem that you REALLY need to do something; otherwise, just keep marking reports as spam.
If you charge a fee, the motivation for good-samaritan reports goes to zero.
You are also adding more incentive to go directly to the black market to sell the vulnerability.
Also, I've heard of many cases where a company refused to pay the bounty for one reason or another.
And taxes: how would you tax it internationally? Sales tax? VAT?
Why charge a fee? All you need is a reputation system where low-reputation bounty hunters need a reputable person to vouch for them. If the report turns out to be false, both take a hit. If it's valid, the voucher gets co-author credit and a share of the bounty.
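A minimal sketch of that vouching scheme; the reputation threshold, the penalties, and the bounty split below are all made-up numbers, not a real program's rules:

```python
class VouchLedger:
    THRESHOLD = 5        # reputation needed to submit unvouched (assumed)
    VOUCHER_SHARE = 0.2  # co-author's cut of the bounty (assumed)

    def __init__(self):
        self.rep = {}

    def reputation(self, who):
        return self.rep.get(who, 0)

    def can_submit(self, hunter, voucher=None):
        """Low-reputation hunters need a reputable voucher to submit."""
        if self.reputation(hunter) >= self.THRESHOLD:
            return True
        return voucher is not None and self.reputation(voucher) >= self.THRESHOLD

    def resolve(self, hunter, voucher, valid, bounty):
        """Settle a report: both parties gain reputation on a valid one,
        both take a hit on a false one. Returns the payout split."""
        if valid:
            self.rep[hunter] = self.reputation(hunter) + 1
            if voucher is None:
                return {hunter: bounty}
            self.rep[voucher] = self.reputation(voucher) + 1
            return {hunter: bounty * (1 - self.VOUCHER_SHARE),
                    voucher: bounty * self.VOUCHER_SHARE}
        self.rep[hunter] = self.reputation(hunter) - 2
        if voucher is not None:
            self.rep[voucher] = self.reputation(voucher) - 2
        return {}

ledger = VouchLedger()
ledger.rep["veteran"] = 10
print(ledger.can_submit("newbie"))                      # False
print(ledger.can_submit("newbie", voucher="veteran"))   # True
payout = ledger.resolve("newbie", "veteran", valid=True, bounty=50_000)
print(payout)
```

The interesting property is that spam becomes expensive in reputation rather than in cash, so it doesn't price out broke-but-honest reporters the way a fee does.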
gentle reminder that the median salary of a programmer in japan is 60k USD a year. 500 USD is a lot of money (i would not be able to afford it personally).
i suspect 1 USD would do the job perfectly fine without cutting out normal non-american people.
Could also be made refundable when the bug report is found to be valid. Although of course the problem then becomes some kid somewhere who is into computers and hacking finds something but can't easily report it because the barrier to entry is too high. I don't think there is a good solution, unfortunately.
> I believe that one day there will be great code examining security tools.
Based on the current state, what makes you think this is a given?
The improvement history of tools besides LLMs, I suspect. First we had syntax highlighting, and we were amazed. Now we have fuzzers and sandboxed malware analysis; who knows what the future will bring?
> They cannot tell the difference between truth and garbage.
I honestly think that in this context, they don't care - they put in essentially zero effort on the minuscule chance that you'll pay out something.
It's the same reason we have spam. The return rates are near zero, but so is the effort.