
Comment by vessenes

6 months ago

Reading the straw-that-broke-the-camel's-back report illustrates the problem really well: https://hackerone.com/reports/3125832. This shit must be infuriating to dig through.

I wonder if reputation systems might work here - you could give anyone who IDs with an AML/KYC provider some reputation, enough for two or three reports; let people earn reputation by digging through zero-rep submissions; and give someone like 10,000 reputation for each accurate vulnerability found, and 100s for any accurately promoted vulnerabilities. This would let people interact anonymously if they want to, report quickly if they found something important and are willing to do AML/KYC, and privilege quality people (rough sketch of the point flow below).

Either way, AI is definitely changing the economics of this stuff, in this case enshittifying first.
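
A minimal sketch of that point flow, purely illustrative: the 10,000 / 100 / "two or three reports" figures come from the comment above, while the names, the per-report cost, and the gating logic are assumptions, not anything HackerOne actually implements.

```python
# Illustrative only: reward values are taken from the comment above;
# everything else (names, costs, gating) is assumed.

KYC_STARTING_REP = 30        # "enough for two or three reports"
REPORT_COST = 10             # reputation consumed per direct report
FOUND_VULN_REWARD = 10_000   # accurate vulnerability you reported yourself
PROMOTION_REWARD = 100       # accurately promoting a zero-rep submission


class Account:
    def __init__(self, kyc_verified: bool = False):
        # Anonymous accounts start at zero; AML/KYC-verified accounts get a small float.
        self.reputation = KYC_STARTING_REP if kyc_verified else 0

    def submit_report(self) -> None:
        # Direct reports are gated on reputation; zero-rep users would instead
        # land in a triage queue that higher-rep users can promote from.
        if self.reputation < REPORT_COST:
            raise PermissionError("no reputation: report goes to the triage queue")
        self.reputation -= REPORT_COST

    def credit_confirmed_vulnerability(self) -> None:
        self.reputation += FOUND_VULN_REWARD

    def credit_accurate_promotion(self) -> None:
        # Earned by digging through zero-rep submissions and promoting real ones.
        self.reputation += PROMOTION_REWARD
```

The key property is that anonymity stays possible (you can triage your way up from zero), while identified reporters can skip straight to filing.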

There is a reputation system already. According to HackerOne's reputation system, this is a credible reporter. It's really bad.

  • The vast majority of developers are 10-100x more likely to find a security hole in a random tool than to spend time improving their reputation on a bug bounty site that pays < 10% of their salary.

    That makes it extremely hard to build a reputation system for a site like that. Almost all the accounts are going to be spam, and the highest-quality accounts are going to be freshly created and take ~1 action on the platform.

Or a deposit system: pay 2€ for a human to read this message; you'll get it back if it's not spam.

What if the human marks it as spam but you're actually legit? Deposit another 2€ to have the platform (HackerOne or whichever you're reporting via) give a second opinion; you'll get the 4€ back if you weren't spamming. What to do with the proceeds from spammers? The first X euros of forfeited deposits go to upkeep of the platform, and the rest to a good cause chosen by the projects the reports were submitted to, since they were the ones who had to read the slop and should get at least that much out of it (rough sketch of the flow below).

Raise the deposit cost for as long as the slop volume remains unmanageable.

This doesn't discriminate against people who aren't already established, but it may be a problem if you live in a low-income country and can't easily afford 20€ (assuming it ever gets to that deposit level). Perhaps it wouldn't work, but it could first be trialed at a normal cost level. Another concern is anonymity and payment: we hackers are often a paranoid lot. One can always support cash in the mail, though; the sender can choose whether their privacy is worth a postage stamp.
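
A rough sketch of that deposit flow under assumed numbers: the 2€ base deposit comes from the thread; the upkeep cap and all names are placeholders for illustration.

```python
# Sketch of the deposit/appeal flow proposed above. The 2€ figure is from
# the thread; the upkeep cap and all identifiers are invented.

BASE_DEPOSIT = 2.0     # euros paid to have a human read the report
UPKEEP_CAP = 1_000.0   # first X euros of forfeited deposits fund the platform


class DepositEscrow:
    def __init__(self) -> None:
        self.platform_upkeep = 0.0
        self.good_cause_fund = 0.0

    def _forfeit(self, amount: float) -> None:
        # First X euros go to platform upkeep, the remainder to a good cause
        # chosen by the projects that had to read the slop.
        to_upkeep = min(amount, max(0.0, UPKEEP_CAP - self.platform_upkeep))
        self.platform_upkeep += to_upkeep
        self.good_cause_fund += amount - to_upkeep

    def settle(self, marked_spam: bool, appealed: bool = False,
               appeal_upheld: bool = False) -> float:
        """Return how many euros the reporter gets back."""
        at_stake = BASE_DEPOSIT + (BASE_DEPOSIT if appealed else 0.0)
        if not marked_spam:
            return BASE_DEPOSIT            # legit report: deposit refunded
        if appealed and appeal_upheld:
            return at_stake                # second opinion overturned it: 4€ back
        self._forfeit(at_stake)            # spam (or failed appeal): forfeit
        return 0.0
```

Raising the deposit while slop volume stays unmanageable is then just a matter of scaling BASE_DEPOSIT.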

Reputation systems for this kind of thing sound like rubbing anti-itch cream on a bullet wound. I feel like the problem is behavior, not a technology issue.

Personally I can't imagine how miserable it would be for my hard-earned expertise to be relegated to sifting through SLOP where maybe 1 in hundreds or even thousands of inquiries is worth any time at all. But it also doesn't seem prudent to just ignore them.

I don't think better ML/AI technology or better information systems will make a significant difference on this issue. It's fundamentally about trust in people.

  • I consider myself a left-leaning soyboy, but this could be the outcome of too "nice" a discourse. I won't advocate for toxicity, but I do wonder whether we bolster the self-image of idiots when we refuse to call them idiots. Because you're right: this is fundamentally a people problem; specifically, we need people to filter this themselves.

    I don't know where the line should be drawn, though.

  • > I feel like the problem is behavior, not a technology issue.

    To be honest, this has been a grimly satisfying outcome of the AI slop debacle. For decades, the general stance of tech has been, “there is no such thing as a behavioral/social problem, we can always fix it with smarter technology”, and AI is taking that opinion and drowning it in a bathtub. You can’t fix AI slop with technology because anything you do to detect it will be incorporated into better models until they evade your tests.

    We now have no choice but to acknowledge the social element of these problems, although considering what a shitshow all of Silicon Valley’s efforts at social technology have been up to now, I’m not optimistic this acknowledgement will actually lead anywhere good.

  • I guess I'm confused by your position here.

    > I feel like the problem is behavior, not a technology issue.

    Yes, it's a behavior issue, but that doesn't mean it can't be solved, or at least minimized, by technology, particularly since technology is what's exacerbating the issue?

    > It's fundamentally about trust in people.

    Who is lacking trust in who here?

    • Vulnerability reports are interesting from a trust point of view, because each party has a different financial incentive. You can't 100% trust the vendor to accurately assess the severity of an issue - they have a lot riding on downplaying an issue in some cases. The person reporting the bug is also likely looking for a bounty and reputational benefit, both of which are enhanced if the issue is considered high severity. So a user of the supposedly vulnerable program can't blindly trust either party.

IMO, this AI crap is just the next step on the "let's block criminal behavior with engineering" path we've followed for decades. It might very well be the last straw, as it's very unlikely we can block this one efficiently and reliably.

It's about time we ramped up our justice systems to hold people truly responsible for, and punish them for, their bad behavior online, including all kinds of spam, scams, phishing, and disinformation.

That might involve the end of anonymity on the internet, and lately I feel that the downsides of that are getting smaller and smaller compared to its upsides.