
Comment by emushack

6 months ago

Reputation systems for this kind of thing sound like rubbing anti-itch cream on a bullet wound. The problem seems to me to be behavior, not a technology issue.

Personally I can't imagine how miserable it would be for my hard-earned expertise to be relegated to sifting through SLOP where maybe 1 in hundreds or even thousands of inquiries is worth any time at all. But it also doesn't seem prudent to just ignore them.

I don't think better ML/AI technology or better information systems will make a significant difference on this issue. It's fundamentally about trust in people.

I consider myself a left-leaning soyboy, but this could be the outcome of too "nice" a discourse. I won't advocate for toxicity, but I am wondering whether we bolster the self-image of idiots when we refuse to call them idiots. Because you're right, this is fundamentally a people problem; specifically, we need people to filter this themselves.

I don't know where to draw the line, though.

  • I'm now imagining old-Linus responding to an AI slop bug report on lkml...

    • Shame is demotivating for me. I would rather frame it as acting in the best interest of collective excellence within our trade. Imagine if plumbers or electricians were this cavalier? Houses would burn down. People in hospitals would die because the backup power generators' gas lines would fail. The security of curl is pretty high stakes. If the obnoxious behavior is simply for kicks, we're putting a LOT on the line.

> The problem seems to me to be behavior, not a technology issue.

To be honest, this has been a grimly satisfying outcome of the AI slop debacle. For decades, the general stance of tech has been, “there is no such thing as a behavioral/social problem, we can always fix it with smarter technology”, and AI is taking that opinion and drowning it in a bathtub. You can’t fix AI slop with technology because anything you do to detect it will be incorporated into better models until they evade your tests.

We now have no choice but to acknowledge the social element of these problems, although considering what a shitshow all of Silicon Valley’s efforts at social technology have been up to now, I’m not optimistic this acknowledgement will actually lead anywhere good.

  • You can’t fix AI slop with technology because anything you do to detect it will be incorporated into better models until they evade your tests.

    How is that a bad thing? At a certain point, it’s no longer AI slop!

    https://xkcd.com/810/

    • Polite slop is still slop.

      Most people use platforms like HN to engage in conversation with other people, not simply to assimilate information as efficiently as possible. That they are conversing with actual human beings has value to them, even when they do human things like express emotions and humor.

      Hacker News could be perfectly civil if it removed the human element entirely and had an AI post links and generate threads, avoiding common tropes and boilerplate and preferring technical and factual accuracy. Make the forum read-only. It would succeed at HN's goal of avoiding Eternal September and maximizing the signal-to-noise ratio (to the degree that's possible with AI), and the technical quality and information density of threads would, on average, be superior to anything HN currently hosts, but it would also undermine the goal of making the site worth a damn to nearly anyone.


I guess I'm confused by your position here.

> The problem seems to me to be behavior, not a technology issue.

Yes, it's a behavior issue, but that doesn't mean it can't be solved, or at least minimized, by technology, particularly when a technology is what's exacerbating the issue.

> It's fundamentally about trust in people.

Who is lacking trust in whom here?

  • Vulnerability reports are interesting from a trust point of view, because each party has a different financial incentive. You can't 100% trust the vendor to accurately assess the severity of an issue; in some cases they have a lot riding on downplaying it. The person reporting the bug is also likely looking for a bounty and reputational benefit, both of which are enhanced if the issue is considered high severity. So a user of the supposedly vulnerable program can't blindly trust either party.