Comment by jascha_eng

14 hours ago

Note I might be wrong on this one but it's just extremely annoying that I even have to consider if I am being manipulated by an AI while reading HN comments.

If I want to read AI stuff, I'll go to Clawdbook or OpenAI's Sora app.

Sure, and we've banned the account, but please email us about these at hn@ycombinator.com. @mentions don't work on HN; I only saw it because I was looking through the thread. We're also asking people not to make these accusations publicly, partly because they take longer for us to see than an email, and also because a false accusation is more harmful than a valid accusation is beneficial.

  • Okay, fair about the mentions, but I don't think email is a good process:

    1. It puts more effort on me as a user to report the spam via email: I have to open my email client, compose a message by hand, and write up my reasoning. The offending user, by comparison, is probably spamming automatically. Can't we have a button at least?

    2. It doesn't make the community aware of the ongoing issue. Other community members could be alerted that, for now, they need to read comments more critically. At the moment that seems like the only detection that somewhat works, but if I silently send an email instead of commenting here, nobody else learns of my suspicion.

    • It’s fine to just flag things and move on. We’re considering adding additional parameters to the flag function, but until then, emailing us with “LLM?” in the subject and the comment ID/URL in the body is great. It should be faster for you than writing a comment, and faster for us to act on.

      The community is well aware of the issue and off-topic meta discussion has always been against the guidelines here. We’ve discussed this publicly and privately with top HN contributors and the consensus is that this is the least-worst approach.