Comment by arrsingh
3 days ago
There should be a "flag as AI" link in addition to "flag", plus a setting that lets people see content flagged as AI. Once the "flagged as AI" count reaches a certain threshold, the post disappears unless you enable "Show AI".
Maybe once enough posts have been flagged like that, the corpus could be used to train an AI to automatically detect AI-generated content.
That would be cool.
Maybe the HN site wouldn't add this feature, but if someone wrote a client, maybe it could be added there.
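A minimal sketch of the threshold rule suggested above. The function, parameter names, and threshold value are all hypothetical illustrations, not anything HN actually implements:

```python
def is_visible(ai_flag_count, viewer_shows_ai, threshold=5):
    """Hypothetical visibility rule: once a post's 'flagged as AI'
    count reaches the threshold, hide it unless the viewer has
    enabled 'Show AI'."""
    if ai_flag_count >= threshold:
        return viewer_shows_ai
    return True
```

Under this sketch, a post with 5 AI flags is hidden from default viewers but still shown to anyone who has turned "Show AI" on.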
We're going to add that. I've resisted adding reasons-for-flagging for years, but even I can change my mind every decade or so.
A nice side effect is that it will double as a confirmation step, solving the FFF (fat finger flagging) problem.
> We're going to add that. I've resisted adding reasons-for-flagging for years, but even I can change my mind every decade or so.
You need a reason that means "this person is talking about something helpful that an admin needs to fix." Flagging currently has a negative connotation (too many flags and the comment gets deleted), but sometimes you want to flag a comment that says something like "the link is broken and should be X" to just bring it to admin attention without the implied negative judgement.
Flag as AI would be incredible and is probably unique to software-focused forums. Saves everyone who wants it a lot of time. Still allows cool content to reach the front page with some visibility or escape some moderation queue.
Thanks for not standing still on this issue. The world is changing fast, and I'm glad HN responded with a cogent stance quicker than some forums.
> it will double as a confirmation step, solving the FFF (fat finger flagging) problem
Thank you!!!
Could there also be a toggle to skip/not show any AI-generated content? And all child branches?
That might take me another decade.
I'm joking, but we've always resisted partitioning HN. Here are a bunch of past explanations about that: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I do sort of like the idea (suggested by mthurman) that we let users prompt HN to be the kind of HN they want. That could be the ultimate dump of long-requested features (dark mode! tags! blocklists!)
Will there be a process or opportunity for mis-flagged comments' posters to prove their comment was human generated?
Or will they have to simply eat the karma hit and move on?
Anyone can email hn@ycombinator.com and ask us to take a look either way.
Do commenters even know whether their post was flagged as anything?
I mean, my comments may have been flagged, or I may even have been shadowbanned, but I never look at old comments to check.
As annoying as downvoting is, it's limited to -4.
My radical opinion is there shouldn't be 2 flags, there should be N flags, user defined, so that we can flag humor/satire/factuality/insight/political and a bunch of other things. I fully realize that's not going to fly any time soon.
Adding AI in addition to the standard up/downvote and flag seems a reasonable thing.
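The N-flags idea above could be sketched roughly like this. The class, threshold, and category names are hypothetical; this is not /.'s or HN's actual system:

```python
from collections import Counter

class Post:
    """Hypothetical post with user-defined flag categories."""
    def __init__(self, text):
        self.text = text
        self.flags = Counter()  # category -> flag count

    def flag(self, category):
        self.flags[category] += 1

def visible_to(post, hidden_categories, threshold=3):
    """Hide the post from a user whose personal filter list includes
    any category that has reached the threshold."""
    return not any(post.flags[c] >= threshold for c in hidden_categories)
```

So a user filtering out "ai" would stop seeing posts heavily flagged as AI, while a user filtering out "political" would still see them.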
That sounds like /.'s moderation system. Not that I disagree; theme-based filtering could be fun, but it also encourages things like the meme threads you'd see on Reddit under the guise of "just filter out funny and let us have fun".
The issue with N-flagging is that every flag needs to be universally defined and equally applied.
If one person's humor is another person's satire is another person's political, then splitting it into N options muddles the signal.
Downvotes are bad enough between "I disagree with this" and "This isn't an appropriate comment for HN."
I think you're thinking of flair, like on Reddit; flag is more of a 'report spam' type feature.
I think the up/downvote system is good enough for that - good posts go up, bad posts go down, really bad posts that nobody should see and whose poster should get banned get flagged.
Flags are a signal to the moderation system. What does it mean to "flag" something as "factuality" or "satire"?
‘Flag’ is an algorithmic flag only, and there are no humans in the flag algorithm’s processing loop. The mods may monitor and react to the ‘queue’ of flagged articles, and they can do special mod things with flagged posts. But if you want to report a guidelines violation for AI-assisted writing to the mods, just email them (contact link in the footer) with a subject like “AI-assisted writing flag” and a link to the post/comment. It works, I know, I’ve done it before. It takes maybe 60 seconds, and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.
> It works, I know, I’ve done it before. It takes maybe 60 seconds and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.
It's a ton of friction compared to ordinary use of a forum; and while I've emailed several times myself, it comes with a sense of guilt (and a feeling that my "several" is probably approximately "several" above average).
Valid. It’s a big drawback of HN. I find it helps to report a perceived guidelines violation in “seems like” language rather than “is”, without demanding a specific mod outcome, in cases where I’m uncertain. That is noticeably distinct from “this is completely unacceptable” which I’ve said in a couple of instances, though I still tend to let the mods pick the outcome since that’s their job and I make a specific effort not to participate in sentencing decisions if at all possible.
ps. I acknowledge as well that I’m exempt from feeling guilt for brain reasons, and so if it sounds like I’m not honoring what I would describe as a ‘completely normal’ human response, apologies; I’m trying my best given the lack of familiarity and intend no disrespect towards that reaction.
It never occurred to me to try that until today, because I assumed I would get banned for doing it.
Nah, as long as you aren’t demanding and rude, you’ll either get a reply or not, and if you get a reply, it’ll either be “we’ll look into it”, “we looked into it and acted in some way”, or “we looked into it and decided it isn’t actionable”; often with some supporting explanation.
(I suppose if you open with e.g. “wtf is wrong with you mods” they might well ask you to reconsider your approach or else clock a ban — I’ve never tried that!)
I’ve actually been thinking about this exact idea for https://hcker.news/. Stay tuned, I’ve already started rolling out some comment filtering.
Oh, I didn't know about this. Very cool. Is hcker.news only on the web, or is there a mobile app as well?
No app right now but it works well as a PWA.