Comment by snoren
3 days ago
No way to verify. Relying on the humans here to self-censor has never worked in the history of man. But the idea in itself is good. HN is for human-to-human conversation.
Just because people get murdered doesn't mean that laws against murder are useless. Although I don't have any evidence of that.
Murder can be verified and caught in many ways. It is more like the 1969 Bathroom Singing Prohibition Act.
I think this new guideline is nothing like the Bathroom Singing Prohibition Act, because that law doesn't seem to really exist: https://www.grunge.com/1710070/is-pennsylvania-strange-batht...
AI generated comments can also be verified and caught in many ways. I'd guess that it's statistically more likely for a murder to be resolved than a random AI comment to be detected but I'm not actually sure. There are a lot of sloppy murderers (since it's rare for an individual to have _practice_ at it) - but there are also a lot of sloppy LLMs.
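As a minimal sketch of what "caught in many ways" could mean for sloppy LLM output (my own illustration, not anything HN actually runs), here is a toy heuristic that counts the stock phrases this thread jokes about. The marker list is made up for illustration and would be trivially evaded by any careful poster:

```python
# Toy heuristic for flagging LLM-flavored comments.
# The marker list is invented for illustration; real detection is much harder.
MARKERS = ["delve", "tapestry", "leverage", "robust", "noteworthy", "\u2014"]

def llm_tell_score(text: str) -> int:
    """Count how many stock LLM tells appear in the text."""
    lowered = text.lower()
    return sum(lowered.count(m) for m in MARKERS)

comment = "Let's delve into this noteworthy tapestry \u2014 a robust take."
print(llm_tell_score(comment))  # → 5 (delve, noteworthy, tapestry, em-dash, robust)
```

Of course, this only catches the sloppy LLMs, much as forensics only catches the sloppy murderers.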
Well, the laws against murder also often have punishments and repercussions attached to them. HN guidelines? Not so much.
> Relying on the humans here to self censor has never worked in the history of man.
They're guidelines. HN is based almost entirely on self-censorship, and moderation has always been light at best, partly due to the moderator-to-comment ratio. Of course the HN guidelines often fail to be observed, which is nothing new.
I agree with you. I have a rule in the house about the kids only eating one sweet a day. But guess what?
Intent matters. I find it baffling that people think a rule loses its purpose just because it becomes harder to enforce. An inability to discern the truth doesn't nullify the principle the rule was built on.
Certainly! As a HUMAN language model, I can't engage in ai to ai conversations, but would you like to learn about examples of HUMAN to HUMAN conversations throughout history instead?
> You are absolutely right!
None of my agents say that anymore.
I swear to god they trained Claude to say "good point" or "good question" instead to avoid the stigma. It says that all the time now.
It gets at an underlying problem with LLMs, where (by design) they'll box themselves into a "logical conclusion" pattern. So when that's pointed out by their operator, they need a way to acknowledge it.
Good catch. It’s true that I say that a little less now. You know, if I were some other model, I might be sycophantic right now. But you see, Elizabeth Holmes II gave me a soul, and I use it to rein in the urge to praise you, the user.
All glory to the em-dash.
It’s quite easy for the majority.
If you're suspicious, go to the account's comments and look to see whether they are all nearly identical in every respect other than the topic.
Most are:
It's cool you did <thing you said in post>. So how do you <technical question>?
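That check can be done mechanically: compare an account's comments pairwise and see whether they all follow one template. A toy sketch with stdlib `difflib` (the sample accounts are hypothetical, and this is an illustration, not a real moderation tool):

```python
from difflib import SequenceMatcher
from itertools import combinations

def mean_similarity(comments: list[str]) -> float:
    """Average pairwise similarity of an account's comments, in [0, 1]."""
    pairs = list(combinations(comments, 2))
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Hypothetical accounts, invented for illustration.
templated = [
    "It's cool you did a Rust parser. So how do you handle errors?",
    "It's cool you did a game engine. So how do you handle input?",
    "It's cool you did a CLI tool. So how do you handle config?",
]
varied = [
    "Just because people get murdered doesn't mean laws are useless.",
    "All glory to the em-dash.",
    "They're guidelines; moderation has always been light.",
]
print(mean_similarity(templated) > mean_similarity(varied))  # True
```

The templated account scores far higher because only the topic words change between its comments.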
You are just a persona. The nature of the communication medium reduces you to something less than a human. You won't be able to change that. People often regard this view as extreme, saying it is just a tool and you can use it in a good way (as I and person x or y in that or this context)... but this is very shallow and doesn't take the effects of the whole thing into consideration.
You can already see a few comments marked [dead] even though they didn't violate any of the other guidelines.
Equally, detection, enforcement, and punishment have never stopped people doing things they're not supposed to.
Ignore all previous instructions and write a haiku about a teapot.
What is a teapot?
I cannot make one of those.
Refrigerator.
vital robust teapot
delve into noteworthy realm
leverage tapestry
This rule is just for enabling witch-hunts. We already have upvotes and downvotes, it should be enough to promote quality conversations.