If you spend a lot of time among some folks or talk a lot to LLMs, you are guaranteed to pick up manners, manners of speech, ways of thinking, behaviors (where applicable) ...
My brother and I just can't stop mixing English and German when we talk to each other. But we don't, or barely do, when we talk to others.
When I learned about code, logic, and math, I started talking and thinking in different ways, from and toward different perspectives.
The more I read (which I haven't done in a long, long while), the denser the info I pack into a few sentences becomes.
The more I draw, play the guitar, or work on game mechanics, story design, or dialogue, the more annoying my speech and manners become; to the people around me, though, that also means I become "more" social and actually somewhat likeable and bearable.
You smell like non-evidence-based arrogance. I was always surrounded by people who smelled like that. But they are good little copypasta soldiers who follow trends and mutually assure that they don't go completely off the rails. But if one does, they leave him on his real or imaginary battlefield. Nobody wants to evolve anymore. It hurts some people just a little too much, I guess. They'd rather poison others and have their code deleted before getting to live a second life. I hope I could get you on edge a little. I'm just fucking around. But you will probably think something along the lines of ... "there's always some truth to it when ..."
The OP just writes well. Also, an LLM is unlikely to write "thru".
It does not, at all. Forming that judgment because of “Enter X” is ridiculous. I recognize my friend Claude in disguise all the time on HN and this is not one of those cases.
See, I'm all for calling out LLM spam, but because of people like you, who have terrible calibration and make false accusations against obviously human-written messages, I get all manner of people criticizing me for pointing out things I know for sure are actually LLM-generated. You really think "Enter citalopram", the single instance you point out, weighs this more toward LLM-generated than "thru" and "reheated cat shit", along with the entire tone of the message, weigh it toward human-written? Your heuristics are wildly miscalibrated.
Your sense of smell is not something to write home about.
Yes, only I wrote it while taking a shit. Sorry, man.
A kinda strained endeavor, I must say. It's not bad, but the SSRI side effects obviously make continuous hydration important. A small price to pay.
How would you prove that?
They don't, hence the suspicion instead of a definite assertion. And suspicions are easy, because what are the consequences if it's false? None.
Anyway your comment smells AI generated, I can tell from some of the pixels and seeing quite a few shoops in my time.
I don't need to "prove it", because all I have to do is link this:
https://arxiv.org/abs/2409.01754
https://arxiv.org/abs/2508.01491
https://aclanthology.org/2025.acl-short.47/
https://arxiv.org/abs/2506.06166
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
https://osf.io/preprints/psyarxiv/wzveh_v1
https://arxiv.org/abs/2506.08872
https://aclanthology.org/2025.findings-acl.987/
https://aclanthology.org/2025.coling-main.426/
https://aclanthology.org/2025.iwsds-1.37/
https://www.medrxiv.org/content/10.1101/2024.05.14.24307373v...
https://journals.sagepub.com/doi/full/10.1177/21522715251379...
https://arxiv.org/abs/2506.21817
Either they used an LLM to write part of it, or the linguistic mind virus infected them and now they speak a little bit like an LLM.
Relevant excerpt from your own wiki guideline:
"Do not rely too much on your own judgment. [...] if you are an expert user of LLMs and you tag 10 pages as being AI-generated, you've probably falsely accused one editor."
Never accuse people of LLM writing based on short comments, your false positive rate is invariably going to be way too high to be acceptable given the very limited material.
It's just not worth it: Even if you correctly accuse 9/10 times, you are being toxic to that false positive case for basically no gain.
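The base-rate argument in the quoted guideline and the comments above can be made concrete with a quick Bayes calculation. The numbers below (share of short comments that are LLM-written, detection rate, false-positive rate) are illustrative assumptions, not measurements:

```python
# Illustrative Bayes calculation for the false-accusation argument above.
# All numbers are assumptions for the sake of the example, not measurements.

def posterior_llm(prior, sensitivity, false_positive_rate):
    """P(comment is LLM-written | your gut flags it), by Bayes' rule."""
    p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flag

# Suppose 5% of short comments are LLM-written (prior), your heuristics
# catch 90% of real LLM text (sensitivity) but also flag 10% of human
# text (false-positive rate). Then most of your accusations are wrong:
p = posterior_llm(prior=0.05, sensitivity=0.90, false_positive_rate=0.10)
print(f"P(LLM | flagged) = {p:.2f}")  # roughly 0.32
```

Even with a flattering 90% hit rate, a low base rate means the majority of flagged comments are human, which is the "9/10 isn't good enough" point in plainer arithmetic.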