
Comment by gchamonlive

4 days ago

The HN comment section's new favourite sport: trying to guess whether an article was generated by an LLM. It's completely pointless. Why not focus on what's being said instead?

I thought the same thing. With the rate LLMs are improving, it's not going to be too much longer before no one can tell.

I also enjoy all the "vibes" people list as evidence, as though there were any rhyme or reason to what they're saying. Models change and adapt daily, so the "heading structure" or "numbered list" tells become outdated as you're typing them.

Because I find LLM-generated content very annoying to read. It's a slog, bloated, and the writer always has this cringey way of trying to connect with the audience.

I don't believe the story itself was made up by an LLM, but I'd argue that if you have an LLM write your story, it's no problem to have it add a TL;DR at the top so we can skip the slop.

[flagged]

  • > This is an LLM-generated article, for anyone who might wish to save the "15 min read" labelled at the top. Recounts an entirely plausible but possibly completely made up narrative of incompetent IT, and contains no real substance.

    Nothing in the original message refers to it being clickbait. The core complaint is the LLM-like tone and the lack of substance, which, ironically, you also just threw out there without references.

    > What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?

    It's alright as long as it's not based on faith or guesswork.

    • It is not based on guesswork. For whatever it's worth, I have gotten 7 LLM accounts banned from HN in the past week by accurately detecting them and reporting them to moderation[1]. Many of these accounts had dozens to 100 upvotes, some with posts voted to the top of their threads that escaped detection by others. I have not once misidentified and reported an account that was genuinely human. I am aware that other people have poorly-tuned heuristics and make false accusations, but it is possible to build the skill to detect LLM output reliably, and I have done so. In the end, it is up to you whether you believe me; I am simply trying to offer a warning for people who dislike reading generated material, nothing more.

      [1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.
