Comment by magnio

21 hours ago

Pity that HN's ability to detect sarcasm is as robust as that of a sentiment analysis model using keyword-matching.

The problem is more that it's an LLM-generated comment that's about 20x as long as it needed to be to get the point across.

  • It's not.

    Evidence shows otherwise: despite the "20x" length, many people actually missed the point.

    • Oh yeah, there is also a problem with people not noticing they're reading LLM output, AND with people missing sarcasm on here. Honestly, I'm OK with people missing sarcasm - I have plenty of places to go for sarcasm and wit, and it's kind of nice to have a place where most posts are sincere, even if that sets people up to miss it when a post actually is sarcastic.

      Which is also what makes it problematic that you're lying about your LLM use. I would honestly love to know your prompt, how much you put into the post, and how much you edited or iterated on the output. Pretending there was no LLM involved at all, though, is rather disappointing.

      Unfortunately, I think you might feel backed into a corner now that you've insisted otherwise, but it's a genuinely interesting situation that I wish you'd elaborate on.

That’s just the internet. Detecting sarcasm requires a lot of context external to the content of any text. In person, some of that is mitigated by intonation, facial expressions, etc. Typically it also requires that the reader is a native speaker of the language, or at least extremely proficient.
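
To make the keyword-matching comparison upthread concrete: a model like that only counts polarity words and has no access to any of the external context mentioned above, so a sarcastic sentence packed with positive vocabulary scores as positive. A minimal, purely illustrative Python sketch (the word lists and function name are made up for this example, not taken from any real library):

    # Hypothetical keyword-matching "sentiment model": count positive vs.
    # negative words, ignore all context. This is the failure mode being
    # joked about, not anyone's actual system.
    POSITIVE = {"great", "love", "wonderful", "amazing", "robust"}
    NEGATIVE = {"bad", "hate", "awful", "terrible", "broken"}

    def keyword_sentiment(text: str) -> str:
        """Label text by counting positive vs. negative keywords."""
        words = [w.strip(".,!?").lower() for w in text.split()]
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    # A sarcastic sentence full of positive keywords gets labeled "positive".
    print(keyword_sentiment("Oh great, another wonderful outage. I love this."))  # -> positive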