Comment by sentientslug

1 year ago

Yes, it’s obvious AI writing. The fact that some people can’t tell is actually scary. Eventually (soon?) none of us will be able to tell.

> The fact that some people can’t tell is actually scary.

It really is, and I see more and more of it in Reddit comments, and even at work.

I had some obvious AI writing sent to me by a lawyer on the other side of a dispute recently and I was pissed - I don't mind if you want to use it to help you (I do myself), but at least have the decency to edit it so it doesn't read like ChatGPT trash.

  • > It really is, and I see more and more of it in Reddit comments, and even at work.

    I have a morbid fascination with how bad Reddit has become. LLMs have supercharged the problem, but even before ChatGPT became popular Reddit was full of ragebait, reposts, lies, and misinformation.

    The scary and fascinating thing to me is that so many people eat that content right up. You can drop into the front page (default subreddits or logged out) and anyone with a basic adult-level understanding of the world can pick out obvious lies and deliberate misinformation in many of the posts. Yet thousands of people in the comments are getting angry over obviously fabricated or reposted AITA stories, clear ragebait in /r/FluentInFinance, and numerous other examples. A lot of people love that content and can't seem to get enough of it.

If you're below average, AI writing looks great. If you're above average, it looks horrible. That goes not just for writing but for anything else created by AI: it's the average of its training data, which is itself going to be average in quality.

  • I didn't notice that this was AI myself. I tend to start skimming when the interesting bits are spread out.

    There are two variations of this that are very common:

    * Watering down - the interesting details are spread apart by lots of flowery language, lots of backstory, and rehashing of already established points. It's a way of spreading a cup of content into a gallon of text, the same way a cup of oatmeal can be thinned.

    * High fiber - Lots of long-form essays are like this. They start with describing the person being interviewed or the place visited as though the article were a novel and the author is paid by the word. Every person has some backstory that takes a few paragraphs. There is some philosophizing at some point. The essay is effectively the story of how the essay was written and all the backstory interviews rather than a treatise on the supposed topic. It's basically loading up your beef stew with cabbage; it is nutritive but not particularly dense or enjoyable.

    Both are pretty tedious. AI can produce either one, but it can only hallucinate or fluff to produce more content than its inputs. As such, AI writing is a bit like a reverse-compression algorithm.

It won't be long before you'll have people who learn English with ChatGPT and then it'll get even more confusing.

  • I have posted about this before on HN. I push back against this sentiment. I had a post-doc roommate who was a native German speaker, who regularly used ChatGPT to improve his English grammar and phrasing while writing papers. He told me that ChatGPT was an excellent tutor to improve his English. A few times, he showed me the improvements. I agreed: It was pretty good.

  • This is certainly already happening because TikTok and YouTube are packed with AI content

More likely it'll be normalised until we all start to think of it as normal and start to write like that ourselves.

  • I doubt it, because it is a style that people who are bad at writing already use. Like, our magical robot overlords did not make it up wholesale; plenty of examples of that particular sort of stylistic suck were already out there.

    (I am semi-convinced that the only job that’ll really be impacted by LLMs is estate agent copywriters, because estate agents already love that awful style.)

  • I can never be 100% sure whether it is AI writing or someone who cheated on their English homework using AI and thinks normal people write like that.

    • It's never the latter until the current crop of high school students graduate. Most students couldn't have used it before 2022; it didn't exist.

What's worse is that this obvious AI writing is going to become part of new AI training datasets as it gets scraped, so we'll end up with some kind of ouroboros of AI slop.