Comment by mananaysiempre

4 hours ago

> Then the trend quietly died, as trends do. Not because anyone decided carousels were bad. Just because something newer came along to copy.

> [...]

> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.

> [...]

> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.

> [...]

> No pop-ups. No blinking corners. Just content, clear and immediate.

It’s been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there’s this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.

And, I mean, none of these is even bad in isolation, but it sure feels like we’re due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.

Have courage and trust your own instincts. Unless one is extremely disagreeable, it's very tempting to hedge and avoid outright saying "this is AI" just in case you're wrong, but if you're literate and regularly exposed to AI outputs, your instincts are likely quite accurate.

In this particular case the linked article is definitely AI generated.

  • OTOH I’ve had blog posts I wrote two decades ago vehemently called out as AI generated. “AI-generated style” unfortunately means writing that tested well in human A/B testing and is now over-represented in a style used largely by AI.

    So if you write in a way that engages the reader, you’re going to struggle not to use em dashes and the occasional a/b contrast, because those challenge the reader to engage… but when overused, they not only lose their intended effect (to break the reader out of passivity), they also constitute a new kind of sin.

    So no, don’t “trust your gut”. Trust the math. Is it too much? Or is it just trying to jar you out of not engaging with the prose?

    But yeah, I’d say this article is likely written primarily with AI. Which doesn’t mean it’s not guided with intention and potentially important, it just means the article was probably commissioned and edited by a human, not written by one.

  • Indeed, consider the two posts linked below, also from this blog. They look the same and maintain the same impersonal writing style. There's no humanity to it at all.

    They maintain such a consistent paragraph length that the author is either a professional copyeditor or, as is clearly the case, an LLM.

    Humans deviate a lot more than this: they use run-on sentences or lose the thread of their writing.

    This blog, however, reads like every other post on LinkedIn: semi-professional tone, with a strong "You, Me" hook to most posts.

    I encourage everyone to generate an LLM-written blog for themselves, without posting the articles anywhere, just to get a feeling for how these things write.

    Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.

    Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.

    https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...

    https://adele.pages.casa/md/blog/finding_flow_in_code.md

  • I started off hedging but by the end of the comment came to think that AI use—or lack thereof—was actually beside the point. I have feelings with regard to the situation, where “the situation” includes some largely irrelevant-to-writing things like the mainframization, and the “feelings” are not nearly coherent enough to graduate to thoughts. Thus (unlike some others) I don’t think that calling out writers or warning readers about AI is all that useful (or for that matter courageous). With respect to writers who use AI due to a lack of confidence, it’s probably even harmful. (Saying that as a person who manages to absolutely suck in embarrassing ways in multiple foreign languages. And also in English, but less obviously. And likely in my native language too, due to lack of use.) Meanwhile, TFA makes a decent point, and I am in no position to criticize people for being wordy.

    The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.

LLMs don't "own" this writing style. By definition they can't - they were trained on human writing after all! People wrote like this before and that's fine. You might not like the style, but saying it's because LLM writing has infested their brain is wrong, dismissive and dehumanising.

  • Any style can cross the border into bad and get in the way of itself when it's turned up to 11, no matter who wrote it.

    There have been stylistic fads before LLMs, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.

    Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.

    • Yes, definitely, but the parent post was quite explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content.

      Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.

      Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.


  • Only to a limited extent: the fine-tuning of these models uses a much smaller, more curated set to establish tone and defaults.

    The whole corpus is in there, but the standard style is what gets tuned for.

  • I wonder how much marketing copy has poisoned the "default" writing style of LLMs, it surely has those undertones of pitching a sale in an uncanny valley way.

  • So I will say that the things I read were not written in this style.

    And the people I read were better at not inserting unnecessary, completely made-up facts or illogical implications.

  • LLMs don’t own these expressions in the same sense that McDonald’s doesn’t own salt: they are undoubtedly making use of a strong reaction that humans have had—have been having—long before; but they did develop a way to mash that button on an industrial scale like few before them. (With of course a great deal of help from humans, be it via customer surveys or RLHF; or you could call it help from Moloch[1] in that the humans unwittingly or negligently assembled themselves into a runaway optimizer.) So I think it’s fair to say that LLMs do own this style, as in the balance of ingredients, even if they do not own the ingredients themselves. And anyway nothing in the social perception of language cares about fairness: low-class English speakers did not invent negative agreement (“double negatives”), yet it will still sound low-class to you and even me (and my native language requires negative agreement).

    As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.

    For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)

    As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.

    Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.

    (Not that I claim to be a particularly good writer.)

    [1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
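    The grep-every-linking-word self-audit mentioned above can be sketched as a short script. This is just an illustrative sketch: the word list is an assumed example, not anyone's canonical set of connectives.

    ```python
    import re
    from collections import Counter

    # Hypothetical list of linking words to audit; adjust to taste.
    LINKING_WORDS = ["thus", "however", "moreover", "indeed", "meanwhile"]

    def linking_word_counts(text: str) -> Counter:
        """Count whole-word, case-insensitive occurrences of each linking
        word, so overused connectives stand out."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        return Counter({w: counts[w] for w in LINKING_WORDS if counts[w]})

    sample = "Thus it began. Thus it continued. However, thus it ended."
    print(linking_word_counts(sample).most_common())
    # → [('thus', 3), ('however', 1)]
    ```

    Running this over a draft makes the "five times in a row" failure mode visible at a glance, which is the whole point of the check.
    
    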

> “…there’s this record-scratch feeling…”

The OP is a blog post. You’re talking about blog-post writing. Maybe you just don’t like their style?

It’s also true that LLM second drafts are a thing.

And it’s true both can ‘record scratch’ you right out of attention.

As is the now-present trend among readers to be impatient and quickly bored.

And this criticism of writing style (my take: this article is perfectly readable)—what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don’t like the soup.

None of that feels like AI smell to me despite the "it's not X it's Y" framing. I can't really explain why though.

None of those four look like AI slop to me. They lack the strange non-sequitur quality these contrasting statements generally have when made by AI. The version of the third example I would expect from a clanker would be more like:

> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine

Which of course doesn't connect to the rest of the article's contents, because the AI has no intention behind its writing.