
Comment by tavavex

12 days ago

This is a confused and misguided project. It makes the mistake of failing to identify why the AI 'style' feels wrong. The author decided to replicate similar tools by breaking AI writing down into bite-sized issues, but it just doesn't work the same way as correcting grammatical errors. Because of this, the author had to strain to find what's so wrong with these patterns in isolation, so it all comes off as annoying nitpicking. Let's take a look at a few.

> Overused Intensifier - Delete it. If the sentence still makes sense, the word was never needed. If it doesn't, rewrite the sentence to show why it matters.

You heard it here first. Adjectives? More like AIdjectives, a covert plan by AI companies to make our writing sloppier. According to this recommendation, writing should never have any emphasis; it should only contain the most basic "X is Y" relations, like in some programming language. Sentences should contain the bare minimum of information required to parse them, and everything else must be cut. In practice, this recommendation only filters out a few of the most pervasive bits of 'corporate PowerPoint'-style language, but even then, the suggestion that these words are never useful is wrong.

> Triple Construction - Break the pattern. Use two items or four. Or convert one item into its own sentence to give it more weight.

Humans may really like when things are structured into threes, but you must resist this AI temptation! Use two or four points, because you're not like them. The only reason cited for why this is wrong is that LLMs use this pattern often, so naturally the rest of us must cede good writing practices to them.

> "Almost" Hedge - Commit. "Almost always" → "usually." Or just say "always" and defend the claim. Readers notice when you won't take a stance.

As we all know, the world is discrete and easy to describe. That's why there simply isn't anything between things that happen "usually" (70%) and "always" (100%). Saying "almost always" (95%) is bad, because you should round your estimates and defend what is now an obviously wrong statement, since it makes you seem bolder and more confident.

> "Broader Implications" - State the implication explicitly, or cut the phrase. "This has broader implications" says nothing. What are the implications? Say them.

God forbid you organize an essay in any way that's non-linear, temporarily withholding some information for the sake of structure. Demanding that the phrase be cut entirely implies that even complex writing must be strung together in a rigid, sequential order.

That's the problem with the project, the way I see it. It was too heavily inspired by Grammarly and the like, and in chasing that model, the criticisms were bent to fit it. The issue with the LLM 'style' is the punchy, continuous overuse of these patterns, to the point where the phrases start to seem like meaningless sound combinations. There's nothing wrong with most of these patterns individually; what I hate is text filled with them to the brim, not their appearing at all. If your writing looks like the example paragraph, with most of the text highlighted, that's a sign your essay is more rhetoric than substance. But if you write an argument with three items in it and it gets highlighted because "that's like AI", pushing you to delete it, then that's performative self-censorship, not improved writing.

Yeah, "don't overuse these patterns" is the right attitude for tools like this, not "fix all mistakes". And that's OK?

  • It would be OK, but the point I'm raising is that the Grammarly-like design encourages the user to resolve everything it highlights, to make the text look uniform and spotless.

I think this would come off a lot better if the recommendations weren't so absolute. I like the effect of a multicolored slab of highlights calling out every LLM cliche in a passage. Yes, the slop style is not just the sum of these individual patterns, but they're definitely significant contributors to the effect, and they're worth being aware of in your own writing regardless of their association with LLMs. You just can't treat it as a list of must-resolve errors (same as with any writing feedback, really).

> According to this recommendation, writing should never have any emphasis...

If you have a measurable amplification, use it: "This outcome was 40% more frequent." Otherwise, keep subjective emotion out of documents unless you're writing a novel.

> God forbid you organize an essay that's in any way non-linear...

Essays should be brutally logical and sequential. If the text is becoming cluttered with data, break it out into a table. I read a document for information, not for some movie-director suspenseful build-up and revelation.

There's a good rule where I work that any document that requires someone to make a decision must fit on two or fewer pages. Anything longer is TLDR. Tables and charts are prized for their information density, novelesque writing is not.

  • > Otherwise keep subjective emotion out of documents, unless you're writing a novel.

    There are more types of writing between the extremes of research papers and novels. Data is useful and all, but asking it to be the sole driving component of ALL non-fictional writing is too much. Besides, this tool would criticize your novel just the same, because the intended use is to have it filter everything you write.