Comment by pecheny

1 month ago

The content is nice and insightful! But God I wish people stopped using LLMs to 'improve' their prose... Ironically, some day we might employ LLMs to re-humanize texts that have already been massacred.

The author’s blog was on HN a few days ago as well, for an article on SBOMs and Lockfiles. They’ve done a lot of work on the supply-chain security side and are clearly knowledgeable, and yet that post got similarly “fuzzified” by the LLM.

I definitely found the thesis insightful, but the actual content stopped feeling that way in the “What uv drops” section, where the cut features were all listed as if they carried equal weight, in the same breathless LLM style.

  • I would be able to absorb your perspective better if it were structured as a bulleted list, with SUMMARY STRINGS IN BOLD for each bullet. And if you had used the word "Crucially" at least once.

I have reached a point where any AI smell (of which this article has many) makes me want to exit immediately. It feels torturous to my reading sensibilities.

I blame fixed AI system prompts: they forcibly collapse all inputs into the same output space. Truly disappointing that OpenAI et al. have no desire to change this before everything on the internet sounds the same forever.

  • You're probably right about the latter point, but I do wonder how hard it'd be to mask the default "marketing copywriter" tone of the LLM by asking it to assume some other tone in your prompt.

    As you said, reading this stuff is taxing. What's more, this is a daily occurrence by now. If there's a silver lining, it's that the LLM smells are so obvious at the moment; I can close the tab as soon as I notice one.

    • > do wonder how hard it'd be to mask the default "marketing copywriter" tone of the LLM by asking it to assume some other tone in your prompt.

      Fairly easy, in my wife's experience. She repeatedly got accused of using ChatGPT in her original writing (she's not a native English speaker, and was taught to use many of the same idioms that LLMs use) until she started actually using ChatGPT with about two pages of instructions for tone to "humanize" her writing. The irony is staggering.

    • It’s pretty easy. I’ve written a fairly detailed guide to help Claude write in my tone of voice. It also coaxes it to avoid the obvious AI tells, such as ‘It’s not X, it’s Y’ sentences, American English, and the overuse of emojis and em dashes.

      It’s really useful for taking my first drafts and cleaning them up ready for a final polish.


    • It’s definitely partially solved by extensive custom prompting, as evidenced by sibling comments. But that’s a lot of effort for normal users and not a panacea either. I’d rather AI companies introduce noise/randomness themselves to solve this at scale.


Editing the post just to swap out five “it's X, not Y” constructions[1] is pretty disappointing. I wish people were clearer about disclosing their LLM editing.

[1]: https://github.com/andrew/nesbitt.io/commit/0664881a524feac4...

  • You're supposed to also remove the fancy UTF-8 quotes that people can't normally type and the em dashes, and reorder sentences/clauses, because the paragraph-level "template" slop is really obvious to people who use these models all the time; see the sketch below for the punctuation part. (I'm also pretty sure the UTF-8 shenanigans in LLM responses were done very much on purpose by those with a vested interest in making mass surveillance of written communication easier.)

    Or, use the "deep research" mode for writing your prose instead. It's far less sloppy in how it writes.

    These people are amateurs at humanizing their writing.
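
    A minimal sketch of that punctuation pass, purely illustrative (the mapping is my own, not from any particular tool):

        # Map common "LLM tell" punctuation back to plain ASCII.
        TELLS = {
            0x201C: '"',  # left double quote
            0x201D: '"',  # right double quote
            0x2018: "'",  # left single quote
            0x2019: "'",  # right single quote
            0x2014: "-",  # em dash
            0x2013: "-",  # en dash
        }

        def de_fancy(text: str) -> str:
            return text.translate(TELLS)

        print(de_fancy("\u201cIt\u2019s not X \u2014 it\u2019s Y.\u201d"))
        # prints: "It's not X - it's Y."

    The sentence/clause reordering is the part no script can do for you.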

  • This is terrible. So disrespectful. It's baffling how someone can do this under their own name.

Unless it is egregious, I would be very careful to avoid false positives before saying something is LLM-aided. If it is clearly just slop, then okay, but I definitely think there is going to be a point where people flag well-written, straightforward posts as LLM-aided. (Or the opposite, which already happens: people purposely leave errors in their prose to seem genuine.)

  • > Unless it is egregious, I would be very careful to avoid false positives before saying something is LLM-aided. If it is clearly just slop

    Same. I'm actually more tired of this AI witch hunt.

> Ironically, some day we might employ LLMs to re-humanize texts

I heard high school and college students are doing this routinely so their papers don't get flagged as AI.

This applies whether they used an LLM for the whole assignment or wrote it themselves; it has to pass through a "re-humanizing" LLM either way, just to avoid drama.

We wrote the paper on how to de-slop your LLM outputs. If you use our factory de-slopped versions of gemma3 you don't have to worry about this; similarly, if you use our antislop sampler, your LLM outputs will look very close to human. The rough idea is sketched below.

https://arxiv.org/abs/2510.15061
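
The core mechanism, very roughly: watch the decoded text as it grows, and when a banned "slop" phrase completes, backtrack and resample with the offending token disallowed at that position. A toy Python sketch; the names, the banned list, and the stub model here are illustrative only, not our actual implementation:

    import random

    # Illustrative banned list; the real framework mines slop patterns from data.
    BANNED = ("delve", "tapestry", "crucially")

    def sample_next(step, text):
        """Append one token, backtracking if it completes a banned phrase."""
        masked = set()
        while True:
            tok = step(text, masked)
            if any((text + tok).lower().rstrip().endswith(p) for p in BANNED):
                masked.add(tok)  # disallow this token at this position, retry
                continue
            return text + tok

    # Stub "model": any callable that picks a next token while avoiding `masked`.
    VOCAB = ["delve", "explore", "into", "the", "details", "tapestry", "deeper"]

    def step(text, masked):
        return random.choice([w for w in VOCAB if w not in masked]) + " "

    text = "Let's "
    for _ in range(8):
        text = sample_next(step, text)
    print(text)  # the banned words never survive into the output

The real sampler operates on token probabilities and handles phrases that span token boundaries; this only shows the backtrack-on-match loop.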

There is going to be a point where people have read so much slop that they will start regurgitating the same style without even realising it. Or we could already be at that point.