Comment by pecheny
18 hours ago
The content is nice and insightful! But God I wish people stopped using LLMs to 'improve' their prose... Ironically, some day we might employ LLMs to re-humanize texts that had been already massacred.
I definitely found the thesis insightful. The actual content stopped feeling insightful to me in the “What uv drops” section, where the cut features were all listed as if they had equal weight, all in the same breathless LLM style.
The author’s blog was on HN a few days ago as well, for an article on SBOMs and Lockfiles. They’ve done a lot of work on the supply-chain security side and are clearly knowledgeable, and yet the blog post got similarly “fuzzified” by the LLM.
There are a handful of things in TFA that, while not outright false, are sloppy enough that I'd expect someone knowledgeable to know/explain better.
I didn't notice that - can you give some examples?
Editing the post to swap out five "it's X, not Y"s[1] is pretty disappointing. I wish people were clearer about disclosing LLM editing.
[1]: https://github.com/andrew/nesbitt.io/commit/0664881a524feac4...
I rescind my previous statement. Also, people have to stop putting everything on GitHub.
This is terrible. So disrespectful. It's baffling how someone can do this under their own name.
Interestingly, I didn’t catch this; I liked it for not looking LLM-written!
“Why this matters” being the final section is a guaranteed giveaway, among innumerable others.
I realized once I was in the "optimizations that don't need rust" section. Specifically, "This is concurrency, not language magic."
To me, unless it's egregious, it's worth being very careful to avoid false positives before saying something is LLM-aided. If it's clearly just slop, then okay, but I definitely think there's going to be a point where people flag well-written, straightforward posts as LLM-aided. (Or even the opposite, which already happens, where people purposely put errors in their prose to seem genuine.)
There is going to be a point where people have read so much slop that they will start regurgitating the same style without even realising it. Or we could already be at that point.
I have reached a point where any AI smell (of which this article has many) makes me want to exit immediately. It feels torturous to my reading sensibilities.
I blame fixed AI system prompts - they forcibly collapse all inputs into the same output space. Truly disappointing that OpenAI et al. have no desire to change this before everything on the internet sounds the same forever.
You're probably right about the latter point, but I do wonder how hard it'd be to mask the default "marketing copywriter" tone of the LLM by asking it to assume some other tone in your prompt.
As you said, reading this stuff is taxing. What's more, this is a daily occurrence by now. If there's a silver lining, it's that the LLM smells are so obvious at the moment; I can close the tab as soon as I notice one.
> do wonder how hard it'd be to mask the default "marketing copywriter" tone of the LLM by asking it to assume some other tone in your prompt.
Fairly easy, in my wife's experience. She repeatedly got accused of using ChatGPT in her original writing (she's not a native English speaker, and was taught to use many of the same idioms that LLMs use) until she started actually using ChatGPT with about two pages of instructions for tone to "humanize" her writing. The irony is staggering.
It’s pretty easy. I’ve written a fairly detailed guide to help Claude write in my tone of voice. It also coaxes it to avoid the obvious AI tells such as ‘It’s not X it’s Y’ sentences, American English and overuse of emojis and em dashes.
It’s really useful for taking my first drafts and cleaning them up ready for a final polish.
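For anyone curious, here's a rough sketch of what wiring a tone guide into a cleanup pass can look like, assuming the Anthropic Python SDK; the style-guide text, model name, and draft are made-up placeholders, not the actual setup described above:

    import anthropic

    # Hypothetical tone guide; a real one would run to pages, per the comments above.
    STYLE_GUIDE = """\
    Write in British English. Avoid 'It's not X, it's Y' constructions,
    emojis, and em dashes. Keep sentences short and concrete.
    """

    draft = "My first draft goes here..."

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        system=STYLE_GUIDE,  # the tone guide rides along as the system prompt
        messages=[{"role": "user", "content": f"Clean up this draft:\n\n{draft}"}],
    )

    print(message.content[0].text)

The "fixed system prompt" complaint upthread largely applies to the chat apps; over the API you can replace the system prompt wholesale.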
It’s definitely partially solved by extensive custom prompting, as evidenced by sibling comments. But that’s a lot of effort for normal users and not a panacea either. I’d rather AI companies introduce noise/randomness themselves to solve this at scale.
I also don't read AI slop. It's disrespectful to any reader.
> Ironically, some day we might employ LLMs to re-humanize texts
I heard high school and college students are doing this routinely so their papers don't get flagged as AI.
This is whether they used an LLM for the whole assignment or wrote it themselves; it has to pass through a "re-humanizing" LLM either way, just to avoid drama.