Comment by Selkirk
8 days ago
I have a colleague that recently self-published a book. I can easily tell which parts were LLM driven and which parts represent his own voice. Just like you can tell who's in the next stall in the bathroom at work after hearing just a grunt and a fart. And THAT is a sentence an LLM would not write.
> And THAT is a sentence an LLM would not write.
Really?
Here are some alternatives. Some are clunky, but some aren't.
…just like you can tell whose pubes those are on the shared bar of soap without launching a formal investigation.
…just like you can tell who just wanked in the shared bathroom by the specific guilt radiating off them when they finally emerge.
…just like you can tell which of your mates just shitted at the pub by who's suddenly walking like they're auditioning for a period drama.
…just like you can tell which coworker just had a wank on their lunch break by the post-nut serenity that no amount of hand-washing can disguise.
…just like you can tell whose sneeze left that slug trail on the conference room table by the specific way they're not making eye contact with it.
…just like you can identify which flatmate's cum sock you've accidentally stepped on by the vintage of the crunch.
…just like you can tell who just crop-dusted the elevator by the studied intensity with which one person is suddenly reading the inspection certificate.
IMO the LLM you're using has failed to mimic the tone of OP's bathroom joke.
These alternatives are uncomfortably crude. They largely make gross reference to excretory acts or human waste. The original comment was off-color, but it didn't go beyond a vague discussion of a shared human experience.
One shouldn’t expect the ‘joke’ to have identical tone. (As if that’s even measurable.)
The point was simply that these examples are not trending toward the average or 'ablating' things, as the article puts it. They seem fairly creative, some are funny, all are gross… and they are the result of a very brief prompt. You can 'sculpt' the output in ways that go well beyond the boring crap you typically find in AI-generated slop.
It's still on you to pick from what the LLMs regurgitate. If you don't have a style or taste, you'll simply make choices that give you away. And if you already have your own taste and style, LLMs don't have much to offer in this regard.
Indeed. Wholeheartedly agree.
Just as it’s on you to pick the word you want when using Roget’s Thesaurus.
My workflow when using it for writing is different from when I'm coding.
When coding, I want an answer that works and is robust.
When writing, I want options.
You pick and choose, run it through again, perhaps use different models, have one agent critique the output of another agent, etc.
This iterative process is much different from asking an LLM to ‘write an article about [insert topic]’ and hoping for the best.
In any case, I’ve found that LLMs, when properly used, greatly benefit prose, and knee-jerk comments about how all LLM prose sounds the same are a bit outdated. (Understandable, as few authors out there are admitting they use AI; there’s a stigma about it. But trust me, there are some beautiful, soulful pieces of prose out there that came out of a properly used LLM… it’s just that the authors aren’t about to admit it.)