Comment by barrkel
9 days ago
This is a good statement of what I suspect many of us have found when rejecting the rewriting advice of AIs. The "pointiness" of prose gets worn away, until it doesn't say much. Everything is softened. The distinctiveness of the human voice is converted into blandness. The AI even says its preferred rephrasing is "polished" - a term which specifically means the jaggedness has been removed.
But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.
I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.
But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.
> I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better.
Even poor writers write with character. My dad misspells every 4th word when he texts me, but it’s unmistakably his voice. Endearingly so.
I would push back with passion that AI writes “legitimately” better, as it has no character except the smoothed mean of all internet voices. The millennial gray of prose.
Oh god no, trust me, I'm an academic. I'd rather read an AI essay than the stuff some of my students write.
AI averages everything out, so there's no character left.
A similar thing happens when something is designed by committee. Good for the average use case, but not really great for anything specific.
> A lot of people aren't, and the AI legitimately writes better.
It may write “objectively better”, but the very distinct feel of all AI generated prose makes it immediately recognizable as artificial and unbearable as a result.
It depends how you define "good writing", which is too often associated with "proper language", and by extension with proper breeding. It is a class marker.
People have a distinct voice when they write, including (perhaps even especially) those without formal training in writing. That this voice is grating to the eyes of a well educated reader is a feature that says as much about the reader as it does about the writer.
Funnily enough, professional writers have long recognised this, as is shown by the never-ending list of authors who tried to capture certain linguistic styles in their work, particularly in American literature.
There are situations where you may want this class marker to be erased, because being associated with a certain social class can have negative impact on your social prospects. But it remains that something is being lost in the process, and that something is the personality and identity of the writer.
I find most people can write way better than AI, they simply don’t put in the effort.
Which is the real issue: we're flooding channels not designed for such low-effort submissions. AI slop is just SPAM in a different context.
You may be in a bubble of smart, educated people. Either way, one of the key ways to "put in the effort" is practice. People who haven't practiced often don't write well even if they're trying hard in the moment. Not even in terms of beautiful writing, just pure comprehensibility.
you cannot write well if you do not read a lot (you need to develop taste). this disqualifies most people, myself included.
My experience has been
(ordered from best to worst)
1. Author using AI well
2. Author not using AI
3. Author using AI poorly
With the gap between 1 and 2 being driven by the underlying quality of the writer and how well they use AI. A really good writer sees marginal improvements and a really poor one can see vast improvements.
I am really conflicted about this because yes, I think that an LLM can be an OK writing aid in utilitarian settings. It's probably not going to teach you to write better, but if the goal is just to communicate an idea, an LLM can usually help the average person express it more clearly.
But the critical point is that you need to stay in control. And a lot of people just delegate the entire process to an LLM: "here's a thought I had, write a blog post about it", "write a design doc for a system that does X", "write a book about how AI changed my life". And then they ship it and then outsource the process of making sense of the output and catching errors to others.
It also results in the creation of content that, frankly, shouldn't exist because it has no reason to exist. The amount of online content that doesn't say anything at all has absolutely exploded in the past 2-3 years, including a lot of LLM-generated think pieces about LLMs that grace the hallways of HN.
Even if they “stay in control and own the result”, it’s just tedious if all communication is in that same undifferentiated sanded-down language.
I think it’s essential to realize that AI is a tool for mainstream tasks like composing a standard email and not for the edges.
The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.
It’s the efficient popularization of the boring stuff. Not much else.
> The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.
I think that boring emails should not be written at all. What kind of boring email NEEDS to be written but you don't WANT to write? Those are exactly the kind of emails that SHOULD NOT be passed through an LLM.
If you need to say yes or no, you don't want to take the whole email conversation and let an LLM generate a story about why you said yes or no.
If you want to apply for leave, just keep it minimal: "Hi <X>, I want to take leave from Y to Z. Thanks." You don't want to create two pages of justification for why you want to take this leave to see your family and friends.
In fact, for every LLM output, I want to see the input instead. What did they have in mind? If I have the input, I can ask LLM to generate 1 million outputs if I really want to read an elaboration. The input is what matters.
If I have the input, I can always generate an output. If I only have the output, I don't know what the input was (i.e. the original intention).
when i pass my writing through ai, the output is generally only marginally bigger than the input, and it derisks things a lot, making my prose a nice beige.
It contributes to making “standard” emails boring. I rather enjoy reading emails in each sender’s original voice. People who can’t articulate well aren’t elevated, instead they are perceived to be sending bland slop if they use LLMs to conceal that they can’t express themselves well.
I think it is also fairly similar to the kind of discourse a manager in pretty much any domain will produce.
He lacks (or has lost through disuse) technical expertise on the subject, so he uses more and more fuzzy words, leaky analogies, and buzzwords.
This may be why AI-generated content has had so much success among leaders and politicians.
Every group wants to label some outgroup as naively benefiting from AI. For programmers, apparently it's the pointy-haired bosses. For normies, it's the programmers.
Be careful of this kind of thinking, it's very satisfying but doesn't help you understand the world.
> But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually gets your ideas into their heads.
This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.
It's an algorithmic trend toward the median: the LLM sands down your words until they're a smooth average of their approximate neighbors.
Mediocrity as a Service
artificial mediocrity
No, but it's bad writing. It repeats information, it adds superfluous stuff, and it doesn't produce more specific ways of saying things. You're making it sound like it's "too perfect" when it's actually bland, because it's artificial dumbness, not artificial intelligence.
Well said. In music, it's very similar. The jarring, often out of key tones are the ones that are the most memorable, the signatures that give a musical piece its uniqueness and sometimes even its emotional points. I don't think it's possible for AI to ever figure this out, because there's something about being human that is necessary to experiencing or even describing it. You cannot "algorithmize" the unspoken.
Bryan Cantrill referred to it as "normcore" on a podcast, and that's the perfect description.
I'm sure this can be corrected by AI companies.
The question is… why? What is the actual human benefit (not monetary).
IME, in prose writing, arguing with an LLM can help a newer writer gather "the facts" (to help with research) and "the objections to the facts" (same result), to anticipate an initial approach to the material. This can save a lot of organizational time, after which the newer writer can approach topics more confidently in their own voice.
If AI wrote and thought better by default then I wouldn't have to read the AI slop my co-workers send me.
Just let my work have a soul, please.
That is NOT possible.
Eh, it's not __that__ simple.