Comment by tptacek
2 days ago
If you care about voice, you still can get a lot of value from LLMs. You just have to be careful not to use a single word they generate.
I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph, asking whether each one has a clear topic and flows, and I'll say "I'm not sure this graf earns its keep" or something like that.
GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick to this is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.
LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere, (2) spotting repetitive word choices, (3) keeping things in active voice and keeping subject/action clear, and (4) catching non-sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).
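For what it's worth, check (2) is the one you can roughly approximate locally without any model at all; a toy sketch (the stopword list and threshold are arbitrary assumptions):

```python
# Toy repetition check: flag content words that recur suspiciously often,
# a rough local stand-in for the "repetitive word choices" an LLM spots.
import re
from collections import Counter

# Minimal, illustrative stopword list -- a real one would be much longer.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "i"}

def repeated_words(text: str, min_count: int = 3) -> dict[str, int]:
    """Return content words appearing at least min_count times."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w: c for w, c in counts.items() if c >= min_count}
```

It won't catch near-synonyms or rhythm problems the way a model does, but it's a cheap first pass.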
Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
All of this sounds like something you could just do yourself after putting a piece down for a day or two and coming back to it with fresh eyes. What benefit is there of cooking the oceans with a bullshit generator?
Like, sure, it's possible to do this with an LLM, but it's also possible to do it without, at roughly similar levels of effort, without contributing to all of the negative externalities of the LLM/genAI ecosystem.
Being able to get useful feedback immediately rather than 48 hours later is useful if you need text today.
Great, now I can procrastinate longer and still meet deadlines. By accelerating climate change.
Because the complaints about the power and water usage of AI are mostly motivated reasoning: I don't like AI, therefore I'm going to find a reason not to like it. Listen, if it's Greta Thunberg pointing out that AI datacenters use a lot of resources, yeah, I'm willing to listen. But when the voices saying "but what about all the water/electricity it's wasting" are coming from individuals I know personally who haven't previously given a shit about the planet or conservation or recycling, and who have made fun of me for reusing things instead of throwing stuff into the garbage, I'm sorry, but those complaints from those individuals fall on deaf ears. Not saying you are, just a theme I've noticed with people in my life.
So all uses of water, land, and power are the same?
They have no grading in terms of importance and priority, especially in a world contending with climate change, lack of arable land, lack of drinkable water, and so on? AI usage of these resources is on par with every other use?
"anecdotally, some people in my life said something that was measurably true but that I didn't like. I think they're being phony."
Who is using motivated reasoning here?
Anything you could automate you could do yourself. What’s the benefit?
If you don't want to eat meat on Fridays, I'm certainly not going to tell you that you should. You do you.
What I struggle with more is things like Grammarly, where it's a mix of fixing very nitpicky grammar, spelling, and structure issues that push things from casual writing in my own voice into more of a professional tone.
+1 on this one! I only use LLMs once I'm done writing, basically as my editor.
In case it helps anyone, here is my prompt:
"You are a professional writer and editor with many years of experience. Your task is to provide writing feedback, point out issues and suggest corrections. You do not use flattery. You are matter of fact. You don't completely rewrite the text unless it is absolutely necessary - instead you try to retain the original voice and style. You focus on grammar, flow and naturalness. You are welcome to provide advice changing the content, but only do that in important cases.
If the text is longer, you provide your feedback in chunks by paragraph or other logical elements.
Do not provide false praise, be honest and feel free to point out any issues."
(Yes, you kind of need to repeat you're actively not looking for a pat on the back, otherwise it keeps telling you how brilliant your writing is instead of giving useful advice.)
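If you want to script this instead of pasting the prompt by hand, here's a minimal sketch of how a system prompt like that could be wired into per-paragraph chat payloads (build_editor_messages is a hypothetical helper, and the abbreviated prompt text and chunking scheme are my own assumptions, not a real API):

```python
# Sketch: prepare one chat payload per paragraph, keeping the
# anti-flattery editing instructions in the system role each time.

EDITOR_PROMPT = (
    "You are a professional writer and editor with many years of experience. "
    "Provide writing feedback, point out issues and suggest corrections. "
    "Do not use flattery or false praise. Retain the original voice and style; "
    "do not completely rewrite the text unless absolutely necessary."
)

def build_editor_messages(draft: str) -> list[list[dict]]:
    """Split the draft on blank lines and build one chat payload per graf,
    so feedback comes back in chunks rather than one wall of text."""
    grafs = [g.strip() for g in draft.split("\n\n") if g.strip()]
    return [
        [
            {"role": "system", "content": EDITOR_PROMPT},
            {"role": "user", "content": f"Paragraph {i + 1} of {len(grafs)}:\n\n{g}"},
        ]
        for i, g in enumerate(grafs)
    ]

# Each payload would then go to whatever chat API you use, e.g.:
#   client.chat.completions.create(model="gpt-5", messages=payload)
```

Repeating the system prompt per chunk is deliberate: long sessions tend to drift back toward praise, as noted above.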
> LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere
I wonder if this is due to LLMs being trained on persuasive writing.
I simply tell the LLM to call out my mistakes and explain them, but do not offer corrections or replacements. I use it to help my kids with their homework and it's fantastic.
They’re also great, in my experience, for overcoming writer’s block and procrastination. Just as a rubber duck to bounce ideas off of and follow different threads.
It makes the writing process faster and more enjoyable, despite never using anything the LLM generates directly.
Workshopping with humans is even better, if you find the right humans, but they have an annoying habit of not being available 24/7.
I think you just did another non-sequitur. What is a "graf"? Is it journalism slang for "paragraph"?
Yeah, easier to type, easier to read, deliberately misspelled so it sticks out to copyeditors. I use it sometimes without thinking. An LLM would have caught that! :)
> copyeditors
Do those jobs still exist?