There is a big difference between "asking an editor for suggestions" and "vibe posting".
You don't lose your voice if you ask for advice and manually incorporate the suggestions you agree with.
You might lose your voice if you say "Improve my comment to make it better" and copy-paste the result without another thought.
There is theoretically a big difference, but in practice, I think that people using AI to 'get suggestions' tend to dramatically underestimate its impact on their writing.
It might feel like just a couple of tweaks, but they add up fast.
Your “in practice” is doing too much heavy lifting here. This comes across as more of a prejudice on people than a fair assessment of the tools and techniques.
It hides your voice, and shortcuts your thinking process, because your editing is when you actually evaluate what you think!
When using LLMs to write, the temptation to avoid actually thinking about what you're communicating is too much for most people.
I'm increasingly convinced that most people spend most of their lives actively trying to find ways to avoid actually thinking about things. When I look at it that way I figure that either we achieve benevolent AGI in the near to medium term or society collapses due to whatever the asymptotic form of today's LLMs is.
In the words of the comment: the rough edges are what make you... you!
Keep polishing and everything eventually turns into a smooth shiny ball. We need texture, roughness, edges.
An LLM telling me I misspelled a word isn't changing my voice. Especially when I know the proper spelling and simply have a typo.
An LLM telling me I omitted a qualifier and that my statement isn't saying what I meant it to say isn't changing my voice - it's ensuring what you see is my voice.
There's a simple solution to the spelling part. Use a spell checker. They seem to work pretty well.
Yep. I actually prefer seeing imperfect writing, there is signal there that AI would erase.
Maybe. But it can also help people find their voice. And I'd rather have comments from someone knowledgeable but unrefined with some good guidance than their silence on that same topic.
AI doesn't just hide your voice -- it improves it!
I had a README with a curse word in it, and the agent would repeatedly try to remove it in drive-by edits bundled in with some other change.