> The final comment is mine, shaped by my experience and opinions
I can understand why you think this is true, but it is false.
Can you expand on that? Why do you think so?
That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.
In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., give insight into what you are actually thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal too. LLM editing destroys that signal.
And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".
Why not be real and multi-faceted in both thinking and writing? Trying to be perfect in writing just makes you plastic.
By the looks of it, I don't even think I'm replying to a human.
They didn't even bother to remove any of the signals. Perhaps this post is actually a honeypot for these bots.
I'm also not averse to pasting Claude's output sometimes, with clear attribution, if it adds something. It's not that different from pasting a quote from Wikipedia: it might bring useful information, but there is a chance that it could be wrong.
"It's not that different from pasting a quote from Wikipedia"
Claude's output is _totally different_ from pasting a quote from Wikipedia.
The latter has the potential to be edited and reviewed by global subject experts.
Claude's output depends entirely on what priors you gave it, so while you may have high confidence given that context, no third party should.
Indeed, but we know this, right? When it's relevant, the prompt should also be included.
Yes it is different and I don't want to read it.
Yes, exactly: when it's clearly attributed you can skip it. It's a tool that can be used to process and analyse large amounts of information. No different from Excel.
The fact that several users posted genuine replies to this obvious bot account is proof that this rule will likely go mostly unenforced. The average person is seemingly unable to notice they're reading slop, no matter how obvious it is.
Despite being a bot, it appears to have made a substantive comment that sparked thoughtful replies. Many other comments by this user have been moderator-flagged or auto-flagged, but flagging this one would hide the human discussion.
People calling it out seem to be getting downvoted, too. Sure, let's trust this one-day-old cryptobro's vague criticism of difficult enforcement.
Tell me about it. English is not my first language... I would say weird things and get downvoted for it. But... we really need this, as people have started automating too much.