Comment by redfloatplane
4 hours ago
It’s funny you say that, as about halfway through I began to wonder whether this was at least Claude-edited. Absolutely no shade to the author meant, I think it’s a thoughtful article, but I _did_ feel the sheen of AI co-authorship.
It raises the question of how much text I have read without realising it was LLM-generated. I think I have a decent nose for it, but I’m not perfect; there must be false negatives (and false positives, as may well be the case with this article). What will it mean when I can no longer tell the difference?
Edit: thinking on it a little more, I hope the author doesn’t feel insulted by my comment, given the subject matter of the article at hand. Sorry, it’s early morning! I’m sure I am wrong about my assessment. Which now really makes me wonder about the above.
Hey! I'm not insulted at all. My position is that of a Luddite: I think technology is neutral, but deployment is not. My critique is structural, and I don't blame people in or out of tech for adopting AI to be able to survive.
No AIs were harmed in the writing of this post, either physically or by the sharing of earlier (cringe) drafts.
Pangram agrees with you. About 25% of the text trips the detection threshold, mostly towards the latter half.
I don't want to make any accusations, just to give some evidence for the above comment.
I have bad news for you...