Comment by saint-evan

6 hours ago

I think I come at this from a very different angle. I grew up around books, so I default pretty hard to being reader-first. I don’t really factor in the author’s effort when I decide if something was worth reading. It’s almost entirely about whether the work holds my attention and/or gives me something.

So the idea of feeling tricked based on how much effort went into it feels foreign to me. If I got something out of it, that's enough. Even if it took the author and a model no time at all.

The ‘feeling tricked’ part suggests, to me, an adversarial framing toward AI outputs that I find curious. I’m just engaging with the text in front of me, whether it’s a story, a README, or a wall of technical writing. If it communicates clearly and has substance, I don’t think much about where it came from. Much of this probably comes down to what people think they’re engaging with when they read: the work itself, or the mind behind it.

And tbh, filtering what’s worth the attention has always been on the reader. There’s plenty of human-written slop too. I tend to judge everything the same way on my way to deciding whether to keep reading or drop it.