Comment by saint-evan

2 days ago

I really <i>REALLY</i> enjoyed this article and the direction it took me in. I went in with zero preconceptions, just read it straight through, and only after opening the comments did I realize it was largely AI-assisted. Even then, I was very pleasantly surprised. The piece takes you by the hand and leads you through a very deliberate, directed journey. Sure, there are moments where things wobble a bit: some explanations around specific failures get a little tangled, even contradictory. But none of that registered as "this must be AI." I'm only noticing those things now, in hindsight, like oh, that's what that was.

The images hit that sweet spot too. Few and far between, just enough to support the plot without getting in the way, to visually clarify without over-explaining. It all worked together even with minor contradictions around labelling. The inconsistencies weren't sticky enough to disrupt the plot at all.

Over the years I've seen one idea play out in movies, books, articles, and short stories: that humanity only unites when faced with an alien intelligence. What gets me is how people can enjoy something like this, then immediately recoil once they figure out it was AI-assisted enough to count as largely AI-generated. Does that actually diminish the substance of what they just experienced? I don't think it does, but I'm not gonna argue such a subjective stance.

Someone in the comments suggested tagging AI-assisted work with something like an "LLM:" prefix, similar to "ShowHN:". That feels weird to me. LLMs might not be sentient, but they're clearly capable enough that the output should stand on its own, alongside the intent and effort of whoever's guiding it. Pre-labeling it just bakes in bias before anyone even engages with the work. It's not that far off from asking human authors to declare their race or nationality up front. 'Cause really, if nothing about my direct experience changed, why should my judgment?

In a tech-forward space like HN, I’d expect a stronger bias toward judging things on merit alone. Just read the thing. Let it speak first. I sincerely hope this isn't gonna be an 'LLM vs Humanity' thing 'cause personally, I find the idea of a different kind of intelligence extremely interesting.

I had the exact same experience. It's probably the first time I've read something that (besides the images, which I think are pretty obvious) I didn't think was AI. And while I did feel a little tricked learning it was AI, ultimately it was actually just quite good?

I understand why people feel like they need more transparency around these things. Reading for me is intentional, and I feel cheated when I put in the effort to read something for which the author put in little. I would like to think the author put in a lot of effort for this story despite AI assistance, and so it was worth me putting effort in. But whether that's true or not I still felt like I got something out of it (hard not to as a software engineer wondering about their place in the world), and that's something.

  • I think I come at this from a very different angle. I grew up around books, so I default pretty hard to being reader-first. I don’t really factor in the author’s effort when I decide if something was worth reading. It’s almost entirely about whether the work holds my attention and/or gives me something.

    So the idea of feeling tricked based on how much effort went into it feels foreign to me. If I got something out of it, that's enough. Even if it took the author and a model no time at all.

    The ‘feeling tricked’ part, to me, suggests a kind of adversarial framing with AI outputs that I think is curious. I’m just engaging with the text in front of me, whether it’s a story, a README, or a wall of technical writing. If it communicates clearly and has substance, I don’t think much about where it came from. I think much of this just comes down to what people think they’re engaging with when they read, the work itself or the mind behind it.

    And tbh, filtering what's worth the attention has always fallen to the reader. There's plenty of human-written slop too. I tend to judge everything the same way on my way to deciding whether to keep reading or drop it.