
Comment by foxglacier

6 months ago

But why? It's nice that somebody's collecting sources of pre-AI content that might be useful for curiosity or research or something. But other than that, why does it matter? AI text can still be perfectly good text. What's the psychological need behind this popular anti-AI Luddism?

You’re absolutely right that AI-generated text can be good—sometimes even great. But the reason people care about preserving or identifying pre-AI content isn’t always about hating AI. It's more about context and trust.

Think of it like knowing the origin of food. Factory-produced food can be nutritious, but some people want organic or local because it reflects a different process, value system, or authenticity. Similarly, pre-AI content often carries a sense of human intention, struggle, or cultural imprint that people feel connected to in a different way.

It’s not necessarily a “psychological need” rooted in fear—it can be about preserving human context in a world where that’s becoming harder to spot. For researchers, historians, or even just curious readers, knowing that something was created without AI helps them understand what it reflects: a human moment, not a machine-generated pattern.

It’s not always about quality—it’s about provenance.

Edit: For those who can't tell, this is obviously just copied and pasted from a ChatGPT response.

  • I feel like the em-dashes and "You're absolutely right" already kinda serve the purpose of special AI-only glyphs

    • I've found a propensity to swear to be quite a useful signal when determining whether or not a user is an LLM. I suspect it'll remain useful for quite a while; the corporate LLM providers, at least, won't be training their models to sound like a sailor eight pints deep any time soon.

  • OK, so they can choose to read material from publishers that they trust to only produce human-generated content. Similar to buying organic food. Pay a bit more for the feeling. No need for those idealists to drag everybody else into it.