
Comment by zmef

2 days ago

This policy is incredibly misguided, ableist, neo‑Luddite, technophobic hogwash. Technologically mediated communication has been with us almost as long as communication itself. We already accept writing, printing, telegraphy, phones, keyboards, spellcheckers, compilers, search engines, and autocomplete as legitimate augmentations of human thought. Drawing the line at this particular class of tools feels arbitrary and, frankly, rooted more in fear than in principle.

I get it: humans are instinctively protectionist. A tool that operates in the same “space” as what we think makes us special—our intelligence, our language—feels threatening. It looks like competition rather than amplification. But this is just the next step in the same trajectory. Like written language, printing, and telecommunications, generative models are tools that, on the whole, will raise our collective intelligence by reducing the cost of expressing, translating, and recombining ideas. They don’t replace human judgment, curiosity, or responsibility; they change the interface.

Generative AI is, in a sense, just very advanced cave painting: humans using whatever is at hand to make marks that carry meaning across time and space. Refusing to engage with those marks because the paint got better doesn’t make the communication more “authentic”; it just makes the medium poorer.

I think you’re missing the point and approaching this with a myopically binary perspective.

Just because you consider AI an interface in line with, say, a paintbrush, typewriter, or spellchecker doesn’t mean it automatically is one. It may even be true for you and not for others. That’s the myopic part.

The binary part is assuming that because you see it as an interface, its effects must be no different from a brush’s. You wouldn’t get very far arguing to a judge that driving 80 mph over the speed limit is exactly the same as driving 5 mph over.

Or: where would you draw the line? Is hiring someone to write your Hacker News comments still your comment? What about spam bots? Are they not also an “interface”? Is banning spam bots outright also “ableist,” by your logic?

But also, we have plenty of both media-philosophical musing and evidence-based data showing that while a medium may not BE the message, it absolutely does affect the message.

In this case, HN is simply saying that the process of a human generating the words they type onto the screen is the valuable part of communication that we want to maintain, and that using AI is a bridge too far, losing both the effort and the output of that process.