Comment by wittjeff
2 days ago
Let me refer you to my buddy Anton, a software developer in Ukraine. He has CP and it makes typing and communicating by speech very slow and tedious. https://www.youtube.com/shorts/aYbDLOK14uM
He has a blog, which I think is particularly relevant to this conversation: https://www.patreon.com/c/GreenWizard/posts?vanity=GreenWiza...
IMO his writing style is quite melodramatic. I have asked myself, how much of that is his perhaps overly compensatory tendency to project an articulate voice, and how much of it is applied by his AI tools?
The last time I saw Anton in person I asked him about his writing process, and he said something like, "I just draft it and then ask ChatGPT to make it sound professional or whatever." So after thinking about it for a while, I have decided that this is his preferred voice, so I'll accept it as his voice.
IMO it is not for you to decide how people recast their own voice. Once you adopt that dogma, you're committed to denying other people's experience of discrimination (through the lens of disability's symptoms). Whether or not you participate in that other type of biased discrimination is irrelevant.
This is weaponizing the situation of a single disabled person. The correct response is to make exceptions based on extreme circumstances, not to accept this behavior from everyone.
Too often, advocates try to smuggle in their preferred policy using stories like this as cover.
Coming from a social scene in which I'm involved in modding and deconstructing video games, this behavior was immediately apparent to me. It's the same contrived story that cheaters use to explain why they really really need a feature that gives them an advantage over other players in online games.
The story itself being true or not doesn't really matter - they're weaponizing an appeal to emotion by using a disabled person as a prop to violate everyone else's standards of interaction.
The Overton window has shifted enough that we can call balls and strikes as we see them without too much reee'ing. As long as people stay civil, it's fine.
Count me as a weapon, too, then.
This is not weaponizing the situation of a single disabled person. I am not disabled, but I have always had difficulty expressing myself effectively, and that difficulty has increased as I've aged. I use AI to help organize my thoughts, to help give voice to that little tidbit of an idea that is trying to escape, and it has been a genuine help. Asking me not to use that assistance is similar to asking a user not to use accessibility features. It's an asinine policy and an overcorrection.
Is this not the difference between using AI as an aid to organise yourself, as opposed to using AI as a total replacement for your thoughts or your writing and therefore removing the personal touch?
The bone of contention is that the signal-to-noise ratio on GPT output is very low, there is no reliable way to tell a thoughtful GPT-assisted post from slop, and given how easy it is to post at volume with low-effort AI output, the bias runs toward caution rather than acceptance.
At best it's a case-by-case affordance to use AI as opposed to a blanket rule.
For all the challenges that AI poses to online communities, it does allow people for whom typing and dictation are painful, difficult, or impossible, to participate in those communities in ways they never could before.
I think HN is broadly supportive of these voices, and I think that an "unwritten exception" to this rule is implicit here. But I'm in the camp that making an explicit exception for special circumstances would be a meaningful statement that all voices are welcome.
>it does allow people for whom typing and dictation are painful, difficult, or impossible
Putting aside the example proposed above where typing or dictation may be difficult, "impossible" seems, well, impossible. I am curious how you suppose that someone who cannot type or dictate at all would prompt an LLM.
In a forum/community context, speed is vital! If it takes an order of magnitude more time to generate responses like yours and mine, one must choose which conversations one participates in much more carefully, and every such investment risks having the context of the conversation shift dramatically while drafting a response - to the point that one might be considered rude or disconnected. That makes participation essentially impossible.
Someone with a slower rate of both reading and creating text would benefit less from LLM assistance, to be sure. But someone who can read quickly, but may only be able to generate/select a few bits of entropy per second due to physical limitations? (Human speech is widely cited at a median of 39 bits per second.) They’d benefit massively from a system that could generate proposed responses that could be chosen from and refined.
In other words, if you’re the oracle, and the machine asks multiple choice questions until it is certain it speaks with your voice - is there a better set of such questions than just letter-by-letter a-z, a-z, a-z? Does that imply the content is AI-edited? Or is it an accessibility tool?
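To put rough numbers on the bandwidth argument above, here is a back-of-the-envelope sketch. The selection rate, option counts, and message length are illustrative assumptions, not measurements; the point is only that choosing among whole candidate responses lets a user with a fixed physical input rate produce text far faster than spelling letter by letter.

```python
import math

def bits_per_choice(n_options):
    """Shannon information conveyed by one selection among
    n equally likely options."""
    return math.log2(n_options)

# Illustrative assumptions (not from any study):
selections_per_second = 1.0         # assumed physical input limit
letter_bits = bits_per_choice(26)   # one letter chosen from a-z
response_bits = bits_per_choice(4)  # one reply chosen from 4 candidates

message_chars = 400  # a typical forum comment, roughly

# Letter-by-letter entry: one selection per character.
letter_time = message_chars / selections_per_second

# Candidate selection: assume 5 rounds of choosing/refining suffice.
candidate_rounds = 5
candidate_time = candidate_rounds / selections_per_second

print(f"letter-by-letter: {letter_time:.0f} s of input, "
      f"~{letter_bits:.1f} bits per selection")
print(f"candidate choice: {candidate_time:.0f} s of input, "
      f"~{response_bits:.1f} bits per selection")
```

Under these assumed numbers, the same comment takes minutes of letter-by-letter input but only a handful of selections when the system proposes candidate replies, even though each individual selection carries fewer bits.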
Without negating your point I want to add that at some threshold of tediousness, usability issues become accessibility issues. The fact that this threshold varies from individual to individual makes heuristic guidelines difficult.