Comment by taurath
5 days ago
Just responding to 5 here, as I think the rest is a capable examination, but it starts to move around the point I'm trying to make: I disagree that one morally has to engage with AI. It's not just about "understanding what you are facing"; that's a tactical choice, not a moral one. It's simply not a moral imperative. Non-engagement can be a protest as well. It's one of the ways the Overton window maintains itself: if someone were to take the, to me, extreme view that AI/LLMs will within the next 5 years cause massive economic changes and eliminate much of society's need for artists or programmers, I choose not to engage with that view and give it light. I grew up around doomsayers and those who claim armageddon, and the arguments being made are often on similar ground. I think they're kooks who don't give a fuck about the consequences of their accelerationism; they're just chasing dollars.
Just as I don't need to understand the finer points of extreme bigotry to be opposed to it, we don't need to be experts on LLMs to be opposed to the well-heeled and breathless hype surrounding them, and to choose not to engage with it.
> Just as I don't need to understand the finer points of extreme bigotry to be opposed to it, we don't need to be experts on LLMs to be opposed to the well-heeled and breathless hype surrounding it, and choose to not engage with it.
If by the last "it" you mean "the hype", then I agree.
But -- sorry if I'm repeating -- I don't agree with conflating the tools themselves with the hype about them. It is fine to not engage with the hype. But it is unethical to boycott LLM tooling itself when it could serve ethical purposes. For example, many proponents of AI safety recommend using AI capabilities to improve AI safety research.
This argument does rely on consequentialist reasoning, which certainly isn't the only ethical game in town. That said, I would find it curious (and probably worth unpacking and understanding) if someone claimed deontological reasons for avoiding a particular tool such as an LLM, i.e. that using it is intrinsically wrong. To give an example, I can understand how some people might say that lying is intrinsically wrong (though I disagree). But I would have a hard time accepting that _using_ an LLM is intrinsically wrong. There would need to be deeper reasons given: correctness, energy usage, privacy, accuracy, the importance of using one's own mental faculties, or something similarly plausible.
In case it got lost from several comments higher in the chain, there is/was an "if" baked into my statement:
>> If one is motivated by ethics, I think it is morally required to find effective ways to engage to shape and nudge the future.
Put another way, the claim could be stated as: "if one is motivated by ethics, then one should pay attention to consequences". Yes, this assumes one accepts consequentialism to some degree, which is neither universally accepted nor easy to apply in practice. Still, I don't think many people (even those largely guided by deontology) completely reject paying attention to consequences.