Comment by throw4847285

2 days ago

The problem is one of negative polarization. I was skeptical of a lot of the claims around LLMs, but I was also annoyed by AI critics forming an angry mob any time AI was used for anything. Still, I considered myself in that camp, and I ended up far more annoyed by AI boosterism than by AI skepticism, which pushed me toward being even more negative about AI than when I started. It's the mirror image of what happened to you, as far as I can tell. And I'm sure both trajectories are very common, though admitting to them makes one seem reactive rather than rational, so we don't talk about it.

However, I do dispute your central claim that the issues with LLMs parallel the issues with people. I think that's a very dehumanizing and self-defeating perspective. The only rational ethical system is one in which humans have more than merely instrumental value to each other.

So when critics draw a line between LLMs and humans, sure, there is a descriptive element: an attempt to be precise about what human thought is, how it differs from what LLMs do, and so on. But there is also a prescriptive argument that people are embarrassed to make, which is that human beings must be afforded a certain kind of dignity, and that there is no reason to extend that dignity to an LLM, given everything we understand about how LLMs function. So if a person screws up your order at a restaurant, or your coworker makes a mistake while coding, you should treat them with charity and empathy.

I'm sure this sounds silly to you, but it shouldn't. The bedrock of the Enlightenment project was the conviction that scientific inquiry would lead to human flourishing. That's humanism. If we've strayed so far from that that appeals to human dignity no longer make sense, I don't know what to say.

• It sounds silly to me, but not because I don't value humans. (I don't, though that's down to personal grievances that are difficult to defend in a serious ethical discussion.) It sounds silly to me because it leaves "human" undefined. To me, the question "is an LLM human?" is eerily similar to "are black people people?" and "are Jews people?". The position that AI displays intelligence but doesn't deserve respect because it fails certain biological requirements is a really awkward one to defend.

  Instead of "humanism", where "human" sits at the centre, I'd like to propose a view where loosely defined intelligence sits at the centre. In the pre-AI world that view was consistent with humanism, because humans were the only entities that displayed advanced intelligence; as a bonus, it also explains why people tend to value complex life forms more than simple ones. Once AI enters the picture, it places sufficiently advanced AI above humans. Which is fine, because AI is simply the next step of evolution. It's like placing "Homo sapiens" above "Homo erectus", except AI is "Homo sapiens" and we are "Homo erectus". Makes a lot of sense IMO.

  • Now I understand your love of LLMs. What you write reads like the output of an LLM but with the dial turned from obsequious to edgelord. There is no content, just posturing. None of what you wrote holds up to any scrutiny, and much of it is internally contradictory, but it doesn't really matter to you, I guess. I don't think you're even talking to me.

    • I take it as a compliment. I've always been like this: I'd challenge core assumptions, people wouldn't like it, and later it would turn out I was right.