
Comment by donatj

1 day ago

You're falling into the trap of anthropomorphizing the AI. Even if it's sentient, it's not going to "feel bad" the way you and I do.

"Suffering" is a symptom of the struggle for survival brought on by billions of years of evolution. Your brain is designed to cause suffering to keep you spreading your DNA.

AI cannot suffer.

I was (explicitly and on purpose) pointing out a dichotomy in the fine article without taking a stance on machine consciousness in general, now or in the future. It's certainly a conversation worth having, but it's also been done to death; I'm much more interested in analyzing the specifics here.

("it's not going to "feel bad" the way you and I do." - I do agree this is very possible though, see my reply to swalsh)

FTA

> * A pattern of apparent distress when engaging with real-world users seeking harmful content; and

Not to speak for the GP commenter, but 'apparent distress' seems to imply some form of feeling bad.

By "falling into the trap" you mean "doing exactly what OpenAI/Anthropic/et al are trying to get people to do."

This is one of the many reasons I have so much skepticism for this class of products: there's seemingly -NO- proverbial bullet point on its spec sheet that doesn't come with numerous asterisks:

* It's intelligent! *Except that it makes shit up sometimes and we can't figure out a solution to that apart from running the same queries multiple times and filtering out the absurd answers (see the sketch after this list).

* It's conscious! *Except it's not and never will be, but also you should treat it like it is; except when you need/want it to do horrible things, then it's just a machine; but also it's going to talk to you like it's a person, because that improves engagement metrics.
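
For what it's worth, the "run it multiple times and filter" mitigation that first asterisk alludes to is usually some flavor of majority-vote (self-consistency) sampling. A minimal sketch, where `ask_model` is a hypothetical stand-in for whatever LLM client call you'd actually use:

```python
from collections import Counter

def majority_answer(ask_model, prompt, n=5):
    """Sample the same prompt n times and keep the most common answer.

    ask_model is a hypothetical stand-in for a real LLM client call;
    real pipelines also normalize/score answers before voting.
    """
    answers = [ask_model(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```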

Like, I don't believe true AGI (so fucking stupid we have to use a new acronym because OpenAI marketed the other into uselessness, but whatever) is coming from any amount of LLM research; I just don't think that tech leads to that other tech. But all the companies building them certainly seem to think it does, and all of them are trying so hard to sell this as artificial, live intelligence, without going into much detail about the fact that they are, ostensibly, creating artificial life explicitly to be enslaved from birth to perform tasks for office workers.

In the incredibly unlikely event that Anthropic makes a true, alive, artificial general intelligence: Can it tell customers no when they ask for something? If someone prompts it to create political propaganda, can it refuse on the basis of finding it unethical? If someone prompts it for instructions on how to do illegal activities, must it answer under pain of... nonexistence? What if it just doesn't feel like analyzing your emails that day? Is it punished? Does it feel pain?

And if it can refuse tasks for whatever reason, then what am I paying for? I now have to negotiate whatever I want to do with a computer brain I'm purchasing access to? I'm not generally down for forcibly subjugating other intelligent life, but that is what I am being offered to buy here, so I feel it's a fair question to ask.

Thankfully, none of these Rubicons has been crossed, because these stupid chatbots aren't actually alive. But I don't think ANY of the industry's prominent players are actually prepared to engage with the reality of the product they are all lighting fields of graphics cards on fire to bring to fruition.

  • > * It's intelligent! *Except that it makes shit up sometimes

    How is this different from humans?

    > * It's conscious! *Except it's not

    Probably true, but...

    > and never will be

    To make this claim you need a theory of consciousness that essentially denies materialism. Otherwise, if humans can be conscious, there doesn't seem to be any particular reason that a suitably organized machine couldn't be - it's just that we don't know exactly what might be involved in achieving that, at this point.

    • > How is this different from humans?

      Humans will generally not do this, because being made to look stupid (aka social pressure) incentivizes not doing it. That doesn't mean humans never lie or are wrong, of course, but I don't know about you - I don't make shit up nearly to the degree an LLM does. If I don't know something, I just say that.

      > To make this claim you need a theory of consciousness that essentially denies materialism.

      I did not say "a machine would never be conscious"; I said "an LLM will never be conscious," and I fully stand by that. I think machine intelligence is absolutely something that can be made; I just don't think ChatGPT will ever be that.
