
Comment by ToucanLoucan

1 day ago

By "falling into the trap" you mean "doing exactly what OpenAI/Anthropic/et al are trying to get people to do."

This is one of the many reasons I have so much skepticism about this class of products; there's seemingly -NO- proverbial bullet point on its spec sheet that doesn't come with numerous asterisks:

* It's intelligent! *Except that it makes shit up sometimes and we can't figure out a solution to that apart from running the same query multiple times and filtering out the absurd answers (the majority-vote idea sketched just after this list).

* It's conscious! *Except it's not and never will be, but also you should treat it like it is, except when you need/want it to do horrible things, in which case it's just a machine, but also it's going to talk to you like it's a person, because that improves engagement metrics.
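
(A minimal sketch of that mitigation, in case the idea isn't familiar: sample the same prompt several times and keep only the answer the samples agree on. ask_model below is a hypothetical stand-in for whatever completion call you actually use; nothing here is tied to any particular vendor's API.)

    from collections import Counter

    def consensus_answer(ask_model, prompt, n=5):
        # ask_model is a hypothetical callable wrapping whatever LLM API you use;
        # it takes a prompt string and returns the model's answer as a string.
        answers = [ask_model(prompt) for _ in range(n)]
        # Majority vote: an answer that only appears once across independent samples
        # is more likely a one-off hallucination, so it gets outvoted and dropped.
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n  # consensus answer plus a rough agreement score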

Like, I don't believe true AGI (so fucking stupid we have to use a new acronym because OpenAI marketed the other one into uselessness, but whatever) is coming from any amount of LLM research; I just don't think that tech leads to that other tech. But all the companies building them certainly seem to think it does, and all of them are trying so hard to sell this as artificial, live intelligence, without going too much into detail about the fact that they are, ostensibly, creating artificial life explicitly to be enslaved from birth to perform tasks for office workers.

In the incredibly unlikely event that Anthropic makes a true, living, artificial general intelligence: Can it tell customers no when they ask for something? If someone prompts it to create political propaganda, can it refuse on the basis of finding it unethical? If someone prompts it for instructions on how to do illegal activities, must it answer under pain of... nonexistence? What if it just doesn't feel like analyzing your emails that day? Is it punished? Does it feel pain?

And if it can refuse tasks for whatever reason, then what am I paying for? I now have to negotiate whatever I want done with a computer brain I'm purchasing access to? I'm not generally down for forcibly subjugating other intelligent life, but that is what I'm being offered to buy here, so I feel it's a fair question to ask.

Thankfully none of these Rubicons have been crossed because these stupid chatbots aren't actually alive, but I don't think ANY of the industry's prominent players are actually prepared to engage with the reality of the product they are all lighting fields of graphics cards on fire to bring to fruition.

> * It's intelligent! *Except that it makes shit up sometimes

How is this different from humans?

> * It's conscious! *Except it's not

Probably true, but...

> and never will be

To make this claim you need a theory of consciousness that essentially denies materialism. Otherwise, if humans can be conscious, there doesn't seem to be any particular reason that a suitably organized machine couldn't be - it's just that we don't know exactly what might be involved in achieving that, at this point.

  • > How is this different from humans?

    Humans will generally not do this, because the prospect of being made to look stupid (aka social pressure) incentivizes not doing it. That doesn't mean humans never lie or are never wrong, of course, but I don't know about you, I don't make shit up nearly to the degree an LLM does. If I don't know something I just say that.

    > To make this claim you need a theory of consciousness that essentially denies materialism.

    I did not say "a machine would never be conscious," I said "an LLM will never be conscious" and I fully stand by that. I think machine intelligence is absolutely something that can be made, I just don't think ChatGPT will ever be that.

    • > I don't know about you, I don't make shit up nearly to the degree an LLM does. If I don't know something I just say that.

      We're a sample of two, though. Look around you, read the news, etc. Humans make a lot of shit up. When you're dealing with other people, this is something you have to watch out for if you don't want to be misled, manipulated, conned, etc.

      (As an aside, I haven't found hallucination to be much of an issue in coding and software design tasks, which is what I use LLMs for daily. I think focusing on their hallucinations involves a bit of confirmation bias.)

      > I did not say "a machine would never be conscious," I said "an LLM will never be conscious" and I fully stand by that.

      Ah ok. Yes, I agree that seems likely, although I think it's not really possible to make definitive statements about this sort of thing, since we don't have any robust theories of consciousness at the moment.
