Comment by dumpsterdiver

4 hours ago

> Just because something can communicate in a way that you can interpret, doesnt mean something is conscious

> The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.

I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all - in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.

That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to - akin to mistakenly seeing eyes in random patterns in nature.

In the case of LLMs, though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
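To make the comparison concrete: both mechanisms really do reduce to a few lines of arithmetic. A toy sketch with made-up vectors (no real model or tokenizer assumed; `vocab` and `context` are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "vocabulary" of 5 word vectors in 4 dimensions -- purely illustrative.
vocab = rng.normal(size=(5, 4))
context = rng.normal(size=4)  # a made-up context vector

# "Next-word prediction": score every word against the context,
# then squash the scores into a probability distribution (softmax).
scores = vocab @ context
probs = np.exp(scores - scores.max())
probs /= probs.sum()
predicted = int(np.argmax(probs))

# "Nearest neighbour": just pick the word whose vector is closest.
dists = np.linalg.norm(vocab - context, axis=1)
nearest = int(np.argmin(dists))
```

Neither formula is obviously more "conscious" than the other; the argument above is that the difference we perceive comes from us, not from the math.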

  • What makes you certain that human thought is more than pattern matching?

    As I understand it, neuroscience hasn’t come up with a clear explanation of thought, much less of a mind or consciousness. It seems to me complex pattern matching is as reasonable a cause of consciousness as anything else.

  • Why does a neuron, which is simply a cell that takes in chemicals and electricity and shits out neurotransmitters - why do 90 billion of those give rise to human intelligence? Neurons are just next-chemical-state machines. We can model individual ones on a computer. Yet 90 billion of them together make up a human brain and give rise to consciousness and intelligence. If you get stuck on the next-word-prediction part, and ignore the ridiculous scale that's involved with training a model, you miss the forest for the trees.
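The "we can model individual ones" point can be sketched with one common simplification, the leaky integrate-and-fire model. The parameters below are invented for illustration, not biologically calibrated:

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks
# toward rest, accumulates injected current, and emits a spike when it
# crosses a threshold. One neuron really is this small a state machine.

def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t in range(steps):
        # Euler step: leak toward rest, plus the injected current.
        v += dt * (-(v - v_rest) / tau + current)
        if v >= v_thresh:
            spikes.append(t)  # the neuron "fires"
            v = v_reset       # and resets
    return spikes

weak = simulate_lif(0.05)   # sub-threshold input: never fires
strong = simulate_lif(0.5)  # stronger input: fires repeatedly
```

The single unit is trivial; the claim above is that whatever is interesting happens at the scale of 90 billion of them wired together, not inside any one of them.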