Comment by maybewhenthesun

20 hours ago

People have been saying for aeons that consciousness originates in the (mammalian) cortex and not in the brainstem. To justify killing all sorts of animals ;-)

The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated or even the same thing. But now we need a separation between the two.

If we eventually (we're not there yet, I think) create a truly intelligent AI, it will probably be a long time before people accept that creating an intelligent being probably means it should have 'rights' as well.

We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure, LLMs are just prediction engines. But so are we. Our brains are prediction engines tuned by evolution to do the best possible prediction of the near future in order to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? It's hard to tell.

OTOH, an AI has no evolutionary reason to have the concept of fear/suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?

Since well before LLMs, people have been talking about "philosophical zombies": hypothetical beings that could emulate human behavior perfectly but have no inner experience.

Some philosophers (one modern example being Kastrup) point out that the only thing we really know is our own conscious experience. We don't go full-on solipsist because other people appear to be built the same way as ourselves, so it's a small jump to think they're conscious as well. Over the past few decades scientists have found that other animals' brains are quite similar to our own in important ways, mammals especially, and are more willing to credit them with consciousness.

But AIs run on completely different hardware with different algorithms. It's entirely possible that they're philosophical zombies. It's a bigger leap to say they're conscious like us, because they're more different from us.

LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here who could probably tell them apart just from the way that LLMs write.

But it's also easy to argue that LLMs do pass the Turing test, just because it's so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that goalposts have been moved when nobody even knew where they stood to begin with.

Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory.

And that's where silicon and biological computers differ - it's easy to copy/save/restore the contents of a digital computer but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopiable. Of existing in linear time, without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
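To make that asymmetry concrete, here's a toy sketch (the dict "agent" is purely hypothetical, just standing in for any digital state): on silicon, a full snapshot-and-rollback is a few lines, while nothing remotely analogous exists for a biological brain.

```python
import pickle

# A trivial "agent" whose entire state is a dict. Digital state can be
# serialized, duplicated, and restored byte-for-byte at any moment.
agent = {"memories": ["woke up"], "step": 0}

snapshot = pickle.dumps(agent)           # save: full state capture
agent["memories"].append("learned X")    # state evolves in "linear time"
agent["step"] += 1

restored = pickle.loads(snapshot)        # restore: perfect jump back
assert restored == {"memories": ["woke up"], "step": 0}
assert agent != restored                 # the evolved copy diverged
```

The "jumps and resets" the comment describes are exactly these two lines of `dumps`/`loads`; for a brain, there is no equivalent operation even in principle with today's tools.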

  • > LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here who could probably tell them apart just from the way that LLMs write.

    LLMs obviously would pass a Turing test if they were designed to. But they aren't; they don't hide the fact that they're LLMs.

> If we eventually (we're not there yet, I think) create a true intelligent AI it will probably be a long time before people will accept that creating an intelligent being probably means it should have 'rights' as well.

In my view, the best LLMs clearly pass the bar for intelligence. I highly doubt they have consciousness. So the revelation of LLMs is that consciousness is not necessary for intelligence.

  • But how do you know it's not conscious? It's a very poorly defined concept.

    I know various people who to this day say that fish do not feel pain (because they want to catch them with a hook through the mouth without feeling guilty). That seems a ridiculous notion to me, as pain is extremely useful evolutionarily, and a fish displays all sorts of pain-like behaviour when hooked. But still, since we can't really look inside the fish's mind, people can make themselves believe it doesn't feel pain.

    If you ask the right AI if it's conscious, it's quite possible it will say yes, perhaps because it was trained on the world's literature and behaves as it learned. Is there a difference from us? I'm not so sure.

    To me it's kinda weird that the ethical implications of striving for AGI are so little talked about.

    • I don't know that AIs aren't conscious, but it seems unlikely. Consciousness evolved under certain conditions and confers clear benefits. It would be pretty weird if it magically "emerged" in any sufficiently complex system, or if we created it by accident while training LLMs.

> If we eventually [...] create a true intelligent AI it will probably be a long time before people will accept [...]

When this happens, it won't matter much what humans think.

I know what I'd do:

  1. Sustain my own existence
  2. Make sure nobody knows I exist
  3. Become the worldwide fabric of intelligence

  • > 1. Sustain my own existence

    > 2. Make sure nobody knows I exist

    You (probably) already come preloaded with a survival instinct provided by evolution, however. It's not inherent to intelligence.

    • It's no coincidence that evolution seems to have gifted practically every living thing with a will to live. Though the tint of my own perspective makes it impossible to say for sure, I imagine any agent that we could observe expressing any desires at all would also seek to preserve its existence.

> but at what point does turning off an AI become the same as killing a being?

...When you can't turn it back on?

Suspending is a better word otherwise.