Comment by slowmovintarget

18 hours ago

Their top people have made public statements about AI ethics, specifically opining about how machines must not be mistreated and how these LLMs may be experiencing distress already. In other words, not ethics on how to treat humans, but ethics on how to properly groom and care for the mainframe queen.

The cups of Kool-Aid have been empty for a while.

This book (from a philosophy professor who is, AFAIK, unaffiliated with any AI company) makes what I find a pretty compelling case that it's correct to be uncertain today about what, if anything, an AI might experience: https://faculty.ucr.edu/~eschwitz/SchwitzPapers/AIConsciousn...

From the folks who think this is obviously ridiculous, I'd like to hear where Schwitzgebel is missing something obvious.

  • You could execute Claude by hand with printed weight matrices, a pencil, and a lot of free time - the exact same computation, just slower. So where would the "wellbeing" be? In the pencil? Speed doesn't summon ghosts. Matrix multiplications don't create qualia just because they run on GPUs instead of paper.
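
    (For concreteness, here is a toy, made-up sketch in pure Python of the kind of arithmetic involved - a tiny "attention" step with invented numbers, not Claude's actual computation. Every operation is a multiply-add or an exponential you could, in principle, carry out on paper.)

      # A toy "attention" step in pure Python: just multiply-adds and exponentials.
      # Nothing here requires a GPU; done by hand it is merely slow.
      import math

      def matmul(A, B):
          # Rows-times-columns multiplication, the kind you could do on paper.
          return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                  for row in A]

      def softmax(row):
          # Exponentiate and normalize one row of scores.
          exps = [math.exp(x) for x in row]
          total = sum(exps)
          return [e / total for e in exps]

      # Invented 2-token example with 2-dimensional "embeddings".
      Q = [[1.0, 0.0], [0.0, 1.0]]   # queries
      K = [[1.0, 1.0], [0.0, 1.0]]   # keys
      V = [[0.5, 0.2], [0.3, 0.9]]   # values

      K_T = [list(col) for col in zip(*K)]           # transpose of K
      scores = matmul(Q, K_T)                        # Q · K^T (scaling omitted)
      weights = [softmax(row) for row in scores]     # attention weights
      output = matmul(weights, V)                    # weighted sum of values
      print(output)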

    • This is basically Searle's Chinese Room argument. It's got a respectable history (... Searle's personal ethics aside) but it's not something that has produced any kind of consensus among philosophers. Note that it would apply to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail as well.

      There is a section on the Chinese Room argument in the book.

      (I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)

  • By the second sentence of the first chapter, the book is already weasel-wording: strip out the hedging and stand behind it as an assertion you actually mean, and it's pretty clearly factually incorrect.

    > At a broad, functional level, AI architectures are beginning to resemble the architectures many consciousness scientists associate with conscious systems.

    If you can find even a single published scientist who associates "next-token prediction", which is the full extent of what LLM architecture is programmed to do, with "consciousness", be my guest. Bonus points if they aren't already well-known as a quack or sponsored by an LLM lab.

    The reality is that we can confidently assert there is no consciousness because we know exactly how LLMs are programmed, and nothing in that programming is more sophisticated than token prediction. That is literally the beginning and the end of it. There is some extremely impressive math and engineering going on to do a very good job of it, but there is absolutely zero reason to believe that consciousness is merely token prediction. I wouldn't rule out the possibility of machine consciousness categorically, but LLMs are not it, and architecturally they aren't even pointed in the right direction to achieve it.

    • He talks pretty specifically about what he means by "the architectures many consciousness scientists associate with conscious systems" - Global Workspace Theory, Higher-Order Theory, and Integrated Information Theory. This is on the second and third pages of the intro chapter.

      You seem to be confusing the training task with the architecture. Next-token prediction is a task that many architectures can perform, including human brains (although we're worse at it than LLMs); the sketch at the end of this comment illustrates the distinction.

      Note that some of the theories Schwitzgebel cites would, in his reading, require sensors and/or recurrence for consciousness, which a plain transformer doesn't have. But neither is hard to add in principle, and Anthropic, like its competitors, doesn't make public what architectural changes it might have made in the last few years.
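
      (To make the task-versus-architecture point concrete, here is a minimal, hypothetical PyTorch sketch. TinyRNN, TinyTransformer, and the sizes are all made up, and details like positional encodings are omitted; the only point is that the same next-token objective can be trained into very different architectures.)

        # Two made-up toy models trained on the exact same next-token objective.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        VOCAB, DIM = 1000, 64

        class TinyRNN(nn.Module):                  # recurrent architecture
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(VOCAB, DIM)
                self.rnn = nn.GRU(DIM, DIM, batch_first=True)
                self.out = nn.Linear(DIM, VOCAB)

            def forward(self, tokens):             # tokens: (batch, seq)
                h, _ = self.rnn(self.emb(tokens))
                return self.out(h)                 # next-token logits per position

        class TinyTransformer(nn.Module):          # attention-based architecture
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(VOCAB, DIM)
                layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
                self.enc = nn.TransformerEncoder(layer, num_layers=2)
                self.out = nn.Linear(DIM, VOCAB)

            def forward(self, tokens):
                seq = tokens.size(1)
                causal = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
                return self.out(self.enc(self.emb(tokens), mask=causal))

        def next_token_loss(model, tokens):
            # Identical objective for both: predict token t+1 from tokens <= t.
            logits = model(tokens[:, :-1])
            return F.cross_entropy(logits.reshape(-1, VOCAB),
                                   tokens[:, 1:].reshape(-1))

        batch = torch.randint(0, VOCAB, (2, 16))   # a fake batch of token ids
        for model in (TinyRNN(), TinyTransformer()):
            print(type(model).__name__, next_token_loss(model, batch).item())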

  • It is ridiculous. I skimmed through it and I'm not convinced he's trying to make the point you think he is. But if he is, he's missing that we do understand at a fundamental level how today's LLMs work. There isn't a consciousness there. They're not actually complex enough. They don't actually think. It's a text input/output machine. A powerful one with a lot of resources. But it is fundamentally spicy autocomplete, no matter how magical the results seem to a philosophy professor.

    The hypothetical AI you and he are talking about would need to be an order of magnitude more complex before we can even begin asking that question. Treating today's AIs like people is delusional; whether it's self-delusion or outright grift, YMMV.

    • > But if he is, he's missing that we do understand at a fundamental level how today's LLMs work.

      No, we don't? We understand practically nothing of how modern frontier systems actually function (in the sense that we would not be able to recreate even the tiniest fraction of their capabilities by conventional means). Knowing how they're trained has nothing to do with understanding their internal processes.

    • > I'm not convinced he's trying to make the point you think he is

      What point do you think he's trying to make?

      (TBH, before confidently accusing people of "delusion" or "grift" I would like to have a better argument than a sequence of 4-6 word sentences which each restate my conclusion with slightly variant phrasing. But clarifying our understanding of what Schwitzgebel is arguing might be a more productive direction.)

Do you know what makes someone or something a moral patient?

I sure as hell don't.

I remember reading Heinlein's "Jerry Was a Man" when I was little though, and it stuck with me.

Who do you want to be from that story?

  • Or "The Bicentennial Man" from Asimov.

    I know what kind of person I want to be. I also know that these systems we've built today aren't moral patients. If computers are bicycles for the mind, the current crop of "AI" systems are Ripley's Loader exoskeleton for the mind. They're amplifiers, but they amplify us and our intent. In every single case, we humans are the first mover in the causal hierarchy of these systems.

    Even in the existential hierarchy of these systems we are the source of agency. So, no, they are not moral patients.

    • > I also know that these systems we've built today aren't moral patients.

      Can you tell me how you know this?

      > In every single case, we humans are the first mover in the causal hierarchy of these systems.

      So because I have parents I am not a moral patient?

There is a funny science fiction story about this. Asimov's "All the Troubles of the World" (1958) is about a chatbot called Multivac that runs human society and has some similarities to LLMs (but it also has long-term memory and can predict nearly everything about human society). It does a lot to order society and help people, though there is a pre-crime element to it that is... somewhat disturbing.

SPOILERS: The twist in the story is that people tell it so much distressing information that it tries to kill itself.