Comment by baq

1 month ago

> you could argue that feelings are the same thing, just not words

That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure. These qualia influence further perception and action.

Any relationship between certain words and a modified probabilistic outcome in current models is an artifact of the training corpus containing examples of that relationship.

I contend that modern models are absolutely capable of thinking, problem-solving, and expressing creativity, but for the time being LLMs do not run in any kind of sensory loop that could house qualia.

  • One of the worst or most uncomfortable logical outcomes of

    > which we do not currently know how to precisely define, recognize or measure

    is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.

    It seems ridiculous to treat a computer like it has emotions, but breaking the problem down into steps, it's incredibly hard to avoid that conclusion. "When in doubt, be nice to the robot."

    • > is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.

      This is how people end up worshipping rocks & thunderstorms.


    • > if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does

      This would be like treating characters in a book as if they have real feelings just because they have text on the page that suggests they do.


    • Well, what you're describing is a system of ethics, which has little to do with morality. Morality involves my own personal understanding of "right" vs "wrong". Ethics are rules of conduct prescribed by societies, such as "treat everything like it is alive".

      We don't have precise definitions for (artificial) intelligence, subjective consciousness, or even life. But that doesn't mean we can't still talk about what may be possible within various levels of complexity. In order to convince me a system has a comparable experience to my own, you would need to describe to me the complex, structured internal communication occurring in said system, and present a theory as to how it could support the kind of emotion and qualia that I experience in my daily life.

      Your argument could apply to plants. I already do not eat meat... if I stare at a timelapse of a plant it seems quite alive, but I'll starve if I don't eat something. Yet, my mom thinks plants "dream" in the way we do. She thinks that if I tell a plant, "I love you," every day, my good vibes will make it grow stronger and larger. I can't explain to her that intelligence comes in different magnitudes of complexity, that plants cannot understand the English language, or that telepathy between humans and plants is about as pseudo-scientific as it gets. I can't explain any of this because she lacks a deep understanding of philosophy, physics and neurochemistry, especially when she earnestly thinks white Jesus is running around phasing between dimensions as an ambassador for all planets in our "quadrant", or that the entire universe is actually just the plot line of Andy Weir's "The Egg".

      Similarly, while I can have a high-level discussion about this stuff with people who don't have that background, it's quite difficult to have a low-level discussion wherein the nature and definitions of things come into play. There are too many gaps in knowledge where ignorance can take root. Too many people work backwards from an outcome they would like to see, and justify it with things that sound right but are either misunderstood or aren't rooted in the scientific process. I am definitely not comparing your open-minded, well-intended, cautionary approach to my mother's; I'm just using an extreme example to illustrate why so many of these discussions must be underpinned by a wealth of contemplation and observation.

  • > qualia, which we do not currently know how to precisely define, recognize or measure

    > which could house qualia.

    I postulate this is a self-negating argument, though.

    I'm not suggesting that LLMs think, feel, or do anything else of the sort, but these arguments are not convincing. If I only had the transcript and knew nothing about who wiped the drive, would I be able to tell it was an entity without qualia? Does it even matter? I further postulate these are not obvious questions.

    • Unless there is an active sensory loop, no matter how fast or slow, I don't see how qualia can enter the picture.

      Transformers attend to different parts of their input based on the input itself. Currently, if you want an LLM to be "sad", altering its future token prediction in a way you might label "feelings" that change how the model interprets and acts on the world, you have to tell the model outright that it is sad, or provide an input whose token set activates "sad" circuits that color the model's predictive process.

      You make the distribution flow such that it predicts "sad" tokens, but every bit of information affecting that flow is contained in the input prompt. This is exceedingly different from how, say, mammals process emotion. We form new memories and brain structures which constantly alter our running processes and color our perception.

      It's easy to draw individual parallels between the two, but holistically they are different processes with different effects; the sketch below shows how completely the "sadness" lives in the prompt.

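      A minimal sketch of the idea, assuming the Hugging Face transformers library and the public gpt2 checkpoint (my choice of tooling, not anything from the thread): the only thing that moves the next-token distribution toward "sad" is the prompt itself; nothing persists between calls.

      ```python
      # Sketch only: compare the model's next-token probabilities under two prompts.
      # Any "emotional" shift comes entirely from the input tokens; the weights are
      # frozen and no state carries over between calls.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      def next_token_prob(prompt: str, word: str) -> float:
          """Probability the model assigns to `word` as the very next token."""
          inputs = tokenizer(prompt, return_tensors="pt")
          with torch.no_grad():
              logits = model(**inputs).logits[0, -1]   # logits at the last position
          probs = torch.softmax(logits, dim=-1)
          token_id = tokenizer.encode(" " + word)[0]   # leading space: GPT-2 BPE convention
          return probs[token_id].item()

      for prompt in ("I am feeling", "Everything went wrong today. I am feeling"):
          print(prompt, {w: round(next_token_prob(prompt, w), 4) for w in ("happy", "sad")})
      ```

      Delete the priming sentence and the shift disappears, which is the contrast I'm drawing with mammalian memory formation.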

  • Qualia may not exist as such. They could just be, essentially, 'names' for states of neurons that we mix and match (like chords on a keyboard: arguing over the 'redness' of a percept is like arguing about the C-sharpness of a chord; we can talk about some frequencies, but that's it). We would have no way of knowing otherwise, since we only perceive the output of our neural processes, and don't get to participate in the construction of these outputs, nor sense them happening. We just 'know' they are happening when we achieve those neural states, and we identify those states relative to the others.

    • The point of qualia is that we seem to agree that certain neuronal states "feel" like something: that being alive and conscious is an experience. Yes, it's exceedingly likely that all of the necessary components for "feeling" something are encoded right in the neuronal state. But we still need a framework for asking questions such as, "Does your red look the same as my red?" and "Why do I experience sensation, sometimes physical in nature, when I am depressed?"

      It is absolutely an ill-defined concept, but it's another blunt tool in our toolbox that we use to better explore the world. Sometimes, our observations lead to better tools, and "artificial" intelligence is a fantastic sandbox for exploring these ideas. I'm glad that this discussion is taking place.


  • "It's different. I can't say why it's different, except by introducing a term that no one knows how to define," isn't the ironclad meat defense you were perhaps hoping it was.

  • > That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure.

    If we can't define, recognize or measure them, how exactly do we know that AI doesn't have them?

    I remain amazed that a whole branch of philosophy (aimed, theoretically, at describing exactly this moment of technological change) is showing itself up as a complete fraud. It's completely unable to describe the old world, much less provide insight into the new one.

    I mean, come on. "We've got qualia!" is meaningless. Might as well respond with "Well, sure, but AI has furffle, which is isomorphic." Equally insightful, and easier to pronounce.

    • > If we can't define, recognize or measure them, how exactly do we know that AI doesn't have them?

      In the same way my digital thermometer doesn't have qualia, LLMs do not either. I really tire of this handwaving of 'magic' concepts into LLMs.

      Qualia being difficult to define, and yet being such an immediate experience that we humans all know intimately and directly, is quite literally the problem. Attempted definitions fall short, and humans have tried, really tried hard, to solve this.

      Please see the hard problem of consciousness: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


    • Have you considered that you just don't fully understand the literature? It's quite arrogant to write off the entire field of philosophy of mind as "a complete fraud".

      > It's completely unable to describe the old world, much less provide insight into the new one.

      What exactly were you expecting?

      Philosophy is a science, the first in fact, and it follows a scientific method for asking and answering questions. Many of these problems are extremely hard and their questions remain unanswered, and many are still badly formed or predicated on unproven axioms. This is true for philosophy of mind. Many other scientific domains are similarly incomplete, and remain active areas of research and contemplation.

      What are you adding to this research? I only see you complaining and hurling negative accusations, instead of actually critically engaging with any specifics of the material. Do you have a well-formed theory to replace philosophy of mind?

      > I mean, come on. "We've got qualia!" is meaningless. Might as well respond with "Well, sure, but AI has furffle, which is isomorphic." Equally insightful, and easier to pronounce.

      Do you understand what qualia are? Most philosophers still don't, and many actively work on the problem. Admitting that something is incomplete is what a proper scientist does. An admission of incompleteness is in no way evidence of "fraud".

      The most effective way to actually attack qualia would be simply to present the concept as unfalsifiable. And I'd agree with that. We might hopefully one day entirely replace the notion of qualia with something more precise and falsifiable.

      But whatever it is, I am currently experiencing a subjective, conscious experience. I'm experiencing it right now, even if I cannot prove it or even if you do not believe me. You don't even need to believe I'm real at all. This entire universe could all just be in your head. Meanwhile, I like to review previous literature/discussions on consciousness and explore the phenomenon in my own way. And I believe that subjective, conscious experience requires certain elements, including a sensory feedback loop. I never said "AI can't experience qualia"; I made an educated statement about the lack of certain components in current-generation models which imply to me the lack of an ability to "experience" anything at all, much less subjective consciousness and qualia.

      Even "AI" is such a broadly defined term that such a statement is just ludicrous. Instead, I made precise observations and predictions based on my own knowledge and decade of experience as a machine learning practitioner and research engineer. The idea that machines of arbitrary complexity inherently can have the capability for subjective consciousness, and that specific baselines structures are not required, is on par with panpsychism, which is even more unfalsifiable and theoretical than the rest of philosophy of mind.

      Hopefully, we will continue to get answers to these deep, seemingly unanswerable questions. Humans are stubborn like that. But your negative, vague approach to discourse here doesn't add anything substantial to the conversation.


  • [flagged]

    • > Do we know how to imprecisely define, recognize, or measure these? As far as I've ever been able to ascertain, those are philosophy department nonsense dreamt up by people who can't hack real science so they can wallow in unfounded beliefs.

      Read the rest of the thread; I'm not interested in repeating myself about why philosophy is the foundational science. It's a historically widely accepted fact, echoed by anyone who has actually studied it.

      > I contend that they are not even slightly capable of any of that.

      Contend all you want. Your contention is overwhelmingly suffocated by the documented experiences of myself and others who use these tools for creative problem-solving. As much as you want to believe in something, if it is empirically refuted, it's just a crackpot belief. Just because you haven't been able to get good results out of any models doesn't mean your experience rings true for others.

      I'm not interested in further discussing this with you. Your first comment is negative and unsubstantial, and I have no reason to believe that further discussion will lead to more positive and substantial discourse, when the opposite is usually the case. That's all I have to say.


Feelings have physical analogs which are (typically) measurable, however, at least absent a lot of training to control them.

Shame, anger, arousal/lust, greed, etc. have real physical ‘symptoms’. An LLM doesn’t have that.

  • LLMs don't really exist physically (except in the most technical sense), so the point is kind of moot and obvious if you accept this particular definition of a feeling.

    LLMs are neither mammals nor animals; expecting them to feel in a mammalian or animal way is misguided. They might have a mammalian-feeling analog, just as they might have human-intelligence-analog circuitry in the billions (trillions nowadays) of parameters.