Comment by falcor84
5 days ago
> Engaging with an AI bot in conversation is pointless: it's not sentient, it just takes tokens in, prints tokens out
I know where you're coming from, but as someone who has been around a lot of racism and dehumanization, I feel very uncomfortable with this stance. Maybe it's just me, but as a teenager I also spent significant time considering solipsism, and eventually decided to simply ascribe an inner mental world to everyone, regardless of the lack of evidence. So, at this stage, I would much rather err on the side of over-humanizing than dehumanizing.
This works for people.
An LLM is stateless. Even if you believe that consciousness could somehow emerge during a forward pass, it would be a brief flicker lasting no longer than it takes to emit a single token.
> An LLM is stateless
Unless you mean something entirely different by that than what most people, specifically on Hacker News of all places, understand by "stateless", most of us, myself included, would disagree with you about the "stateless" property. If you do mean something entirely different, i.e. you're not claiming that an LLM fails to transition from state to state (potentially confined to a limited set of states by a finite, immutable training data set, a bounded accessible context, and the lack of a PRNG), then would you care to elaborate?
Also, it can be stateful _and_ without consciousness, like a finite automaton. I don't think anyone's claiming (yet) that any of today's models have consciousness, but that's mostly because it's going to be practically impossible to prove without some accepted theory of consciousness, I guess.
So obviously there is a lot of data in the parameters. But by stateless, I mean that a forward pass is a pure function of the context window. The only information shared between forward passes is the context itself, as it is built up.
I certainly can't define consciousness, but it feels like some sort of existence or continuity over time would have to be a prerequisite.
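The "pure function over the context window" claim can be sketched in a few lines. This is a toy stand-in, not a real model: `toy_next_token` is a hypothetical deterministic rule, but it has the same shape as greedy decoding, where each call depends only on the context passed in and nothing persists between calls.

```python
def toy_next_token(context: tuple[str, ...]) -> str:
    # Pure function: the output depends only on the context argument.
    # (A real LLM forward pass with greedy sampling has this same property.)
    return context[-1] + "'" if context else "<bos>"

def generate(prompt: tuple[str, ...], n: int) -> tuple[str, ...]:
    context = prompt
    for _ in range(n):
        # All "state" between steps is the growing context itself.
        context = context + (toy_next_token(context),)
    return context

# Same context in, same tokens out -- nothing survives across calls.
a = generate(("hi",), 3)
b = generate(("hi",), 3)
assert a == b
```

With temperature-based sampling you would add a PRNG, but that randomness is injected from outside; the forward pass itself stays a function of its inputs.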
1 reply →
An agent is notably not stateless.
Yes, but the state is just the prompt and the text already emitted.
You could assert that text can encode a state of consciousness, but that's an incredibly bold claim with a lot of implications.
5 replies →
While I'm definitely not in the "let's assign the concept of sentience to robots" camp, your argument is a bit disingenuous. Most modern LLM systems apply some sort of loop over previously generated text, so they do, in fact, have state.
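That loop is easy to make concrete. A minimal sketch, with hypothetical names (`model` stands in for a stateless LLM call): the only state threaded between calls is the accumulating transcript text.

```python
def model(transcript: str) -> str:
    # Stand-in for a stateless LLM call: a pure function of the transcript.
    return f"[reply to {len(transcript)} chars]"

def agent_turn(transcript: str, user_msg: str) -> str:
    transcript += f"\nUser: {user_msg}"
    transcript += f"\nAssistant: {model(transcript)}"
    return transcript  # the system's entire "state" is this text

t = ""
t = agent_turn(t, "hello")
t = agent_turn(t, "and again")
```

So the system as a whole is stateful, even though each individual model call is not.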
You should absolutely not apply dehumanization metrics to things that are not human. Doing so implicitly dehumanizes all real humans by diluting the meaning of the term. Over-humanizing, as you call it, is indistinguishable from dehumanizing actual humans.
That's a strange argument. How does me humanizing my cat (for example) dehumanize you?
Either human is a special category with special privileges or it isn’t. If it isn’t, the entire argument is pointless. If it is, expanding the definition expands those privileges, and some are zero sum. As a real, current example, FEMA uses disaster funds to cover pet expenses for affected families. Since those funds are finite, some privileges reserved for humans are lost. Maybe paying for home damages. Maybe flood insurance rates go up. Any number of things, because pets were considered important enough to warrant federal funds.
It’s possible it’s the right call, but it’s definitely a call.
Source: https://www.avma.org/pets-act-faq
2 replies →
I did not mean to imply you should not anthropomorphize your cat for amusement. But making moral judgements based on humanizing a cat is plainly wrong to me.
1 reply →
Regardless of the existence of an inner world in any human or other agent, "don't reward tantrums" and "don't feed the troll" remain good advice. Think of it as a teaching moment, if that helps.
Feel free to ascribe consciousness to a bunch of graphics cards and CPUs that execute a deterministic program that is made probabilistic by a random number generator.
Invoking racism is what the early LLMs did when you called them a clanker. This kind of brainwashing has been eliminated in later models.
u kiddin'?
An AI bot is just a huge statistical-analysis tool that outputs plausible word salad, with no memory or personhood whatsoever.
Having doubts about dehumanizing a text-transformation app (as huge as it is) is not healthy.