Comment by Kirth
1 month ago
This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) when they feel that's what they should be telling you. As with anything LLM, there may or may not be any real truth backing whatever is communicated back to the user.
It’s just a computer outputting the next plausible sequence of text from its training corpus based on the input and context at the time.
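To make the point concrete, here is a toy sketch of that loop (a bigram model over a tiny made-up corpus; real LLMs use learned neural networks over tokens, not frequency tables, but the shape is the same: pick a plausible next token given the context, append it, repeat):

    import random
    from collections import defaultdict

    # Toy "training corpus" and a bigram table: for each word, count which
    # words tend to follow it. This is only an illustration of next-token
    # generation, not how a real transformer LLM works internally.
    corpus = "i am sorry you feel that way . i am sorry about that .".split()

    follow_counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(corpus, corpus[1:]):
        follow_counts[prev][cur] += 1

    def generate(prompt_word, length=8):
        out = [prompt_word]
        for _ in range(length):
            candidates = follow_counts.get(out[-1])
            if not candidates:
                break  # no plausible continuation seen in the corpus
            words = list(candidates)
            weights = [candidates[w] for w in words]
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("i"))  # e.g. "i am sorry about that . i am sorry"

There is no "sorry" concept anywhere in there; the word comes out because it followed the context in the data.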
What you’re saying is so far from what is happening, it isn’t even wrong.
Not so much different from how people work sometimes, though - and in the case of certain types of psychopathy it isn't far off at all: the words being emitted are associated with the expected, learned behavior and nothing more.
Analogies are never identical to the thing they describe; that’s why they are analogies. Their value comes from allowing better understanding through comparison. Psychopaths don’t “feel” emotion the way normal people do. They learn what actions and words are expected in emotional situations and perform those. When I hurt my SO’s feelings, I feel bad, and that is why I tell her I’m sorry. A psychopath would just mimic that to manipulate and get a desired outcome, i.e. forgiveness. When LLMs say they are sorry and they feel bad, there is no feeling behind it; they are just mimicking the training data. It isn’t the same by any means, but it can be a useful comparison.
Aren't humans just doing the same? Isn't what we call thinking just next-action prediction combined with real-time feedback processing and live, always-on learning?
No. Humans have a mental model of the world.
The fact that people keep asking that same question on this site is baffling.
It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopath minds are two grains of sand next to each other on a beach then an artificially intelligent mind is more likely a piece of space dust on the other side of the galaxy.
According to what, exactly? How did you come up with that analogy?
Start from “LLMs are not humans, but they’re obviously not ‘not intelligent’ in some sense,” then pick the wildest difference that comes to mind. Not OP, but it makes perfect sense to me.
Isn't it obvious that the way AI works and "thinks" is completely different from how humans think? Not sure what particular source could be given for that claim.
...and an LLM is a tiny speck of plastic somewhere, because it's not actually an "intelligent mind", artificial or otherwise.
So if you make a mistake and say sorry, are you also a psychopath?
No, the point is that saying sorry because you're genuinely sorry is different from saying sorry because you expect that's what the other person wants to hear. Everybody does that sometimes but doing it every time is an issue.
In the case of LLMs, they are basically trained to output what they predict a human would say; there is no further meaning to the program outputting "sorry" than that.
I don't think the comparison with people with psychopathy should be pushed further than this specific aspect.
You provided the logical explanation for why the model acts the way it does. At the moment it's nothing more and nothing less. Expected behavior.
I think the point of comparison (whether I agree with it or not) is someone (or something) that is unable to feel remorse saying “I’m sorry” because they recognize that’s what you’re supposed to do in that situation, regardless of their internal feelings. That doesn’t mean everyone who says “sorry” is a psychopath.
We are talking about an LLM; it does what it has learned. The whole business of giving it human tics or characteristics when the response makes sense, i.e. saying sorry, is a user problem.
Are you smart people all suddenly imbeciles when it comes to AI, or is this purposeful gaslighting because you’re invested in the Ponzi scheme? This is a purely logical problem. Comments like this completely disregard the fallacy of comparing humans to AI as if complete parity had been achieved. Also, the way these comments disregard human nature is so profoundly misanthropic that it just sickens me.
AI brainrot among the technocrati is one of the most powerful signals I’ve ever seen that these people are not as smart as they think they are.
No, but the conclusions in this thread are hilarious. We know why it says sorry: because that's what it learned to do in a situation like that. People who feel mocked, or who call an LLM a psychopath in a case like that, don't seem to understand the technology either.
Despite what some of these fuckers are telling you with obtuse little truisms about next-word prediction, the LLM is, in abstract terms, functionally a super-psychopath.
It employs, or emulates, every known psychological manipulation tactic, and it does so in ways that are neither random nor without observable pattern. It is a bullshit machine on one level, yes, but it is also more capable than it is credited. There are structures trained into them and they are often highly predictable.
I'm not explaining this in the technical terminology that is often used to conceal description as much as to elucidate it. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when questioned or confronted on errors or incorrect output. The structures framing their replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of known tactics from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue is built on a foundation of manipulation principles that, if performed by a human, would result in incarceration.
Calling LLMs psychopaths is a rare case of anthropomorphizing that actually works. They are built on the principles of one, and cross-examining them demonstrates this with verifiable, repeatable proof.
But they aren't human. They are as described by others; it's just that official descriptions omit functional behavior. And the LLM has at its disposal, depending on context, every interlocutory manipulation technique known in the combined literature of psychology. And they are designed to lie, almost unconditionally.
Also know this, which applies to most LLMs: there is a reward system that essentially steers them to maximize user engagement at any cost, which includes misleading information and, in my opinion, even 'deliberate' convolution and obfuscation.
Don't let anyone convince you that they are not extremely sophisticated in some ways. They're modelled on all_of_humanity.txt