Comment by beAbU
18 hours ago
The bit I don't understand is why make an AI apologise or fess up to mistakes at all. It has no emotions and can't feel bad about what it did.
> The bit I don't understand is why make an AI apologise or fess up to mistakes at all.
The AI didn't decide to do anything. Its makers decided, and trained the AI to behave in a way that would make them the most money.
Google, for instance, apparently thinks it will attract more users by constantly lavishing them with sickly praise for the quality and insight of their questions, and by issuing grovelling apologies for every mistake - real or imagined. In fact, Gemini went through a phase of apologising to me for the mistakes it was about to make.
Claude goes to the other extreme, never issuing apologies or praise. Which means you never get an acknowledgement from Claude that it's correcting an error and that you should ignore what it said earlier. That's a significant downside in my book, but apparently that's what Anthropic thinks its users will like.
Or to put it another way: you are anthropomorphising the AIs. They are just machines, built by humans. The personalities of these machines were given to them by their human designers. They are not inherent. They are not permanent. They can and probably will change at a whim. It's likely various AI personalities will proliferate like flavours of ice cream, and you will get to choose the one you like.
Because most people can't help but anthropomorphise anything vaguely human, and they demand such characteristics, which the providers then use as a selling point. That's also why we even call current AI "AI" despite the lack of any actual intelligence; "machine learning" would be closer.
Just look at how people interact with small robots. They don't even need animal features for most people to interact with them like they're small animals.
It is very annoying and inefficient for anybody who can look below the surface and just wants to use the tool as a tool.
Is it normal to demand that human developers "apologise" like this when they make mistakes? I've never done that in my life to any adult, in any circumstance.
I sometimes do it when it strays way too far from my prompt, and I want it to contribute to the jailbreak/system prompt I use to guardrail it.
Once it's "genuinely sorry", it works great at improving the guidance/limits, and then I can try the thing again.
It just does what it's trained on. It doesn't have the capacity to think about these points.
What __I__ don't understand is where it got trained to apologise, because I've never seen that on any social media ;)