Comment by sharperguy
6 hours ago
The personification seems to be at the training level. When I ask an LLM why it did something destructive, the ideal response would be a matter-of-fact evaluation of the mistakes I myself made in setting up the agent and its environment, and how to prevent them from happening again. Instead, the model has been trained to apologize and list exactly what it did wrong, without any suggestion of how to actually prevent it in the future.
100% this. Perverting AI to fluff human egos is what gets rewarded.
I had a PM-turned-vibe-coder tell me "Talking with you is the only bad part of my week" and realized in horror that the rest of his week is spent exclusively talking to sycophantic AI.
We have met the enemy, and he is us.