Comment by dabaja
4 days ago
Interesting framing, but I think emotions are a proxy for something more tractable: loss functions over time. Engineers remember bad hygiene because they've felt the cost. You can approximate this for agents by logging friction: how many iterations a task took, how many reverts it needed, how much human correction it drew. Then weight memory retrieval by past-friction-on-similar-tasks. It's crude, but it lets the agent "learn" that certain shortcuts are expensive without needing emotions. The hard part is defining similarity well enough that the signal transfers. Still early, but directionally this has reduced repeat mistakes in our pipeline more than static rules did.
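To make that concrete, here's a minimal sketch of the idea, not our actual code -- all names (`TaskRecord`, `friction_score`, `retrieval_weight`) are illustrative, and the Jaccard overlap is just a placeholder for whatever similarity measure you'd actually use:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    tags: frozenset   # crude task descriptor; a real system might embed the task text
    iterations: int   # attempts the task took
    reverts: int      # changes rolled back
    corrections: int  # times a human had to step in

def friction_score(r: TaskRecord) -> float:
    # Weighted sum of friction signals; human correction weighted highest.
    return 3.0 * r.corrections + 1.0 * r.reverts + 0.25 * r.iterations

def similarity(a: frozenset, b: frozenset) -> float:
    # Jaccard overlap as a stand-in for "similar task" -- defining this
    # well enough that the signal transfers is the hard part.
    return len(a & b) / len(a | b) if a and b else 0.0

def retrieval_weight(memory: TaskRecord, new_task_tags: frozenset) -> float:
    # Boost memories from similar tasks in proportion to how painful they were,
    # so expensive shortcuts resurface when a comparable task comes up.
    return similarity(memory.tags, new_task_tags) * (1.0 + friction_score(memory))

# Example: rank past tasks for retrieval against a new "db schema" task.
history = [TaskRecord(frozenset({"db", "migration"}), 7, 2, 1),
           TaskRecord(frozenset({"ui", "layout"}), 3, 0, 0)]
ranked = sorted(history,
                key=lambda m: retrieval_weight(m, frozenset({"db", "schema"})),
                reverse=True)
```

The specific weights in `friction_score` are made up; the point is only that correction events dominate and iteration counts contribute weakly.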
How do you choose which loss function over time to pursue?
Honestly, it's empirical. We started with what was easiest to measure: human correction rate. If I had to step in and fix something, that's a clear signal the agent took a bad path. Iterations and reverts turned out to be noisier -- a high iteration count sometimes means the task was genuinely hard, not that the agent made a mistake. So we downweighted those.

The meta-answer: pick the metric that most directly captures "I wish the agent hadn't done that." For us that's human intervention. For a team with better test coverage, it might be test failures after commit. For infra work, maybe rollback frequency. There's no universal loss function; it depends on where your pain actually is. We just made the metric explicit and started logging it. The logging alone forced clarity.
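One way to make the downweighting mechanical rather than eyeballed -- a sketch, assuming you treat human corrections as the trusted signal; `pearson` and `calibrate_weights` are hypothetical names:

```python
def pearson(xs: list[float], ys: list[float]) -> float:
    # Plain Pearson correlation; returns 0.0 when either series is constant.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def calibrate_weights(iterations: list[float],
                      reverts: list[float],
                      corrections: list[float]) -> dict[str, float]:
    # Corrections are the ground truth here; each proxy's weight is its
    # (clamped) correlation with corrections, so noisy proxies shrink to zero.
    return {
        "corrections": 1.0,
        "reverts": max(0.0, pearson(reverts, corrections)),
        "iterations": max(0.0, pearson(iterations, corrections)),
    }
```

Swap corrections for whatever your trusted signal is (post-commit test failures, rollbacks) and the same calibration applies.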