Comment by godelski
4 hours ago
> "get multiple different minds involved to check each other's blind spots
This is actually my big gripe about chatbot coding agents. They are trained on human preference, and thus the errors they do make tend to fall in our blind spots.
I don't think people take this subtlety seriously enough. Unless we have an /objective/ ground truth, we end up optimizing a proxy. So we don't optimize for code that /is/ correct, we optimize for code that /looks/ correct. It may seem like a subtle difference, but it is critical.
The big difference is that when they make errors, those errors are more likely to be ones that are difficult for humans to detect.
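As a hypothetical illustration (this function is mine, not from the comment or any model's output): the sketch below reads as plausible and would likely survive a quick review, yet it quietly drops the final window.

    def moving_average(values, window):
        """Return the moving average of `values` over a sliding window."""
        return [
            sum(values[i:i + window]) / window
            # Subtle off-by-one: should be range(len(values) - window + 1);
            # as written, the last window is silently omitted.
            for i in range(len(values) - window)
        ]

Nothing about it trips a linter or a type checker; a reviewer skimming for shape rather than boundaries approves it, which is exactly the "looks correct" failure mode described above.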
Good tools should complement their users and fill in the gaps. But because we've been training agents to replace humans rather than complement them, we aren't focusing on this distinction. I want my coding agent to make errors that are obvious to me, just as I want the errors I make to be obvious to it (or for it to be optimized to detect the errors I make).