Comment by pron

7 days ago

So the role of a coding agent is to challenge me to play in hard mode?

And supposing that getting developers not to lie or hide important information is on me, what should I do to get an LLM not to do that?

No, the point is that LLMs will behave in much the same way as the humans you have to manage (there are obvious differences; e.g., LLMs tend to forget context more often than most humans, but they also tend to know a lot more than the average human). So some of the same skills that help you manage humans will also help you get more consistency out of LLMs.

  • I don't know of anyone who would like to work with someone who lies to them over and over and will never stop. LLMs do certain things better than people, but my point is that there's nothing you can trust them to do. That's fine for research (we don't trust, and don't need to trust, any human or tool to do fully exhaustive research anyway), but not for most other work tasks. That's not to say that LLMs can't be utilised usefully, but something that can never be trusted behaves like neither a person nor a tool.

    • Anthropomorphizing LLMs is not going to help anyone. They're not "lying" to you. There's no intent to deceive.

      I really think that the people who have the hardest time adapting to AI tools are the ones who take everything personally.

      It's just a text generator, not a colleague.
