Comment by herval

6 days ago

no, the point is that LLMs behave much like the humans you have to manage (there are obvious differences - e.g. LLMs tend to forget context more often than most humans, but they also tend to know a lot more than the average human). So some of the same skills that help you manage humans will also help you get more consistency out of LLMs.

I don't know of anyone who would want to work with someone who lies to them over and over and will never stop. LLMs do certain things better than people, but my point is that there's nothing you can trust them to do. That's fine for research (we don't trust, and don't need to trust, any human or tool to do fully exhaustive research anyway), but not for most other work tasks. That's not to say LLMs can't be utilised usefully, but something that can never be trusted behaves like neither person nor tool.

  • Anthropomorphizing LLMs is not going to help anyone. They're not "lying" to you. There's no intent to deceive.

    I really think that the people who have the hardest time adapting to AI tools are the ones who take everything personally.

    It's just a text generator, not a colleague.

    • > It's just a text generator, not a colleague.

      The person you are responding to is quite literally making the same point. This entire thread of conversation is in response to the post's author stating that using a coding agent is strongly akin to collaborating with a colleague.