Comment by dirkc

1 day ago

If you have an agent (person or LLM) building software for you, you place a very high level of trust in that agent. Building trust is a process: you start with some trust and, over time, increase or decrease it.

In general this works with people. Accountability is part of it. But also, most people want to help.

I don't see how this works with LLMs. Consistently good results are not indicative of future performance. And despite the way we anthropomorphize LLMs, they don't have any true concept of helpfulness, malice, etc.