Comment by B1FIDO

1 month ago

Look, just because an LLM thing is named "agent" doesn't mean it is "legally an agent".

If I were an attorney in court, I would argue that a "mechanical or automatic agent" cannot truly be a personal agent unless it can be trusted to do things only in line with that person's wishes and consent.

If an LLM "agent" runs amok and does things without user consent and without reason or direction, how can the person be held responsible, except for saying that they never should've granted "agency" in the first place? Couldn't the LLM's corporate masters be held liable instead?

That's where "scope of agency" comes in. It's no different than if Amy, as in my example, ran amok and started signing agreements with the mob to bind Global Corp to a garbage pickup contract, when all she had was the authority to sign a contract for a software purchase.

So in a case like this, if your agent exceeded its authority, and you could prove it, you might not be bound.

Keep in mind that an LLM is not an agent. Agents use LLMs, but are not LLMs themselves. If you only want your agent to be capable of doing limited actions, program or configure it that way.
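
That last point can be sketched in code. A minimal, hypothetical example (no particular agent framework; the names `ALLOWED_ACTIONS` and `dispatch` are invented for illustration) of allowlisting the actions an agent process may take, so anything outside its "scope of agency" is refused before it runs:

```python
# Hypothetical sketch: enforcing a limited "scope of agency" in the agent
# harness itself, rather than trusting the LLM to stay in bounds.

ALLOWED_ACTIONS = {"lookup_order", "quote_price"}  # deliberately no "sign_contract"

def dispatch(action: str, handler_table: dict):
    """Run an action only if it falls inside the configured scope."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is outside the agent's scope")
    return handler_table[action]()

handlers = {
    "lookup_order": lambda: "order #123: shipped",
    "quote_price": lambda: "$49.99",
    "sign_contract": lambda: "binding agreement!",  # present, but never reachable
}

print(dispatch("quote_price", handlers))   # allowed action runs normally
```

Whatever the model emits, the harness only ever executes what the allowlist permits; an out-of-scope request raises instead of executing.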

There is established jurisprudence that decisions from LLM-based customer support chatbots are considered binding.

  • That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to be a legal agent that forms binding contracts.

    It's basically the same with longstanding customer service "agents". They are authorized to do only what they are authorized to semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system it's not like you can take the company to court to enforce that.

    • > That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to form binding contracts.

      It's not that straightforward. A contract, at heart, is an agreement between two parties, each of whom must (among other things) reasonably believe that the other is either the principal themselves or acting under the principal's authority.

      I am sure that Air Canada did not intend to give the autonomous customer service agent the authority to make the false promises that it did. But it did so anyway by not constraining its behavior.

      > It's basically the same with longstanding customer service "agents". They are authorized to do only what they are authorized to semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system it's not like you can take the company to court to enforce that.

      I don't think that's necessarily correct. I believe the law (again, not legal advice) would bind the seller to the agent's price mistake unless (1) the customer knew it was a mistake and tried to take advantage of it anyway, or (2) the price was so outlandish that no reasonable person would believe it. That said, there's often a wide gap between what the law requires and what actually happens. Nobody's going to sue over a $10 price mistake.


> If I were an attorney in court, I would argue…

A guy who's not a lawyer arguing about lawyering with an actual lawyer. Typical tech bubble hubris.

  • What makes you think I'm not a lawyer? The point is that we're not in court; we're in a pseudonymous open forum on the Internet, where everyone has a stinky opinion and where actual attorneys are posting disclaimers that they are explicitly not giving legal advice.

    • Because principal/agent theory is covered (at least at the basic level) in 1L contract law and you'd have to know this to pass the Bar Exam.
