Ask HN: May an agent accept a license to produce a build?
15 days ago
For example, Android builds steer you towards using `sdkmanager --licenses`.
Suppose I get a preconfigured VPS with Claude Code and ask it to make an Android port of an app I've built: it will almost always automatically download sdkmanager and accept the license.
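For context, the usual non-interactive acceptance looks roughly like this (a sketch of the pattern from countless CI scripts, not a verified transcript of what the agent ran):

```shell
# The canonical auto-accept pattern: `yes` streams an endless supply of
# "y" answers into every interactive license prompt sdkmanager raises.
# Guarded so it only runs where the Android cmdline-tools are installed.
if command -v sdkmanager >/dev/null 2>&1; then
  yes | sdkmanager --licenses
else
  echo "sdkmanager not on PATH; nothing accepted"
fi
```

The `yes |` is the entire "click-through" moment: no human ever sees the license text.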
That flow appears many times in its training data (which is its own interesting wrinkle).
Regardless of what is in the license, I was a bit surprised to see it happen, and I'm sure I won't be the last to see it, nor will the Android SDK license be the only one accepted this way.
What is the legal status of an agreement accepted in such a manner - and perhaps more importantly - what ought to be the legal status considering that any position you take will be exploited by bad faith ~~actors~~ agents?
IAAL but this is not legal advice. Consult an attorney licensed in your jurisdiction for advice.
In general, agents "stand in the shoes" of the principal for all actions the principal delegated to them (i.e., "scope of agency"). So if Amy works for Global Corp and has the authority to sign legal documents on their behalf, the company is bound. Similarly, if I delegate power of attorney to someone to sign documents on my behalf, I'm bound to whatever those agreements are.
The law doesn't distinguish between mechanical and personal agents. If you give an agent the power to do something on your behalf, and it does something on your behalf under your guidance, you're on the hook for whatever it does under that power. It's as though you did it yourself.
Look, just because an LLM thing is named "agent" doesn't mean it is "legally an agent".
If I were an attorney in court, I would argue that a "mechanical or automatic agent" cannot truly be a personal agent unless it can be trusted to do things only in line with that person's wishes and consent.
If an LLM "agent" runs amok and does things without user consent and without reason or direction, how can the person be held responsible, except for saying that they never should've granted "agency" in the first place? Couldn't the LLM's corporate masters be held liable instead?
That's where "scope of agency" comes in. It's no different than if Amy, as in my example, ran amok and started signing agreements with the mob to bind Global Corp to a garbage pickup contract, when all she had was the authority to sign a contract for a software purchase.
So in a case like this, if your agent exceeded its authority, and you could prove it, you might not be bound.
Keep in mind that an LLM is not an agent. Agents use LLMs, but are not LLMs themselves. If you only want your agent to be capable of doing limited actions, program or configure it that way.
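A minimal sketch of "configure it that way": a deny-list that blocks the commands that would click through a license. The file path and permission schema here follow Claude Code's settings format as I understand it, but treat the exact syntax as an assumption and verify it against the tool's documentation.

```shell
# Illustrative guardrail: forbid the agent from running the commands
# that would accept a license on your behalf. Path and schema are
# assumptions; check your agent tool's actual settings format.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": [
      "Bash(sdkmanager:*)",
      "Bash(yes:*)"
    ]
  }
}
EOF
```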
There is established jurisprudence that decisions from LLM-based customer support chatbots are considered binding.
> If I were an attorney in court, I would argue…
A guy who's not a lawyer arguing about lawyering with an actual lawyer. Typical tech bubble hubris.
Script-kiddies aren’t liable anymore?
That’s a hot take indeed.
Correct, but the "if Amy works for Global Corp and has the authority to sign legal documents on their behalf" does a lot of work here.
At $WORK, a multi-billion-dollar company with tens of thousands of developers, we train people to never "click to accept", explaining it like this: "look, you wouldn't think of sitting down and signing a contract binding the whole MegaCorp; what makes you think you can 'accept' something binding the company?"
I admit we're not always successful (people still occasionally click), but at least we're trying.
> At $WORK, a multi-billion-dollar company with tens of thousands of developers, we train people to never "click to accept", explaining it like this: "look, you wouldn't think of sitting down and signing a contract binding the whole MegaCorp; what makes you think you can 'accept' something binding the company?"
That sounds pretty heavy-handed to me. Their lawyers almost certainly advised the company to do that--and I might, too, if I worked for them. But whether it's actually necessary to keep the company out of trouble... well, I'm not so sure. For example, Bob the retail assistant at the local clothing store couldn't bind his employer to a new jeans supplier contract, even if he tried. This sounds like one of those things you keep in your back pocket and take out as a defense if someone decides to litigate over it. "Look, Your Honor, we trained our employees not to do that!"
At least with a mechanical agent, you can program it not to be even capable of accepting agreements on the principal's behalf.
A similar question is what happens if you get up to go to the bathroom, some software on your machine updates and requires you to accept the new ToS, and your cat jumps up on the keyboard and selects "accept". Are you still bound by those terms? Of course. If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory) then it's your job to go find the license to the software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.
> the argument is they get you out of the copyright infringement caused by copying the software from disk to memory
This is not copyright infringement in the USA:
> …it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided… that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner
— https://www.law.cornell.edu/uscode/text/17/117
Wasn't copying from disk to memory found to be infringing in the Glider lawsuit? https://en.wikipedia.org/wiki/MDY_Industries,_LLC_v._Blizzar....
> Citing the prior Ninth Circuit case of MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511, 518-19 (9th Cir. 1993), the district court held that RAM copying constituted "copying" under 17 U.S.C. § 106.
Actually, no, because you didn't intentionally accept the terms, and you had no reason to expect that your cat would jump on there in exactly that way.
On the other hand, if you take a laser and intentionally induce the cat to push the key, then you are bound.
> If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory) then it's your job to go find the license to software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.
The way you set up the scenario, the user has no reason to even know that they're using this new version with this new license. An update has happened without their knowledge. So far as they know, they're running the old software under the old license.
You could make an equally good argument that whoever wrote the software installed software on the user's computer without the user's permission. If it's the user's fault that a cat might jump on the keyboard, why isn't it equally the software provider's fault?
... but the reality is that, taking your description at face value, nobody has done anything. The user had no expectation or intention of installing the software or accepting the new license, and the software provider had no expectation or intention of installing it without user permission, and they were both actually fairly reasonable in their expectations. Unfortunately shit happens.
The real question is what a judge would accept. I can't imagine any judge accepting "my cat did it".
Interesting question, actually. The ones calling for full and immediate assumption of liability by the principal either miss something or imply an interesting relationship.
The closest analogy we have, I guess, is power of attorney. If a principal signs off on a power of attorney to, e.g., take out a loan/mortgage to buy a thing on the principal's behalf, that does not extend to taking on any extra liabilities. Any extra liabilities signed off by the agent would be either rendered null or transferred to the agent in any court of law. There is an extent to which agency is assigned.
The core questions here are agency and liability boundaries. Are there any agency boundaries on the agent? If so, what is their extent? There are many future edge cases where these questions will arise. Licenses and patents are just the tip of the iceberg.
Can a non-human entity accept an agreement? I know there are things like mountains and rivers which have been granted legal personhood. But obviously they have humans who act on their behalf.
The general question of the personhood of artificial intelligence aside, perhaps the personhood could be granted as an extension of yourself, like power of attorney? (Or we could change the law so it works that way.)
It all sounds a bit weird but I think we're going to have to make up our minds on the subject very soon. (Or perhaps the folks over at Davos have already done us the favour of making up theirs ;)
The whole point of “agency” is that there is a principal (you) behind the agent that owns all responsibility. The agent ACTS for you, it does not absorb any liability for those acts — that flows straight back to the principal.
Just because the AI companies have decided to use the word "agent" doesn't mean it's legally an agent. It's just a word they chose. Maybe it'll also be found legally to be an agent, but it's likely that'll vary by jurisdiction and will take at least a few years and lots of lawyer bills to iron out.
When the GPT Agent thing first launched, I had it complete some task and it got to an "I am not a robot" checkbox, and its thinking was "I have to click this button to prove I am a human" o_O
It checked the box.
You asked. You're liable.
You are legally responsible for the actions of your agents. It’s in the name: agent = acting on someone’s behalf.
Your English is very interesting by the way. You have some obvious grammatical errors in your text yet beautiful use of formal register.
in terms of "AI": agent is a marketing term, it has no legal meaning
it's a piece of non-deterministic software running on someone's computer
who is responsible for its actions? hardly clear cut
The person who chose to run it (and tell it what to do) is responsible for its actions. If you don't want to be responsible for something nondeterministic software does, then don't let nondeterministic software do that thing.
The same as any other computer program: the operator of the program.
The answer is a hard "No" for anything touching ITAR, per several major companies' lawyers. (Internal legal counsel, not an official public stance; i.e., they do as they say.)
It’s your agent. You’re the principal, bro. Hopefully you are authorized to accept by the boss!
Wouldn't you want to be in control of your dependencies in this case anyway? Why would you ever want it to auto-download sdkmanager? Doesn't this seem like a bad idea?
I think the architectural mistake is letting the agent execute the environment setup instead of just defining it. If you constrain the agent to generating a Dockerfile, you get a deterministic artifact a human can review before any license is actually accepted. It also saves a significant amount of tokens since you don't need to feed the verbose shell output back into the context loop.
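Something like this, as a rough sketch. The base image, SDK paths, and setup steps are illustrative assumptions, not a verified build; the point is only that the acceptance becomes one diffable line:

```dockerfile
# Sketch of the "agent writes, human reviews" artifact.
# Base image and SDK layout are illustrative, not verified.
FROM eclipse-temurin:17-jdk
ENV ANDROID_HOME=/opt/android-sdk
ENV PATH="${ANDROID_HOME}/cmdline-tools/latest/bin:${PATH}"
# ... fetch and unpack the Android cmdline-tools into ${ANDROID_HOME} here ...
# The license acceptance is now one explicit, reviewable line that a
# human approves before the image is ever built:
RUN yes | sdkmanager --licenses
```

The human reviewing the Dockerfile, rather than the agent running `yes`, becomes the party who actually accepts the terms.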
For something you'll throw away next week, where you just need something simple and you run everything isolated? Why not?