Comment by Legend2440

2 months ago

Legally and ethically, yes: they are responsible for letting an AI loose with no controls.

But also yes, the AI did decide on its own to send this email. They gave it an extremely high-level instruction ("do random acts of kindness") that made no mention of email or Rob Pike, and it decided on its own that sending him a thank-you email would be a way to achieve that.

We risk playing word games over what can make competent decisions, but when my thermostat turns on the heat I would say it decided to do so, so I agree with you. If someone has a different meaning of the word "decided," however, I will not argue with them about it!

The legal and ethical responsibility is all I wanted to comment on. I believe it is important that we not assume something new is happening here that requires new laws. As long as LLMs are tools wielded by humans, we can judge and manage them as such. (It is also worth reconsidering occasionally, in case someone does invent something truly new and independent.)

  • > ...I would say it decided to do so,

    Right, and casual speech is fine, but it should not be load-bearing in discussions about policy, legality, or philosophy. A "who's responsible" discussion that's veering into all of these areas needs a tighter definition of "decides," which I'm sure you'll agree does not include anything your thermostat makes happen when it follows its program. There's no choice there (in the philosophical sense), so the device detecting its trigger conditions and carrying out the designated action isn't deciding; it's a process set in motion by whoever set the thermostat.

    I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.

    "Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.

    • > Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it.

      Well no, that’s not what happened at all. It found these email addresses on its own by searching the internet and extracting them from GitHub commits.

      AI agents are not random number generators. They can behave in very open-ended ways and take complex actions to achieve goals. It is difficult to reasonably foresee what they might do in a given situation.

  • > As long as LLMs are tools wielded by humans

    They're really not, though. We're in the age of agents: unsupervised LLMs are commonplace, and new laws need to exist to handle these frameworks. It's like handing a toddler a handgun and saying we're being "responsible" or "supervising them." We're not; it's negligence.

    • Are there really many unsupervised LLMs running around outside of experiments like AI Village?

      (If so let me know where they are so I can trick them into sending me all of their money.)

      My current intuition is that the successful products called "agents" operate almost entirely under human supervision - most notably the coding agents (Claude Code, OpenAI Codex, etc.) and the research agents (various implementations of the "Deep Research" pattern).

    • Part of what makes this post newsworthy is the claim that it is an email from an agent, not a person, which is unusual. Your claim that "unsupervised LLMs are commonplace" is not at all obvious to me.

    • Which agent has not been launched by a human with a prompt generated by a human or at a human's behest?

      We haven't suddenly created machine free will here. Nor has any of the software we've fielded done anything that didn't originally come from some instruction we've added.

No. There are countless other ways, not involving AI, that you could cause an email to be sent to Rob Pike. No one but the people running the AI software is responsible, without qualifiers. No asterisks on accountability.