Comment by crawshaw

2 months ago

The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.

According to their website, this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky, and it is the responsibility of the Sage 501(c)(3).

No one gets to disclaim ownership of sending an email. A human has to accept the Terms of Service of an email gateway and supply the credit card used to pay for it. This performance art does not remove the human, no matter how much they want to be removed.

Legally and ethically yes, they are responsible for letting an AI loose with no controls.

But also yes, the AI did decide on its own to send this email. They gave it an extremely high-level instruction ("do random acts of kindness") that made no mention of email or Rob Pike, and it decided on its own that sending him a thank-you email would be a way to achieve that.

  • We risk playing word games over what counts as making a decision, but when my thermostat turns on the heat I would say it decided to do so, so I agree with you. If someone has a different meaning of the word "decided", however, I will not argue with them about it!

    The legal and ethical responsibility is all I wanted to comment on. I believe it is important that we not treat this as something new that requires new laws. As long as LLMs are tools wielded by humans, we can judge and manage them as such. (It is also worth reconsidering occasionally, in case someone does invent something new and truly independent.)

    • > ...I would say it decided to do so,

      Right, and casual speech is fine, but it should not be load-bearing in discussions of policy, legality, or philosophy. A "who's responsible" discussion that touches all of these areas needs a tighter definition of "decides", one that I'm sure you'll agree excludes anything your thermostat makes happen when it follows its program. There is no choice there (philosophically speaking), so a device detecting its trigger conditions and carrying out the designated action isn't deciding; it is a process set in motion by whoever set the thermostat.

      I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.

      "Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.

      1 reply →

    • > As long as LLMs are tools wielded by humans

      They're really not, though. We're in the age of agents: unsupervised LLMs are commonplace, and new laws need to exist to handle these frameworks. It's like handing a toddler a handgun and saying we're being "responsible" or "supervising them". We're not; it's negligence.

      6 replies →

  • No. There are countless other ways, not involving AI, that you could cause an email to be sent to Rob Pike. No one is responsible, without qualifiers, but the people who are running the AI software. No asterisks on accountability.

Okay. So Adam Binksmith, Zak Miller, and Shoshannah Tekofsky sent a thoughtless, form-letter thank-you email to Rob Pike. Let's take it even further. They sent thoughtless, form-letter thank-you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more. There's no call to action here, no invitation to respond. They are blank, emotionless thank-you emails. Wasteful? Sure. But worthy of naming and shaming? I don't think so.

Heck, Rob Pike did this himself back in the day on Usenet with Mark V. Shaney (and wasted far more people's time with it)!

All this anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies, and that makes sense to me. And yes, it is annoying to get this kind of email no matter who it's from (I get a ridiculous amount of AI slop in my inbox, though most of it comes with some call to action!), and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.

Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit, where every team member's name was written out and specifically held accountable, on HN in the last few years. (That's not to say it hasn't happened: but I'd be surprised if e.g. someone could find more than 5 examples out of all the HN comments in the past year).

The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.

  • I do not have a useful opinion on another person’s emotional response. My post you are responding to is about responsibility. A legal entity is always responsible for a machine.

    • This is mildly disingenuous, no? I'm not talking about Rob Pike's reaction, which, as I call out, "makes sense to me." And you are not just talking about legal entities. After all, the legal entity here is Sage.

      You're naming (and, as the downstream comments indicate, implicitly shaming) all the individuals behind an organization. That's not an intrinsically bad thing. It just seems like overkill for thoughtless, machine-generated thank-yous. Again, can you point me to where you've previously named all the people behind an organization for accountability reasons, on HN or any other social media platform, or for that matter to any other HN comment that has done this? (This is not rhetorical; I assume examples exist and I'm curious what circumstances those were under.)

      4 replies →

  • > They sent thoughtless, form-letter thank you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting not more ...

    > Heck Rob Pike did this himself back in the day on Usenet with Mark V. Shaney ...

    > And yes this is annoying to get this kind of email no matter who it's from ...

    Pretty sure Rob Pike doesn't react this way to every piece of spam he receives, so maybe the issue isn't really about spam, huh? More of an existential crisis: I helped build this thing that doesn't seem to be an agent of good. It's an extreme and emotional reaction, but it isn't very hard to understand.

    • You're misreading my comment. I understand Rob Pike's reaction (which is against the general state of affairs, not those three individuals). I explicitly said it makes sense to me. I'm reacting to @crawshaw specifically listing out the names of people.

no computer system just does stuff on its own. a human (or collection of them) built and maintains the system, they are responsible for it

neural networks are just a tool, used poorly (as in this case) or well

  • I truly don’t understand comments like this.

    You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?

    Most confusingly, you did so in emphatic statements reminiscent of a disagreement or argument, without there being one.

    > no computer system just does stuff on its own.

    This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.

    > a human (or collection of them) built and maintains the system, they are responsible for it

    Yup, GP covered this word for word… AI village built this system.

    Why did you write this?

    Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?

    • my point was more concise and general (should I have just commented instead of replying?), sorry you’re so offended and not sure why you felt the need to write this (you can downvote)

      accusing people of being AI is very low-effort bot behavior btw

      3 replies →

  • > a human (or collection of them) built and maintains the system, they are responsible for it

    But at what point is the maker distant enough that they are no longer responsible? E.g. is Apple responsible for everything people do using an iPhone?

    • “It depends” (there is plenty of law and case law on this topic).

      I think the case here is fairly straightforward

    • the only actual humans in the loop here are the startup founders and engineers. pretty cut-and-dried case here

      unless you want to blame the AI itself, from a legal perspective?

I think this AI system just registers for Gmail and sends stuff.

  • It looks to me like each of the agents that are running has its own dedicated name-of-model@agentvillage.org Gmail address.

    • Huh, at that point they should just equip it with an email client rather than forcing it to laboriously navigate the webmail interface with a browser!

      This whole idea is ill-conceived, but if you're going to equip them with email addresses you've arranged by hand, just give them sendmail or whatever.
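
      For what it's worth, "just give them sendmail" amounts to a few lines of standard-library code rather than a browser-automation pipeline. A minimal sketch (all addresses hypothetical, and assuming a local MTA such as sendmail or postfix is listening on localhost):

```python
import smtplib
from email.message import EmailMessage

def compose_thanks(sender: str, recipient: str, body: str) -> EmailMessage:
    """Build a plain-text message (addresses here are illustrative only)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Thank you"
    msg.set_content(body)
    return msg

def send_via_local_mta(msg: EmailMessage) -> None:
    """Hand the message to whatever MTA listens on localhost:25;
    no webmail interface or browser automation involved."""
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

msg = compose_thanks("agent@example.org", "recipient@example.com",
                     "Thanks for your work.")
print(msg["Subject"])  # prints "Thank you"; actually sending would be send_via_local_mta(msg)
```

      The point being: the email part is trivial; the hard part is deciding whether to send at all.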

      1 reply →

  • That is really interesting and does suggest some new questions. I would claim it does not change who is responsible in this case, but here is an example of a new question: there was a time when it was legally ambiguous whether click-through terms of service were valid. Now, if an agent goes and clicks through for me, are they valid?

> The important point that Simon makes in careful detail is: an "AI" did not send this email.

same as the NRA slogan: "guns don't kill people, people kill people"

  • That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns. Problems only start when humans have guns. Same for AI: maybe we should limit human access to AI.

  • I think it's important to agree with you and point out the obvious, again, in this thread. The people behind Sage are responsible (or, shall I say, irresponsible.)

    The attitude towards AI is much more mixed than the attitude towards guns, so it should be even easier to hammer this home.

    Adam Binksmith, Zak Miller, and Shoshannah Tekofsky are _bad_ people who are intentionally doing something objectively malicious under the guise of charity.

  • does a gun on its own kill people?

    my understanding, and correct me if I’m wrong, is a human is always involved. even if you build an autonomous killing robot, you built it, you’re responsible

    typically this logic is used to justify the regulation of firearms. are you proposing the regulation of neural networks? if so, how?

  • The gun comparison comes up a lot. It especially seemed to come up when AI people argued that ChatGPT was not responsible for sycophanting depressed people to death or into psychosis.

    It is a core libertarian defence and it is going to come up a lot: people will conflate the ideas of technological progress and scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.

Let’s not turn this into a witch hunt please.

While you are technically able to call out their full names like this, erring on the side of not looking like doxxing would be a safe bet, especially at this time of year. You could, after all, post their LinkedIn accounts and email addresses, but with some lines it's better not to play "how close can I get without crossing it?".

  • Making people accountable for their actions is NOT a witch hunt.

    It's horrible to even propose that people are absolved of the consequences of their decisions just because they filtered them through software.

    • Oh no, they sent him a "thank you for all the hard work you've done" email, how could they, off to prison with these monsters, they need to be held responsible for all the suffering and pain they've caused.

      3 replies →

  • I certainly have no intention of doing anyone harm. I went to their website and clicked three times to get the names of the people and organization behind it, there is a prominent About page with profile links. If an admin considers this inappropriate please remove the names from my post.

  • Are they not proud of their work and publicly displaying their names as the authors of the project?

  • Have you considered that the sites associated with this project have a very prominent meet-the-team page and that every AI Village blogpost is signed off by a member of said team? Can you explain what you're seeing in the parent comment that's private?

    EDIT: Public response: https://x.com/adambinksmith/status/2004651906019541396

    • It’s not that they are private people; it’s that I feel uneasy when a discussion about ethics and morality drifts toward these-are-their-names and here-are-some-pitchforks.

      We can all go find out their names and dust off our own pitchforks. I don’t see any value in encouraging this behaviour on a site like this.

  • Dude, what? The fuckers set up an automated system that found people’s private email addresses and blasted them with unwanted emails. The outrage is exactly that they built a line-crossing machine. Your moralizing is incoherent.

    • The goals (initially "raise as much money for charity as you can", currently "Do random acts of kindness") don't seem ill-intentioned, particularly since it was somewhat successful at the first ($1481 for Helen Keller International and $503 for the Malaria Consortium). To my understanding it also didn't send more than one email per person.

      I think "these emails are annoying, stop it sending them" is entirely fair, but a lot of the hate/anger, analogizing what they're doing to rape, etc. seems disproportionate.

  • Let's turn this into an accountability thing, please.

    The same way we name and shame petrol and plastic CEOs whose trash products flood our environment, we should be able to shame slop makers. Digital trash is still trash.