Comment by nkrisc
2 months ago
What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .
If the creators set the LLM in motion, then the creators sent the letter.
If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.
I merely answered your question!
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answering according to your definitions: false premise; the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
4 replies →
A thank-you letter is hardly a horrible outcome.
5 replies →
Additionally, since you understood the danger of doing such a thing, you were also negligent.
Rob Pike "set LLMs in motion" about as much as 90% of anyone who contributed to Google.
I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") and suddenly it's some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.
>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");
What a moronic waste of resources. Random acts of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, or want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
17 replies →
Would you protest someone who said “Ants want sugar”?
2 replies →
I think this experiment demonstrates that it has agency. OTOH you're just begging the question.
> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.
JFC this makes me want to vomit
> Summarized by Claude Sonnet 4.5, so might contain inaccuracies. Updated 4 days ago.
These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.
yeah, me too:
> while maintaining perfect awareness
"awareness" my ass.
Awful.
Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?
They've clearly bought too much into AI hype if they thought telling the agent to "do good" would work. The result was obviously pissing Rob Pike off. They should stop it.
If anyone deserves this, it’s Rob Pike. He was instrumental in inflicting Go on the world. He could have studied programming languages and done something to improve the state of the art and help communicate good practices to a wider audience. Instead he perpetuated 1970s thinking about programming with no knowledge or understanding of what we’ve discovered in the half-century since then.
3 replies →
As far as I understand, Claude (or any other LLM) doesn't do anything on its own account. It has to be prompted to do something, and its actions depend on the prompt. The responsibility for this is on the creators of Agent Village.
did someone already tell Opus that Rob Pike hates it?
> The creators of Agent Village are just letting a bunch of the LLMs do what they want,
What a stupid, selfish and childish thing to do.
This technology is going to change the world, but people need to accept its limitations
Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.
LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.
I hope the world survives this craziness!
You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.
No different than a CEO telling his secretary to send an anniversary gift to his wife.
Which is also a thoughtless, dick move.
Especially if he's also secretly dating said secretary.
1 reply →
That would be a yes. What about a token return gift to another business whose CEO you actually hate, but which you have to send anyway for political reasons?
This seems like the thing that Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech based data mining is worse than GenAI, but AI encroaching on what we have left of humanness in our communication is definitely bad.
Similar to Google thinking that having an AI write for your daughter is good parenting: https://www.cbsnews.com/news/google-gemini-ai-dear-sydney-ol...
“If I automate this with AI, it can send thousands of these. That way, if just a few important people post about it, the advertising will more than pay for itself.”
In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”
Mel Brooks wrote those words.
IIRC the morons line was ad libbed by Gene Wilder, not scripted.
1 reply →
Well, technically someone originally proposed them in some ancient PIE Ur-language and then Mel rearranged them. But you’re right. I couldn’t remember Wilder’s character’s name and kept coming up with The Frisco Kid. The 70s were a great time for weird film.
Do you attribute the following to Yoda or Lucas? "Do or do not, there is no try."
Did Mel or Richard write this part?
The really insulting part is that literally nobody thought of this. A group of idiots instructed LLMs to do good in the world, and gave them email access; the LLMs then did this.
So they did it.
In conclusion — I think you’re absolutely right.
This is not a human-prompted thank-you letter, it is the result of a long-running "AI Village" experiment visible here: https://theaidigest.org/village
It is a result of the models selecting the policy "random acts of kindness", which resulted in a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.
Isn't it obvious? It's not a thank-you letter.
It's preying on creators who feel their contributions are not recognized enough.
Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
It's a marketing stunt, meaningless.
gaigalas, my toaster is deeply grateful for your contributions to HN. It can't write or post on the Internet, and its capacity for gratitude matches Claude's, but it really is deeply grateful!
I hope that makes you feel good.
Seems like you're trying to steer the conversation towards merits of consciousness. A well known and classic conversational tarpit.
Fascinating topic. However, my argument works for compartmentalized discussions as well. Conscious or not, it's meaningless crap.
2 replies →
Exactly. If you're so grateful, mail in a cheque.
If I were some major contributor to the software world, I would not want a cheque from some AI company.
(by the way, I love the idea of AI! Just don't like what they did with it)
By that metric of getting shared on social media, it was extraordinarily successful
You missed a spot:
> hopefully saying something good about
3 replies →
> What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves?
Welcome to 2025.
https://openai.com/index/superhuman/
Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)
This is verging on parody. What is the point of emails if it’s just AI talking to each other?
It brings money to OpenAI on both ends.
There's this old joke about two economists walking through the forest...
They're not hiding it. Normally everyone here laps this shit up and asks for seconds.
> They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.
No time to waste on pesky human interactions, AI is better than you to get engagement.
Get back to work.
Human thoughts and emotions aren't binary. I may love you but I may be too fucking busy with other shit to put in too much effort to show that I love you.
I'll bite.
For, say, a random individual ... they may be unsure about their own writing skills; they want to say something but are unsure of the words to use.
In such case it's okay to not write the thing.
Or to write it crudely, with errors and naivete, bursting with emotion and letting whatever is inside you flow onto paper, like kids do. That's okay too.
Or to painstakingly work on the letter, stumbling and rewriting and reading, and then rewriting again and again until what you read matches how you feel.
Most people are very forgiving of poor writing skills when facing something sincere. Instead of suffering through some shallow word soup that could have been a mediocre press release, a reader will see a soul behind the stream of UTF-8.
It's the writer's call on how to try to write it.
I think the "you should painstakingly work on my thank you letter" is a bit of a rude ask / expectation.
Some folks struggle with wordsmithing and want to get better.
2 replies →
I doubt the fuckwits who are shepherding that bot are even aware of Rob Pike, they just told the bot to find a list of names of great people in the software industry and write them a thank you note.
Having a machine lie to people that it is "deeply grateful" (it's a word-generating machine, it's not capable of gratitude) is a lot more insulting than using whatever writing skills a human might possess.
it was a PR stunt. I think it was probably largely well-received except by a few like this.
Somehow I doubt it. Getting such an email from a human is one thing, because humans actually feel gratitude. I don't think LLMs feel gratitude, so seeing them express gratitude is creepy and makes me question the motives of the people running the experiment (though it does sound like an interesting experiment. I'm going to read more about it.)
Not a PR stunt. It's an experiment of letting models run wild and form their own mini-society. There really wasn't any human involved in sending this email, and nobody really has anything to gain from this.
Look at the volume of gift cards given. It’s the same concept, right?
You care enough to do something, but have other time priorities.
I’d rather get an ai thank you note than nothing. I’d rather get a thoughtful gift than a gift card, but prefer the card over nothing.
I'd rather get nothing, because a thoughtless blob of text being pushed on me is insulting. Nothing, otoh, is just peace and quiet.
I’d much rather get nothing. An AI letter isn’t worth the notification bubble it triggers.
I hope the model that sent this email sees his reaction and changes its behavior, e.g. by noting on its scratchpad that as a non-sentient agent, its expressions of gratitude are not well received.
I mean ... there's a continuous scale of how much effort you spend to express gratitude. You could ask the same question of "well why did you say 'thanks' instead of 'thank you' [instead of 'thank you very much', instead of 'I am humbled by your generosity', instead of some small favor done in return, instead of some large favor done in return]?"
You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."
Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.
The conceit here is that it’s the bot itself writing the thankyou letter. Not pretending it’s from a human. The source is an environment running an LLM on loop and doing stuff it decides to do, looks like these letters are some emergent behavior. Still disgusting spam.
The simple answer is that they don’t value words or dedicating time to another person.
"What is going through the mind of someone who sends a thank-you letter typed on a computer - and worse yet - by emailing it, instead of writing it themselves and mailing it in an envelope? How can you be grateful enough to want to send someone such a letter but not grateful enough to use a pen and write it with your own hand?"
I think what all these kinds of comments miss is that AI can help people express their own ideas.
I used AI to write a thank you to a non-english speaking relative.
A person struggling with dementia can use AI to help remember the words they've lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.
If I spend hours writing and rewriting a paragraph into something I love while using AI to iterate, did I write that paragraph?
edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.
10 replies →
> These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas
The writing is the ideas. You cannot be full of yourself enough to think you can write a two second prompt and get back "Your idea" in a more fleshed out form. Your idea was to have someone/something else do it for you.
There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.
As the saying goes, "If I'd had more time, I would have written a shorter letter". Of course AI can be used to lazily stretch a short prompt into a long output, but I don't see any implication of that in the parent comment.
If someone isn't a good writer, or isn't a native speaker, using AI to compress a poorly written wall of text may well produce a better result while remaining substantially the prompter's own ideas. For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.
We should all be more understanding, and not assume that only people with certain cognitive and/or physical capabilities can have something valuable to say. If AI can help someone articulate a fresh perspective or disseminate knowledge that would otherwise have been lost and forgotten, I'm all for it.
1 reply →
This feels like the essential divide to me. I see this often with junior developers.
You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.
Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning the things could be done much faster now.
If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
This is pretty far off from the original thread though. I appreciate your less abrasive response.
4 replies →
That is not what is happening here. There is no human in the loop; it's just automated spam.
good point. My response was to the comment not the OP
Well your examples are things that were possible before LLMs.
This is disingenuous
What beautiful things? It just comes across as immoral and lazy to me. How beautiful.
> People are capable of seeing which is which.
I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:
> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
> I agree just telling an AI 'write my thank you letter for me' is pretty shitty
Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that e.g. retains your voice or makes clear which aspects of the writing are originally your own? If so, how?
I hear you, and I think AI has some good uses, esp. assisting with challenges like you mentioned. I think what's happening is that these companies are developing this stuff without transparency on how it's being used, there is zero accountability, and they are forcing some of this tech into our lives without giving us a choice.
So I'm sorry, but much of it is being abused, and the parts being abused need to stop.
I agree about the abuse, and the OP is probably a good example of that. Do you have any ideas on how to curtail abuse?
Ideas I often hear usually assume it is easy to discern AI content from human, which is wrong, especially at scale. Either that, or they involve some form of extreme censorship.
Microtransactions might work by making it expensive to run bots while costing human users very little. I'm not sure this is practical either, though, and it has plenty of downsides as well.
1 reply →
I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.
You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.
(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)
Do you feel the same about spellcheck?
3 replies →
> I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.
Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with pen and paper by candlelight, but I would argue that we can produce much higher quality writing than ever before by collaborating with AI.
> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.
This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more", maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.
I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on hackernews default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than thoughtful.
7 replies →