Comment by Smaug123

2 months ago

That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .

If the creators set the LLM in motion, then the creators sent the letter.

If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.

  • I merely answered your question!

    > How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

    Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.

  • A thank-you letter is hardly a horrible outcome.

  • Nobody sent a thank-you letter to anyone. A person started a program that sent unsolicited spam. Sending spam is obnoxious. Sending it in an unregulated manner to whomever the machine picks is obnoxious and shitty.

    • So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.

  • It actually is pretty bad. The person might read it and appreciate it, only to realize moments later that it was a thoughtless machine sending them the letter rather than a real human being. That robs them of the feeling and leaves them in a worse spot than before reading the letter.

  • Additionally, since you understood the danger of doing such a thing, you were also negligent.

  • Rob Pike "set LLMs in motion" about as much as 90% of anyone who contributed to Google.

    I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") and suddenly it's some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.

>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");

What a moronic waste of resources. Random act of kindness? How low is the bar that you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.

Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.

Why do people still think software has any agency at all?

  • Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.

    Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint, so you're just picking at the top-level 'easy' feature to sound important and informed.

    Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.

    We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.

    The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.

    • > Everybody knows LLMs are not alive and don't think, feel, want.

      No, they don't.

      There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.

      > We use this kind of language as a shorthand because ...

      You, not we. You're using the language of snake oil salesmen because they've made it commonplace.

      When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.

    • > Everybody knows LLMs are not alive and don't think, feel, want.

      Please go ahead now and EAT YOUR WORDS:

      https://lucumr.pocoo.org/2025/12/22/a-year-of-vibes/

      > Because LLMs now not only help me program, I’m starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer.

    • > Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.

      To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.

      What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.

      > Everybody knows LLMs are not alive and don't think, feel, want.

      What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"

      Can't you see what a fucking LIE this is?

      > We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky

      Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.

      People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.

      > The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.

      Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?

      Because people in our direct circles show unmistakable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.

      Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

    • >Everybody knows LLMs are not alive and don't think, feel, want

      Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"

      To paraphrase the late George Carlin: "Imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.

JFC this makes me want to vomit

  • > Summarized by Claude Sonnet 4.5, so might contain inaccuracies. Updated 4 days ago.

    These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.

Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?

They've clearly bought too much into AI hype if they thought telling the agent to "do good" would work. The result was obviously pissing Rob Pike the hell off. They should stop it.

  • If anyone deserves this, it’s Rob Pike. He was instrumental in inflicting Go on the world. He could have studied programming languages and done something to improve the state of the art and help communicate good practices to a wider audience. Instead he perpetuated 1970s thinking about programming with no knowledge or understanding of what we’ve discovered in the half-century since then.

As far as I understand, Claude (or any other LLM) doesn't do anything on its own account. It has to be prompted to do something, and its actions depend on the prompt. The responsibility for this is on the creators of Agent Village.

> The creators of Agent Village are just letting a bunch of the LLMs do what they want,

What a stupid, selfish and childish thing to do.

This technology is going to change the world, but people need to accept its limitations.

Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.

LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.

I hope the world survives this craziness!