
Comment by raincole

12 hours ago

I know anthropomorphizing LLMs has been normalized, but holy shit. I hope the language in this article is intentionally chosen for a dramatic effect.

The thing is... what else can you do? All the advice on how to get results out of LLMs talks in the same way, as if it's a negotiation or giving a set of instructions to a person.

You can do a mental or physical search and replace all references to the LLM as "it" if you like, but that doesn't change the interaction.

Agreed. We should not be anthropomorphising LLMs or having them mimic humans.

  • It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.

    • Maybe we should change that? Of course symbolic AI was the holy grail until statistical AI came in and swept the floor. Maybe something else though.

      1 reply →

    • They ingest text written in first and third person and regurgitate in first person only, right?

Fascinating. This is invisible to me; what anthropomorphising did you notice that stood out?

  • From the first sentence

    > I asked an AI agent to solve a programming problem

    You're not asking it to solve anything. You provide a prompt and it does autocomplete. The only reason it doesn't run forever is that one of the generated tokens is interpreted as 'done'.

    • When someone asks you a question, in what ways are you not an "autocomplete"?

      You aren't aware of how you come up with the words you are saying; you just start talking and the next word somehow falls out of your mouth. Maybe you think before you start talking, but where do the thoughts come from? They just appear in your head. We are just as much a predictive machine as LLMs; the human brain is just fuzzier.

    • What a poor explanation.

      With the same reasoning, human beings are only a bunch of atoms, and the only reason they don't collide with other humans is because of the atomic force.

      When your abstraction level is too low, it doesn't explain anything, because the system that is built on it is way too complex.

      4 replies →

    • I just don't think that's correct. When I ask Claude to solve something for me, it takes a number of actions on my computer which are neither writing text nor interpreting the done token. It executes the build, debugs tests, et cetera. Sometimes it spawns mini-mes when it thinks that would be helpful! I think saying this is all "autocomplete" is a category error, like saying that you shouldn't talk about clicking buttons or running programs because it's all just electrically charged silicon under the hood.

      2 replies →
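The mechanism the commenters are debating can be made concrete. Below is a minimal sketch, using a hypothetical `predict_next` model interface (no real LLM API is assumed), of the loop described above: tokens are generated one at a time until a designated stop token appears, which is the only reason the loop "doesn't run forever." The agent behavior described in the last comment is, mechanically, the same loop with one extension: certain generated tokens are interpreted as tool calls whose results are fed back into the context.

```python
# Minimal sketch of an autoregressive generation loop. `predict_next` is a
# hypothetical interface for illustration; real LLM APIs differ.

EOS = "<eos>"  # the token "interpreted as 'done'"


def generate(model, prompt, max_tokens=100):
    """Append model-predicted tokens to the prompt until EOS (or a cap)."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = model.predict_next(tokens)
        if nxt == EOS:  # generation halts here, not "forever"
            break
        tokens.append(nxt)
    return tokens


class ScriptedModel:
    """Toy stand-in for an LLM: emits a fixed sequence, then EOS."""

    def __init__(self, script):
        self._it = iter(script)

    def predict_next(self, tokens):
        return next(self._it, EOS)


print(generate(ScriptedModel(["a", "b"]), ["prompt:"]))  # ['prompt:', 'a', 'b']
```

Whether that loop "is just autocomplete" is exactly the point of disagreement: the loop itself is token prediction, while an agent harness wraps it so that some outputs trigger builds, tests, or sub-agents before prediction resumes.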