
Comment by andrewstuart

14 days ago

Personally I’d prefer that LLMs did not refer to themselves as “I”.

It’s software, not an “I”.

As per Dennett, it's useful for us to adopt the "intentional stance" when trying to reason about and predict the behavior of any sufficiently complex system. Modern AIs are definitely beyond that threshold of complexity, and at this stage most people will think of them as having an "I" regardless of how they present themselves.

I definitely think of them as "I"s, but that just always came naturally to me, at least going back to thinking about how Gandhi would act against me in Civ 1.

Well, it is a speaker (writer) after all. It has to use some way to refer to itself.

  • I don't think that's true. It's more a function of how these models are trained (remember the older pre-ChatGPT clients?)

    Most of the software I use doesn't need to refer to itself in the first person. Pretending that we're speaking with an agent is more of a UX/marketing decision than a technical/logical constraint.

    • I'm not sure about that. What happens if you "turn down the weight" (cf. https://www.anthropic.com/news/golden-gate-claude) for self-concept, meaning not just first-person pronouns but "the first person" as a thing that exists? Do "I" and "me" get replaced with "this one", like someone doing depersonalization kink, or does it become like Wittgenstein's lion, in that we can no longer confidently parse even its valid utterances? Does it lose coherence entirely, or does something stranger happen?

      It isn't an experiment I have the resources or the knowledge to run, but I hope someone does and reports the results.
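      For anyone curious, here is a minimal sketch of what a crude open-model version of that experiment could look like: plain activation steering on GPT-2 with a contrastive "first person" direction, rather than Anthropic's SAE-feature clamping (which requires a trained sparse autoencoder). The prompt pair, layer index, and steering scale below are all illustrative guesses, not values from the Golden Gate Claude work.

      ```python
      # Rough sketch only: crude activation steering on GPT-2, standing in
      # for SAE-feature clamping. Prompt pair, LAYER, and alpha are guesses.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model = AutoModelForCausalLM.from_pretrained("gpt2")
      tok = AutoTokenizer.from_pretrained("gpt2")
      model.eval()

      LAYER = 6  # a middle layer, chosen arbitrarily

      def block_output(output):
          # GPT2Block returns a tuple whose first element is the hidden states
          return output[0] if isinstance(output, tuple) else output

      def mean_activation(text):
          """Mean residual-stream activation at LAYER for a prompt."""
          acts = {}
          def hook(module, inputs, output):
              acts["h"] = block_output(output).mean(dim=1)  # average over tokens
          handle = model.transformer.h[LAYER].register_forward_hook(hook)
          with torch.no_grad():
              model(**tok(text, return_tensors="pt"))
          handle.remove()
          return acts["h"]

      # Contrastive prompts: heavy first-person framing vs. the same content
      # with the "I" scrubbed out. Their difference is the steering direction.
      with_self = "I think, I feel, I believe, I want; I am the one speaking."
      without_self = "There is thought, feeling, belief, wanting; speech occurs."
      direction = mean_activation(with_self) - mean_activation(without_self)
      direction = direction / direction.norm()

      def steered_generate(prompt, alpha=-8.0):
          """Generate while adding alpha * direction to the residual stream.
          Negative alpha pushes away from the first-person direction."""
          def hook(module, inputs, output):
              steered = block_output(output) + alpha * direction
              if isinstance(output, tuple):
                  return (steered,) + output[1:]
              return steered
          handle = model.transformer.h[LAYER].register_forward_hook(hook)
          out = model.generate(**tok(prompt, return_tensors="pt"),
                               max_new_tokens=40, do_sample=False,
                               pad_token_id=tok.eos_token_id)
          handle.remove()
          return tok.decode(out[0])

      print(steered_generate("Tell me about yourself."))
      ```

      A raw difference vector is a much blunter instrument than an interpretable SAE feature, so results from this version would only be suggestive, but it would at least show whether "I" degrades into "this one", word salad, or something stranger.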

If I start a prompt with "Can you...", how do you suggest the LLM respond? Or do you think I'm doing it wrong?

  • Have you tried dropping the "can you"? I haven't had a problem using minimal verbiage - for instance I prompted it with "load balancer vs reverse proxy" yesterday and it came back with the info I wanted.

My pet peeve is when an LLM starts off a statement with "honestly, ..." Like what? You would lie to me? I go nuts when I see that. Years ago I caught myself using "honestly ...", and I immediately trained myself out of it once I realized what it implies.

  • "I'd normally lie to you but," is not what's actually implied when "Honestly," is used conversationally. If you overthink things like this you're going to have a tough time communicating with people.

    • I'm not saying you need to stop using it, but I prefer not to indicate that in some situations I would lie, but in this one specifically I won't. I communicate with customers constantly in my job, and my integrity and reputation are most important to me. If I'm going to lie, I'd rather not call attention to it.

      When an LLM says "honestly", it's just stupid. An LLM can't "lie".

  • There are shades of grey w.r.t. truth, and in many contexts there is a negative correlation between honesty and other factors (e.g. I think of “bluntness” as prioritizing truth over politeness). When I hear or read a sentence beginning with “honestly”, I interpret it to mean the speaker is warning or indicating that they are intentionally opting to be closer to truth at the expense of other factors. Other factors might be contextual appropriateness such as professional decorum, or even the listener’s perception of the speaker’s competence (“Honestly, I don’t know.”)

  • I've noticed "honestly" is often used in place of "frankly". As in someone wants to express something frankly without prior restraint to appease the sensibilities of the recipient(s). I think it's because a lot of people never really learned the definition of frankness or think "frankly..." sounds a bit old fashioned. But I'm no language expert.

  • "Honestly" and "literally" are now used in English for emphasis. I dislike this, but it's the current reality. I don't think there's any way to get back to only using them with their original meanings.

    • I don't think anyone needs to change their language. I understand that it's a common way to indicate candor, but it's hilariously inappropriate for a computer to say "sometimes I might lie to you to save your feelings, but this time, you really are ugly and you need to know."


  • Or when it asks you questions.

    The only time an LLM should ask questions is to clarify information. A word processor doesn’t want to chit chat about what I’m writing about, nor should an LLM.

    Unless it is specifically playing an interactive role of some sort like a virtual friend.

    • Like so many things, it depends on the context. You don't want it to ask questions if you're asking a simple math problem or giving it a punishing task like counting the R's in strawberry.

      On the other hand, asking useful questions can help prevent hallucinations or clarify tasks. If you're going to spawn off an hour-long task, asking a few questions first can make a huge difference.
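      To make that concrete, here is an illustrative sketch of nudging a model to ask clarifying questions only before big tasks, using the OpenAI Python client; the system-prompt wording and model name are placeholders I made up, not something from this thread.

      ```python
      # Illustrative sketch: a system prompt that gates clarifying questions
      # on task size. The policy text and model name are placeholders.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      SYSTEM = (
          "If a request implies a large or long-running task and is ambiguous, "
          "ask up to three clarifying questions before starting. For quick "
          "factual lookups, answer directly without asking anything."
      )

      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {"role": "system", "content": SYSTEM},
              {"role": "user", "content": "Migrate our service from REST to gRPC."},
          ],
      )
      print(resp.choices[0].message.content)
      ```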

    • My initial reaction to this is typically negative too, but more than once, on second thought, I found its question to be really good, leading me to actually think about the matter more deeply. So I'm growing to accept this.

    • ChatGPT is very casual with asking questions, and FRANKLY, I enjoy getting into a little bit of a daydream with it from time to time. It's taken the place of falling into a Wikipedia hole. Not sure if that's something that's good or bad.