Comment by latexr

6 hours ago

It was not my intention to be patronising nor snarky, nor was I the least bit concerned for you (faux or otherwise). Though on a reread I do understand how my reply can be understood as unkind. I regret that and apologise for it. It was not my intention but it was my mistake. I should’ve made it shorter:

> It’s not a person, it’s a tool. There’s no reason to anthropomorphise it.

Fair enough; I appreciate the follow-up.

Without wanting to be argumentative, I would push back and say that I really did stop to consider my implied assignment of personhood before committing to it. I went with it because it reflects both the role it plays (you'll be relieved that I stopped short of deploying "mentor") and the fact that English is highly adaptable: already, the linguistic tug to use "they" feels very comfortable in relation to LLMs. Buckle up!

  • > you'll be relieved that I stopped short of deploying "mentor"

    Funnily enough, I think that might’ve been better. I don’t think a mentor necessarily has to be human; one can learn from nature or pets. Or even a machine: Stockfish can teach you to play better chess and give context as to why you fumbled and how to do better next time.

    I just don’t think LLMs are people, and I think we should avoid anthropomorphising them (for a whole plethora of reasons, which are another discussion). I’m not even saying I think there could never be a robot which is a person. Just that it’s not what we have now.