Comment by mock-possum

7 hours ago

This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is.

It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.

It seems 99.999% or more of us are just as lucky, but because the phenomenon is rare and scary, it made the news.

  • I mean, for this particular level of craziness.

    That said, a seemingly large portion of society is asking AI questions that come with some pretty large risks.

    I was on a plane a few weeks ago, and while I typically ignore whatever the people beside me are doing, morbid curiosity got the better of me: my seatmate spent the entire flight on ChatGPT, asking it all kinds of life and relationship questions. Questions like this can be fine if you understand what the AI is doing, but far too many people will follow its answers blindly.

I think I'm relatively neurotypical, and I understand the technology sufficiently, yet I still have to force myself not to think of a chatbot as a being.

For example, sometimes I hesitate for a fraction of a second before typing a prompt that may sound stupid. I have to immediately remind myself that it's just a chatbot and I don't care what it thinks of me. In fact, it's not even thinking of me at all.

  • That hesitation indicates the feeling that what you are about to type matters.

    Mayhaps, in the context of getting the AI to behave as you wish, such hesitations are valid: not because it is conscious, but because the context window could be polluted or corrupted, possibly misaligning the agent in the process.

    Santa Claus is not a being, but modeling him as if he were can be useful; an obviously pointed example is in certain discussions about what it means to be 'real'.

    My point is: if your instinct is to be kind, don't quash it just because you don't consider what you're talking to sentient. I don't yell at my rubber duck. Rubber ducky is just going to rubber ducky.

    • I buy that.

      1. To the extent that a chatbot is trained on real human interaction, we should exhibit real human interaction for best results.

      2. You are either a kind person or not. A kind person behaves kindly without asking whether kindness is warranted.

Maybe. AI has always felt like a game to me too, as do many things. Does classical logic represent some ideal form of reasoning, or is it a game? Treating it as a game helped me get past all the nagging questions and get good at it. RLHF-tuned AI also feels like a game: I do better at work when I don't anthropomorphize the AI and instead treat it as a context predictor.

It doesn't matter who you talk to. If a person were to talk you into starting a silly business, would you also fall for that?

I think these are just the kind of people who fall for scams. It's not AI-related; it's just not knowing how to navigate the modern world.

  • I might fall for a dumb business venture, but I wouldn't punch my father-in-law while doing so. Something else is at play.