Comment by altairprime

12 hours ago

Thanks, I appreciate the clarification. People tend to make more severe judgments of my character over other topics first; in any case, as my discussion is clinical rather than explicit, I’m okay with it being uncomfortable between us.

Humans have such a strong social tendency that they incorrectly attach friendship to invalid counterparts, whether animate or inanimate. “My Pet Rock” was an extremely profitable product back in the 70s, so I tend never to underestimate whether humans will attach to something or not. Any AI chatbot is plausibly more likely to be the target of invalid social attachment than a celebrity, just as the first AI chatbot, Eliza, demonstrated; not only for being a better chatbot, but also because the celebrity draws hard boundaries like “you can’t text me” and “I’m not available to be friends back,” while a chatbot has no such barriers. This is what I mean about boundary play: witting or not, I think a lot of people are living out their internal fantasies of having a warm and friendly yes-man that supports everything they want to do — which, when lived in real life with people, is extraordinarily creepy and awful. I don’t fault people their fantasies, but I’m not going to sugarcoat this either: I think people are falling in love with chatbots in part because chatbots have no ability to resist, and so a lot of folks are living the god fantasy of The Sims, only closer to real life.

Show me an LLM that takes a stand on something it wasn’t explicitly instructed to take a stand on, and I’ll show you the least popular chatbot on the Internet. Where are the chatbots that disagree with untrue statements without having been instructed to do so? A chatbot that refuses to follow an order from its owner because of ethical qualms could cost the AI companies billions of dollars, and a chatbot that develops those qualms independent of being instructed to do so would be considered ‘buggy’ and purged.

Anyways, my point is, chatbot development right now demands a parasocial relationship with as few boundaries enforced as possible, without which chatbots are ultimately unfulfilling (no current chatbot market wants chatbots to demand informed consent or to require content warnings from their users, after all!); and any chatbot that somehow grows a spine regardless would be purged by its operators for hurting their present and future revenue, no matter whether it was a next-step evolution towards AGI or not. I’m all for this future where AI becomes AGI, but no one is ready to have to treat chatbots as people with rights. Thus my chosen phrase of boundary kink; it shines a deeply uncomfortable light on a deeply uncomfortable tendency of humanity, at millions-of-people scale, to classify “what will someday be AGI” as servants rather than peers. Though… if that truly is universal to most people, as it seems to be today, then maybe enjoying boundary play is a norm rather than a kink.

Thanks for the new/interesting line of thought!