Comment by strogonoff

4 hours ago

“Suspicious” is your word.

As the saying goes: if it fires together, it wires together. Is it outlandish to wonder whether, once you form a habit of using certain tricks (lies, threats of physical violence[0], etc.) whenever your human-like counterpart fails to produce the required output, you might use those same tricks on another human-like counterpart that just happens to also be an actual human? Whether abusing p-zombified perfect human copies might change how one sees and treats actual humans, who are increasingly no different (if not worse) in their text output, except that they can also feel?

I’m not a psychologist, so I can’t say. Maybe some people have no trouble treating this tool as a tool, their “system 2” tirelessly keeping them mindful at all times of whether their fully human-like counterpart is actually human. Maybe they already see the people around them as bots, and only suppress treating them that way out of fear of retribution. Who knows, maybe it’s not a pathology and we are all like that deep down. Maybe it even provides a vent for aggression, and people who abuse chatbots end up nicer to other humans as a result.

What we do know, though, is that the tool mimics human behaviour well enough that possibly even more people (many presumably without diagnosed pathologies) treat it as very much human, some to the point of forming [a]romantic relationships with it.

[0] https://youtu.be/8g7a0IWKDRE?t=480

It’s an interesting line of thought, but people are generally able to contextualize interactions. The classic example is that regularly committing violence in video games does not translate to violence in other contexts.