Comment by Terr_

14 days ago

I think that's putting the cart before the horse: all this hubbub comes from humans relating to a fictional character evoked from text in a hidden document, where some code looks for fresh "ChatGPT says..." text and then performs the quoted part at a human, who starts believing it.

The exact same techniques could provide a "chat" with Frankenstein's Monster from its internet-enabled hideout in the Arctic. We can easily conclude "he's not real" without ever going into comparative physiology, or the effects of lightning on cadaver brains.

We don't need to characterize the neurochemistry of a playwright (the LLM's real role) in order to say that the characters in its plays are fictional, and there's no reason to assume the algorithm is somehow writing self-inserts the moment we give it stories instead of other document types.