← Back to context

Comment by EigenLord

3 months ago

Years ago, in my writings I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually truly know anything, there is no subject of knowledge that experiences knowing on their end. ChatGPT can speak however many languages, write however many programming languages, give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability I can see how it would throw someone through a loop.

At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."

A loved one recently had this experience with ChatGPT: paste in a real-world text conversation between you and a friend without real names or context. Tell it to analyze the conversation, but say that your friend's parts are actually your own. Then ask it to re-analyze with your own parts attributes to you correctly. It'll give you vastly different feedback on the same conversation. It is not objective.

"Oracularizing AI" has a lot of mileage.

It's not too much to say that AI, LLMs in particular, satisfy the requisites to be considered a form of divination. ie:

1. Indirection of meaning - certainly less than the Tarot, I Ching, or runes, but all text is interpretive. Words in a Saussurian way are always signifiers to the signified, or in Barthes's death of the author[2] - precise authorial intention is always inaccessible.

2. A sign system or semiotic field - obvious in this case: human language.

3. Assumed access to hidden knowledge - in the sense that LLM datasets are popularly known to contain all the worlds knowledge, this necessarily includes hidden knowledge.

4. Ritualized framing - Approaching an LLM interface is the digital equivalent to participating in other divinatory practices. It begins with setting the intention - to seek an answer. The querent accesses the interface, formulates a precise question by typing, and commits to the act by submitting the query.

They also satisfy several of the typical but not necessary aspects of divinatory practices:

5. Randomization - The stochastic nature of token sampling naturally results in random sampling.

6. Cosmological backing - There is an assumption that responses correspond to the training set and indirectly to the world itself. Meaning embedded in the output correspond in some way - perhaps not obviously - to meaning in the world.

7. Trained interpreter - In this case, as in many divinatory systems, the interpreter and querent are the same.

8. Feedback loop - ChatGPT for example is obviously a feedback loop. Responses naturally invite another query and another - a conversation.

It's often said that sharing AI output is much like sharing dreams - only meaningful to the dreamer. In this framework, sharing AI responses are more like sharing Tarot card readings. Again, only meaningful to the querent. They feel incredibly personalized like horoscopes, but it's unclear whether that meaning is inherent to the output or simply the querents desire to imbue the output with by projecting their meaning onto it.

Like I said, I feel like there's a lot of mileage in this perspective. It explains a lot about why people feel a certain way about AI and hearing about AI. It's also a bit unnerving; we created another divinatory practice and a HUGE chunk of people participate and engage with it without calling it such and simply believing it, mostly because it doesn't look like Tarot or runes, or I Ching even though ontologically it fills the same role.

Notes: 1. https://en.wikipedia.org/wiki/Signified_and_signifier

2. https://en.wikipedia.org/wiki/The_Death_of_the_Author