Comment by pjc50
5 days ago
Everyone who is familiar with Baudrillard goes "simulacrum!" whenever they encounter LLM output. LLM output is, after all, a pure chain of symbols extremely far removed from any connection with ground-truth reality.
I'm not sure it's that direct of a connection.
There's something to be said for the structuralist reading: using large amounts of text as a rule set to return a semblance of truth seems like a structuralist's wet dream.
It's like drawing the map for the king: the real is represented by reducing a huge number of data points to a mixture of randomness and hard rules that pretend to be real.
At the very least it's a form of hyperreality as far as I understand it.
Indeed, this is what I was aiming at. However, the concern for (a semblance of) truth seems rooted in a view that locates meaning in what signs refer to. That view feels incomplete when set against a dyadic model, where the relationship between signifier and signified takes precedence over reference. The notion of the simulacrum only emerges in a technical culture that has elevated 'reality' to a special status. After all, what is 'reality' in technical systems if not itself a simulacrum? Hilbert's program, symbolic AI, rule systems, ontologies, the semantic web: they all struggled to capture reality as a whole, precisely because they tried to grasp it through formal objects claiming universal scope via the machinery of said formalisms.
What does that have to do with LLMs?