Comment by thomastjeffery
15 hours ago
Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.
To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.
The problem with LLMs is that they avoid the ground entirely, which makes them entirely ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.
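To caricature that point with a toy sketch (plain Python, nothing like a real transformer; just an illustration of "familiarity as the only objective"):

```python
from collections import Counter, defaultdict

# A toy bigram "language model". Its only objective is to make the
# continuation look familiar, i.e. to match corpus frequency.
corpus = "the cat sat on the mat the cat ate the rat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_familiar_next(word):
    # Pick whichever continuation was seen most often. No notion of
    # meaning or ground enters the choice, only familiarity.
    return bigrams[word].most_common(1)[0][0]

print(most_familiar_next("the"))  # -> 'cat', the most frequent continuation
```

A real LLM is vastly more sophisticated, but its training objective is the same in kind: emit the most familiar continuation.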
So yes, this kind of AI will not accomplish any epistemology, unless of course it is truly able to facilitate a functional system of logic and to ground that system near the user. I'm not going to hold my breath.
I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. That constrains every grammar to the "context-free" category and situates every expression on a singular fixed ground. Nothing can be meaningfully ambiguous, so nothing can express (or interpret) uncertainty or metaphor.
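Here's a toy sketch of what I mean (assuming Python with NLTK; the grammar and sentence are the textbook "telescope" example, chosen purely for illustration). A classical parser can enumerate a structural ambiguity, but nothing in the formalism can ground a choice between the readings:

```python
import nltk

# The classic PP-attachment ambiguity: did I use the telescope to see,
# or does the man I saw have the telescope?
grammar = nltk.CFG.fromstring("""
S   -> NP VP
NP  -> 'I' | Det N | Det N PP
VP  -> V NP | V NP PP
PP  -> P NP
Det -> 'the' | 'a'
N   -> 'man' | 'telescope'
V   -> 'saw'
P   -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I saw the man with a telescope".split()):
    print(tree)  # prints two distinct parse trees
```

The parser dutifully emits both trees, but picking one requires exactly the subjective, situated context that the objective formalism excludes.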
What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected, but it feels so close.
LLMs are a mediocre map, but they're a great compass, telescope, navigation tool, and what have ye.
> What we really need is to recreate software from a subjective perspective.
What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get to mold on the fly?
> Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.
I'm reminded immediately of the Enochian language, which purportedly had the remarkable property of a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified [0].
The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused: people use the same words to signify entirely different referents, or cannot agree on which term should signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more; sometimes the word "man" is used to refer to a "woman", etc.).
LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.
Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism, in their mutual inability to "come to terms" and agree on which words should be used to refer to particular phenomenological experiences, or to come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate in your own mind someone's written recollection of a psychedelic experience on Erowid).
[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...
That's always been a fun idea. Even a thousand years ago, when most people couldn't read or write, we yearned for more. Even without a description of the problem and its domain, it's immediately obvious that perfect communication would be magic.
The problem is that it's impossible. Even if you could directly copy experience from one mind to the other, that experience would be ungrounded. Experience is just as subjective as any expression: that's why we need science.
> through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).
That's a pretty mean rejection of empathy you've got going on there. People are doing their best to describe their genuine experiences, yet the only interpretations you've bothered to subject their expressions to are completely irrelevant to them. Maybe this is a good opportunity to explore a different perspective.
> LLMs just accelerate this process of severing any connection whatsoever between signified and signifier.
That's my entire point. There was never any connection to begin with. The sign can only point to the signified. The signified does not actually interact with any semantics. True objectivity can only apply to the signified, never to the sign. Even mathematics leverages an arbitrary canonical grammar to model the reality of abstractions: the semantics are grounded in objectively true axioms, but the aesthetics are grounded in an arbitrary choice of symbols and grammar.
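As a small illustration (my notation choices, nothing canonical about them), here is one semantic fact rendered in two arbitrary surface grammars:

```latex
% One proposition (commutativity of addition), two arbitrary grammars.
% Infix notation:
\forall a \,\forall b \quad a + b = b + a
% Prefix (Polish) notation, the same proposition:
\forall a \,\forall b \quad {=}\,({+}\,a\,b)\,({+}\,b\,a)
```

The semantics are identical; only the aesthetics differ.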
The words aren't our problem. The problem is relevance. If we want to communicate effectively, we must find common ground, so that our intentions can be relevant to each other's interpretations. In other words, we must leverage empathy. My goal is to partially automate empathy with computation.