Comment by thomastjeffery

18 hours ago

Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.

The problem with LLMs is that they avoid the ground entirely, leaving them ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.

So yes, this kind of AI will not accomplish any epistemology, unless of course it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.

I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression on a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.

What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.

LLMs are a mediocre map, but they're a great compass, telescope, navigation tool, and what have ye

  • Yes. The main problem is that they can only lead you to familiar territory. The next problem is that it's hard to notice when you are back where you started.

    This has been my main struggle using LLMs to soundboard my new idea. They can write an eloquent interpretation of the entire concept, but as soon as we get to implementation, they stumble right into creating the very system I intend to replace.

    So I would say it's the other way around: LLMs are an excellent map, but a terrible compass. Good enough if you want to explore familiar territory, but practically unusable on an adventure.

> What we really need is to recreate software from a subjective perspective.

What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get mold on the fly?

  • That's a feature that could be implemented by a subjective framework.

    Traditionally, we use definition as the core primitive for programming. The programming language grammar defines the meaning of every possible expression, precisely and exhaustively. This is useful, because intention and interpretation are perfectly matched, making the system predictable. This is the perspective of objectivity.

    The problem with objectivity is that it is categorically limited. A programming language compiler can only interpret using the predetermined rules of its grammar. The only abstract concepts that can be expressed are the ones that are implemented as programming language features. Ambiguity is unspeakable.

    The other problem is that it is tautologically stagnant. The interpretation that you are going to use has already been completely defined. The programming language grammar is its own fundamental axiom: a tautology that dictates how every interpretation will be grounded. You can't choose a different axiom. Every programming language is its own silo of expression, forever incompatible with the rest. Sure, we have workarounds, like FFIs or APIs, but none of them can solve the root issue.

    A subjective perspective would allow us to write and interpret ambiguous expression, which could be leveraged to (weakly) solve natural language processing. It would also allow us to change where our interpretations are grounded. That would (weakly) solve incompatibility. Instead of refactoring the expression, you would compose a new interpreter.

    Because code is data, we can objectify our interpreters. We can apply logical deduction to choose the most relevant one, the way a type system chooses the right polymorphic function. We can also compose interpreters like combinators, and decompose them by expressing their intentions. This way, we could have an elegant recursive self-referential system that generates relevant interpreters.

    Any adequately described algorithm or data structure could be implemented to be perfectly compatible with any adequately interpreted system, all wrapped in whatever aesthetics the user chooses. On the fly. That's the dream, anyway.
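The "interpreters as data" idea above can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not an existing framework: an interpreter is a first-class value pairing a relevance predicate with an interpretation function, a dispatcher picks the most relevant one (loosely the way a type system selects a polymorphic function), and interpreters compose like combinators.

```python
# Hypothetical sketch: interpreters as first-class, composable values.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Interpreter:
    relevant: Callable[[Any], bool]   # which expressions this grounding covers
    interpret: Callable[[Any], Any]   # how it interprets them

def choose(interpreters, expr):
    """Pick the first relevant interpreter, like polymorphic dispatch."""
    for i in interpreters:
        if i.relevant(expr):
            return i.interpret(expr)
    raise ValueError("no relevant interpretation (no common ground)")

def compose(outer, inner):
    """Combinator-style composition: reinterpret inner's result."""
    return Interpreter(
        relevant=inner.relevant,
        interpret=lambda e: outer.interpret(inner.interpret(e)),
    )

# Two "groundings" applied to the same expression:
as_number = Interpreter(
    relevant=lambda e: isinstance(e, str) and e.isdigit(),
    interpret=int,
)
doubled = Interpreter(relevant=lambda e: True, interpret=lambda n: n * 2)

print(choose([compose(doubled, as_number)], "21"))  # -> 42
```

The point of the sketch is only that "refactor the expression" and "compose a new interpreter" become interchangeable moves: the string `"21"` never changes; only the interpreter chosen for it does.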

> Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:

    Every Letter signifieth the member of the substance whereof it speaketh.
    Every word signifieth the quiddity of the substance.

    - John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].

The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused: people use the same words to signify entirely different referents, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc.).

LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.

Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).

[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...

  • That's always been a fun idea. Even a thousand years ago, when most people couldn't read or write, we yearned for more. Even without a description of the problem and its domain, it's immediately obvious that perfect communication would be magic.

    The problem is that it's impossible. Even if you could directly copy experience from one mind to the other, that experience would be ungrounded. Experience is just as subjective as any expression: that's why we need science.

    > through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).

    That's a pretty mean rejection of empathy you've got going on there. People are doing their best to describe their genuine experiences, yet the only interpretations you have bothered to subject their expression to are completely irrelevant to them. Maybe this is a good opportunity to explore a different perspective.

    > LLMs just accelerate this process of severing any connection whatsoever between signified and signifier.

    That's my entire point. There was never any connection to begin with. The sign can only point to the signified. The signified does not actually interact with any semantics. True objectivity can only apply to the signified: never the sign. Even mathematics leverages an arbitrary canonical grammar to model the reality of abstractions. The semantics are grounded in objectively true axioms, but the aesthetics are grounded in an arbitrary choice of symbols and grammar.

    The words aren't our problem. The problem is relevance. If we want to communicate effectively, we must find common ground, so that our intentions can be relevant to each other's interpretations. In other words, we must leverage empathy. My goal is to partially automate empathy with computation.