Comment by dvt
13 days ago
> I don't really understand why this type of pattern occurs, where the later words in a sentence don't properly connect to the earlier ones in AI-generated text.
Because AI is not intelligent, it doesn't "know" what it previously output even a token ago. People keep saying this, but it's quite literally fancy autocorrect. LLMs traverse optimized paths along multi-dimensional manifolds and trick our wrinkly grey matter into thinking we're being talked to. Super powerful and very fun to work with, but assuming a ghost in the shell would be illusory.
> Because AI is not intelligent, it doesn't "know" what it previously output even a token ago.
Of course it knows what it output a token ago, that's the whole point of attention and the whole basis of the quadratic curse.
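To make that concrete, here is a minimal single-head causal attention sketch; the shapes and names are illustrative, not any particular model's code. Every token generated so far is part of the input at each step, and the (T, T) score matrix is exactly where the quadratic cost comes from:

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    # x: (T, d) embeddings of all T tokens in the context so far.
    # Each position attends to itself and every earlier position,
    # so nothing output a token ago is out of reach.
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)            # (T, T): every token vs. every token
    mask = np.triu(np.ones((T, T)), k=1)     # forbid attending to future tokens
    scores = np.where(mask == 1, -1e9, scores)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # softmax over the visible past
    return w @ v                             # each output mixes earlier tokens

# e.g. rng = np.random.default_rng(0); x = rng.normal(size=(5, 8))
#      out = causal_self_attention(x, *(rng.normal(size=(8, 8)) for _ in range(3)))
```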
> Of course it knows what it output a token ago...
It doesn't know anything. It has a bunch of weights that were updated by the previous stuff in the token stream. At least our brains, whatever they do, certainly don't function like that.
I don't know anything (or even much) about how our brains function, but the idea of a neuron sending an electrical output when the sum of the strengths of its inputs exceeds some value seems to me like "a bunch of weights" getting repeatedly updated by stimulus.
To you it might be obvious our brains are different from a network of weights being reconfigured as new information comes in; to me it's not so clear how they differ. And I do not feel I know the meaning of the word "know" clearly enough to establish whether something that can emit fluent text about a topic is somehow excluded from "knowing" about it through its means of construction.
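For what it's worth, that textbook cartoon of a neuron is easy to write down. A minimal sketch of a classic threshold unit, with all numbers made up for illustration:

```python
import numpy as np

def threshold_neuron(inputs, weights, threshold):
    # Fire (output 1) when the weighted sum of inputs crosses the
    # threshold: the textbook cartoon of a biological neuron.
    return 1 if np.dot(inputs, weights) >= threshold else 0

# Three input signals with illustrative connection strengths:
print(threshold_neuron([1.0, 0.5, 0.0], [0.4, 0.9, -0.2], threshold=0.8))  # -> 1
```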
I don't think this is a meaningful distinction.
It knows the past tokens because they're part of the input for predicting the next token; it's part of the model architecture that it knows them.
If that isn't knowing, then people don't know how to walk, only how to move limbs, and not even that, just a bunch of neurons firing.
Wait till you learn how human memory works.
Every time you recall a memory it is modified; every time you verbalise a memory it is modified even more.
Eye-witness accounts are notoriously unreliable, people who witness the same events can have shockingly differing versions.
Memories are modified when new information, real or fabricated, is added.
It’s entirely possible to convince people to recall events that never occurred.
Which of your memories are you certain are of real occurrences, and which are memories of dreams?
This must be what it was like when geocentrism was disproved.
If all the training data contains semantically meaningful sentences, it should be possible to build a network optimized for generating primarily (or only) semantically meaningful sentences.
But we don't appear to have entirely done that yet. It's just curious to me that the linguistic structure is there while the "intelligence", as you call it, is not.
> If all the training data contains semantically meaningful sentences, it should be possible to build a network optimized for generating primarily (or only) semantically meaningful sentences.
Not necessarily. You can check this yourself by building a very simple Markov chain: use the transition counts you get from feeding it Moby Dick or whatever, and the gap becomes much more obvious. Generated sentences will be "grammatically" correct, but often semantically very wrong. Clearly LLMs are far more sophisticated than a home-made Markov chain, but I think it's helpful to see the probabilities kind of "leak through."
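A minimal word-level sketch of that experiment (the corpus filename and parameters here are just placeholders):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each word n-gram to the list of words observed to follow it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    # Random walk: repeatedly sample a next word from whatever followed
    # the current n-gram in the training text.
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

# e.g. chain = build_chain(open("moby_dick.txt").read())
#      print(generate(chain))
```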
But there is a very good chance that is what intelligence is.
Nobody knows what they are saying either; the brain is just (some form of) neural net that produces output which we claim as our own. In fact most people go their entire lives without noticing this. The words I am typing right now are just as mysterious to me as the words that pop up on screen when an LLM is generating output.
I feel confident enough to disregard dualists (people who believe in brain magic), which leaves only a neural-net-like architecture as the explanation for intelligence, and the only two tools such a net can have are deterministic and random processes: the same ingredients that all software/hardware has to work with.
Sentences only have semantic meaning because you have experiences that they map to. The LLM isn't training on the experiences, just the characters. At least, that seems about right to me.
What does an experience map to?
Why would that be curious? The network is trained on the linguistic structure, not the "intelligence."
It's difficult to produce a body of text that conveys a particular meaning, even for simple concepts, especially if you're aiming for brevity. The editing process is not in the training set, so we're hoping to replicate it simply by looking at the final output.
How effectively do you suppose model training differentiates between low-quality verbiage and high-quality prose? That strikes me as a fascinatingly hard problem in its own right; if we could train a machine to do it, it would deliver plenty of value simply as a classifier.
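Purely to make that framing concrete, here is a hypothetical sketch; the passages, labels, and bag-of-words baseline are all made up, and a real attempt would need far richer features and data:

```python
# Hypothetical setup: passages hand-labeled 0 (low-quality verbiage)
# or 1 (high-quality prose). A TF-IDF + logistic regression baseline
# is almost certainly too crude; it only shows the classifier shape.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

passages = [
    "stuff happened and it was like really bad and whatever",       # illustrative
    "The committee's findings were unambiguous: the levee failed.",
]
labels = [0, 1]

quality_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
quality_clf.fit(passages, labels)
print(quality_clf.predict(["Some new passage to score."]))
```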
I'm not up to speed on exactly what all the training data is.
If it contains the entire corpus of recorded human knowledge…
And most of everything is shit…
https://en.wikipedia.org/wiki/Sturgeon%27s_law
> Because AI is not intelligent, it doesn't "know" what it previously output even a token ago.
You have no idea what you're talking about. I mean, literally no idea, if you truly believe that.
That's only true if you consider the process the LLM is undergoing to be a faithful replica of the processes in the brain, right?
No.