Comment by throw310822
12 hours ago
> just find the most probable word that follows next
Well, if in all situations you can predict which word Einstein would probably say next, then I think you're in a good spot.
This "most probable" stuff is just absurd handwaving. Every prompt of even a few words is unique; there simply is no trivial "most probable" continuation. Probable given what? What these machines learn to do is predict what intelligence would do, which is the same as being intelligent.
>Probable given what?
The training data.
>predicting what intelligence would do
No, it just predicts what the next word would be if an intelligent entity translated its thoughts into words, because it is trained on text written by intelligent entities.
If it was trained on text written by someone who loves to rhyme, you would be getting all rhyming responses.
It imitates the behavior -- in text -- of whatever entity generated the training data. Here the training data was made by intelligent humans, so we get an imitation of the same.
It is a clever party trick that works often enough.
> The training data
If the prompt is unique, it is not in the training data. True for basically every prompt. So how is this probability calculated?
The prompt is unique but the tokens aren't.
Type "owejdpowejdojweodmwepiodnoiwendoinw welidn owindoiwendo nwoeidnweoind oiwnedoin" into ChatGPT and the response is "The text you sent appears to be random or corrupted and doesn’t form a clear question." because the prompt doesn't correlate to the training data.
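On the "probable given what?" question: at the token level, "most probable continuation" is a well-defined quantity even for a prompt nobody has ever typed before. Here is a toy sketch (a bigram counter, nothing remotely like a real transformer, with a made-up corpus) of what "probable given the training data" means mechanically:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus", pre-tokenized by whitespace.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each token follows each token in training.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(token):
    """Return the most frequent continuation seen in training, with its probability."""
    counts = follows[token]
    total = sum(counts.values())
    word, n = counts.most_common(1)[0]
    return word, n / total

print(most_probable_next("the"))  # ('cat', 0.5): "cat" follows "the" in 2 of 4 cases
```

A real model conditions on the whole context rather than one token, and generalizes across contexts instead of looking up exact counts, but the objective is the same shape: a probability distribution over next tokens, estimated from the training data.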
Just using a scaled up and cleverly tweaked version of linear regression analysis...
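For what it's worth, the "regression" framing isn't entirely a joke: the output layer of a language model is essentially multinomial logistic regression over the vocabulary -- logits put through a softmax. A toy sketch with made-up numbers (the three-word vocabulary and logit values are invented for illustration):

```python
import math

vocab = ["cat", "dog", "mat"]
logits = [2.0, 0.5, 1.0]  # hypothetical scores the model assigns to each next token

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax(logits)
print(max(zip(probs, vocab)))  # the highest-probability next token ("cat" here)
```

Everything before that final layer -- the part that turns the context into those logits -- is of course where all the interesting nonlinearity lives, which is why "scaled up linear regression" undersells it.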
Hamiltonian paths and previous work by Donald Knuth are more than likely in the training data.
It is impossible to accurately imitate the action of intelligent beings without being intelligent. To believe otherwise is to believe that intelligence is a vacuous property.
So the actors who portray great thinkers are great thinkers?
An unintelligent device can accurately imitate the action of intelligent beings within a given scope, in the same way an actor can accurately imitate the action of a fictional character in a given scope (the stage or camera) without actually being that character.
If the idea is that something cannot accurately replicate the entirety of intelligence without being intelligent itself, then perhaps. But that isn't really what people are talking about with LLMs, given their obvious limitations.
>It is impossible to accurately imitate the action of intelligent beings without being intelligent.
Wait, what? So a robot that accurately copies the actions of an intelligent human is intelligent?