Comment by PaulHoule
2 years ago
"Blurry JPEG" for how ChatGPT "compresses" character-based knowledge into vectors. That "compression" process gives ChatGPT an ability to generalize because it learns statistics (unlike JPEG) but like JPEG it is a lossy process.
It's a terrible analogy because the entire point of ML systems is to generalize well to new data, not to reproduce the original data as accurately as possible with a space/time tradeoff.
I don't think you can describe the math in this context as "generalize well to new data."
ChatGPT certainly can't generate new data. It's not gonna correctly tell you today who won the World Series in 2030. It's not going to write a poem in the style of someone who hasn't been born yet.
But it can interpolate between and through a bunch of existing data that's on the web to produce novel mixes of it. I find the "blurring those things together" analogy pretty compelling there, in the same way that blurring or JPEG-compressing something isn't going to give you a picture of a new event but it might change what you appear to see in the data you already had.
(Obviously it's not exactly the same, that's why it's an analogy and not a definition. As an analogy, it works much better if you ignore much of what you know about the implementation details of both of them. It's not trying to teach someone how to build it, but to teach a lay person how to think about the output.)
It absolutely can generate new data; it does so all the time. If you are claiming otherwise, I think we need a more formal definition of what you mean by "new data."
Are you suggesting that because it can't predict the future, it can't generate novel data?
The thing is that the generalization is good enough to make people squee and not notice that the output is wrong, but not good enough to get the right answer.
If it were going to produce ‘explainable’ correct answers for most of what it does, that would be a matter of looking up the original sources to make sure they really say what it thinks they do. I mean, I can say, “there’s this paper that backs up my point,” but I have to go look it up to get the exact citation at the very least.
There is definitely a misconception about how to use a tool like ChatGPT.
If you give it an analytic prompt like "turn this baseball box score into an entertaining outline" it will reliably act as a translator because all of the facts about the game are contained in the prompt.
If you give it a synthetic prompt like "give me quotes from the broadcasters" it will reliably act as a synthesizer, because none of the facts of the broadcast transcript are in the prompt.
This ability to perform as a synthesizer is what you are identifying here as "good enough to make people squee and not notice that the output is wrong but not good enough to get the right answer", which is correct, but sometimes fiction is useful!
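To make that distinction concrete, here is a minimal sketch (assuming the OpenAI Python client; the box_score string and the ask() helper are hypothetical, purely for illustration):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Hypothetical box score: every fact the model needs is in the prompt.
    box_score = "Final: NYY 4, BOS 2. HR: Judge (2). WP: Cole. LP: Sale."

    # Analytic prompt: the model acts as a translator over supplied facts.
    print(ask(f"Turn this baseball box score into an entertaining outline:\n{box_score}"))

    # Synthetic prompt: the requested facts are NOT in the prompt, so the
    # model can only synthesize plausible-sounding (fictional) quotes.
    print(ask("Give me quotes from the broadcasters of the Yankees-Red Sox game."))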
If all web pages were embedded in ChatGPT's 1536-dimensional vector space and used for analytic augmentation, then the tool would more reliably be able to translate a given prompt. The UI could also display the URLs of the nearest-neighbor source material that was used to augment the prompt. That seems to be what Bing/Edge has in store.
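A minimal sketch of that kind of nearest-neighbor augmentation, assuming OpenAI's text-embedding-ada-002 model (the source of the 1536-dimensional vectors mentioned above) and a tiny hypothetical corpus; brute-force cosine similarity stands in for the vector index a real system would use:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical corpus: (url, text) pairs standing in for "all web pages".
    pages = [
        ("https://example.com/boxscore", "Final: NYY 4, BOS 2. Judge hit two home runs."),
        ("https://example.com/recap", "Cole pitched seven strong innings for the win."),
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
        return [np.array(d.embedding) for d in resp.data]  # each is 1536-dimensional

    page_vectors = embed([text for _, text in pages])

    def nearest(query, k=1):
        q = embed([query])[0]
        # Cosine similarity against every page; a real system would use a vector index.
        sims = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)) for v in page_vectors]
        ranked = sorted(zip(sims, pages), key=lambda t: t[0], reverse=True)
        return [page for _, page in ranked[:k]]

    query = "Who won the Yankees game?"
    sources = nearest(query)

    # Augment the prompt with the retrieved facts and surface the source URLs,
    # turning a synthetic prompt into an analytic one.
    context = "\n".join(text for _, text in sources)
    prompt = f"Using only these sources:\n{context}\n\nAnswer: {query}"
    print(prompt)
    print("Sources:", [url for url, _ in sources])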