Comment by PaulHoule
2 years ago
The thing is that generalization is good enough to make people squee and not notice that the output is wrong, but not good enough to get the right answer.
If it were going to produce ‘explainable’ correct answers for most of what it does, that would be a matter of looking up the original sources to make sure they really say what it thinks they do. I mean, I can say, “there’s this paper that backs up my point,” but at the very least I have to go look it up to get the exact citation.
There is definitely a misconception about how to use a tool like ChatGPT.
If you give it an analytic prompt like "turn this baseball box score into an entertaining outline" it will reliably act as a translator because all of the facts about the game are contained in the prompt.
If you give it a synthetic prompt like "give me quotes from the broadcasters" it will reliably act as a synthesizer because none of the facts from the transcript are in the prompt.
This ability to perform as a synthesizer is what you are identifying here as "good enough to make people squee and not notice that the output is wrong but not good enough to get the right answer", which is correct, but sometimes fiction is useful!
If all web pages were embedded in ChatGPT's 1536-dimensional vector space and used for analytic augmentation, then a tool would more reliably be able to translate a given prompt. The UI could also display the URLs of the nearest-neighbor source material that was used to augment the prompt. That seems to be what Bing/Edge has in store.
That's a touch beyond state of the art but we might get there.
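Concretely, a minimal sketch of that kind of nearest-neighbor augmentation might look like the following. The embed() helper is a stub standing in for a real embedding model such as text-embedding-ada-002 (which is what produces those 1536-dimensional vectors); the URLs, page texts, and helper names are all hypothetical scaffolding, not anyone's actual implementation:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stub: a real system would call an embedding model here
        # (e.g. text-embedding-ada-002, which returns 1536-dim vectors).
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(1536)

    # Tiny in-memory "index": embed each page once, keep its URL alongside.
    pages = {
        "https://example.com/box-scores": "How to read a baseball box score ...",
        "https://example.com/scoring-rules": "Official scoring rules ...",
    }
    index = [(url, embed(text)) for url, text in pages.items()]

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def augment(prompt: str, k: int = 2) -> tuple[str, list[str]]:
        """Prepend the k nearest pages to the prompt; return it with the source URLs."""
        q = embed(prompt)
        ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
        sources = [url for url, _ in ranked[:k]]
        context = "\n\n".join(pages[url] for url in sources)
        return f"Context:\n{context}\n\nQuestion: {prompt}", sources

    augmented_prompt, urls = augment("Summarize last night's game")
    # The UI would send augmented_prompt to the LLM and display `urls`
    # as the nearest-neighbor sources that were used.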
If there is one big problem with today's LLMs, it is that the attention window is too short to hold a "complete" document. I can put the headline of an HN submission through BERT and expect BERT to capture it, but there is (as of yet) no way to cut a document up into 512-token (BERT) or 4096-token (ChatGPT) slices and then mash those embeddings together into an embedding that can do everything the model is trained to do on a smaller input. I'm sure we will see larger models, but it seems a scalable embedding that grows with the input text would be necessary to move to the next level.
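To make that limitation concrete, here is a naive sketch of the chunk-and-pool idea, with a stand-in tokenizer and a stubbed per-chunk embedding model (both hypothetical). Mean-pooling is the obvious way to "mash the embeddings together," and it illustrates exactly what gets lost:

    import numpy as np

    CHUNK_TOKENS = 512  # BERT-sized window; 4096 for a ChatGPT-sized one

    def tokenize(text: str) -> list[str]:
        # Stand-in tokenizer; a real one would be the model's own (e.g. WordPiece).
        return text.split()

    def embed_chunk(tokens: list[str]) -> np.ndarray:
        # Stub for a per-chunk embedding model.
        rng = np.random.default_rng(abs(hash(" ".join(tokens))) % (2**32))
        return rng.standard_normal(1536)

    def naive_document_embedding(text: str) -> np.ndarray:
        tokens = tokenize(text)
        chunks = [tokens[i:i + CHUNK_TOKENS]
                  for i in range(0, len(tokens), CHUNK_TOKENS)]
        # Mean-pooling collapses the chunk vectors into one, but it throws
        # away ordering and cross-chunk structure -- so the result no longer
        # supports everything the model could do on a document that fit in
        # a single window, which is the point above.
        return np.mean([embed_chunk(c) for c in chunks], axis=0)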
No, this is the current state of the art: https://supabase.com/blog/chatgpt-supabase-docs
The same thing could be done with search engine results, and from recent demos it looks like this is the kind of analytic augmentation that MS and OpenAI have added to Bing.