Comment by zozbot234
2 days ago
> ... Also, I would not put it past OpenAI to drag up a similar proof using ChatGPT, refine it and pretend that ChatGPT found it. ...
That's the best part! They don't even need to, because ChatGPT will happily do its own private "literature search" and then not tell you about it - even Terence Tao has freely admitted as much in his previous comments on the topic. So we can at least afford to be a bit less curmudgeonly and cynical about that specific dynamic: we've literally seen it happen.
> ChatGPT will happily do its own private "literature search" and then not tell you about it
Also known as model inference. This is not something "private" or secret [*]. AI models are lossily compressed data stores, and always will be. The model doesn't report on such "searches" because they are not actual searches driven by model output, but just the regular operation of the model as driven by the inference engine used.
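To make the point concrete, here's a toy sketch (my own illustration, not from the thread): a bigram "language model" that reproduces its training text purely through inference. There is no lookup or search step anywhere in generation, so there is nothing for the model to "report" on.

```python
from collections import defaultdict

corpus = "the proof uses the standard argument from the literature".split()

# "Training": lossily compress the corpus into next-word lists.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# "Inference": greedy generation. The output echoes memorized training
# text, yet no function here queries a corpus or performs any search --
# it is just the regular operation of the model.
def generate(start, steps=6):
    out = [start]
    for _ in range(steps):
        nexts = model.get(out[-1])
        if not nexts:
            break
        out.append(nexts[0])
    return " ".join(out)

print(generate("the"))
```

The generated text contains fragments of the training data, but asking this model "where did you search for that?" is a category error: retrieval happened implicitly at training time, not as a runtime action it could describe.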
> even Terence Tao has freely admitted as much
Bit of a (willfully?) misleading way of saying they actively looked for it on a best-effort basis, isn't it?
[*] A valid point of criticism would be that the training data is kept private for the proprietary models Tao and co. are using, so source finding becomes a goose chase with no definitive end to it.
A counterpoint I think is valid, however, is that if locating such literature content is so difficult for subject matter experts, then the model being able to "do so" is in itself a demonstration of value, even if the model cannot venture a backreference, precisely because no actual search is involved.
This is reflected in many other walks of life too. One of my long-held ideas regarding UX, for example, is that features users cannot find "do not exist".