Comment by SomewhatLikely

2 years ago

Thanks, this gave some good insight into GPT-4. If I provide the entire Wikipedia page contents but blank out the movie name and the director's name, it can't recall it: https://chat.openai.com/share/c499e163-3745-48c3-b00e-11ea42...

However, if I add the director's name, it gets it right: https://chat.openai.com/share/a602b3b0-5c17-4b4d-bed8-124197...

If I only tell it that it's a 1980s film and give the director's name, it can still get it: https://chat.openai.com/share/d6cf396b-3199-4c80-84b9-d41d23...

So it's clearly not able to look this movie up semantically and needs a strong key like the director's name.
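
For anyone who wants to reproduce this, the masking probe is easy to script. A rough sketch, assuming the openai Python SDK; the file name, model string, and redaction terms below are placeholders, not what I actually used:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical local copy of the article and hypothetical strings to redact
    article = open("film_wikipedia_page.txt").read()
    for term in ["Movie Title", "Director Name"]:
        article = article.replace(term, "[REDACTED]")

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Which film is this Wikipedia article about?\n\n" + article}],
    )
    print(resp.choices[0].message.content)

You can then re-run it with the director's name left unredacted to see the recall flip from miss to hit.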

EDIT: Digging deeper, it's clear the model has only a very foggy idea of what the movie is about: https://chat.openai.com/share/d0701f53-1250-421e-aa4b-dc8156... People have described these kinds of outputs as the text equivalent of a highly compressed JPEG, which fits well with what's going on here. It gets some of the top-level elements right and sort of remembers that there's some kind of vehicle that's important, but it has forgotten all the details, even the date the movie was released. Unlike a human, who might signal their fuzziness (was it a helicopter or a submarine?), GPT-4 gladly pretends it knows what it's talking about. I think this is likely a solvable problem: the model probably has the information to know when it's confident and when it's in a fuzzy-JPEG region, but the current alignment isn't doing a great job of surfacing that.
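
One crude way to peek at that latent confidence today is token logprobs, where the API exposes them. A hedged sketch, again assuming the openai Python SDK and a model/endpoint that actually returns logprobs; the question is just an illustration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        logprobs=True,
        top_logprobs=3,
        messages=[{"role": "user",
                   "content": "One word only: was the key vehicle in the film "
                              "a helicopter or a submarine?"}],
    )

    # Closely spaced alternatives suggest the model is guessing;
    # one dominant token suggests it actually remembers.
    for tok in resp.choices[0].logprobs.content:
        alts = [(alt.token, round(alt.logprob, 2)) for alt in tok.top_logprobs]
        print(tok.token, alts)

That's not the same as the model verbalizing its uncertainty, but it shows the raw signal is there for the alignment layer to surface.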