Comment by hypnagogic
1 year ago
I think there's a bigger underlying problem with the current GPT-4 model(s) atm:
Go to the API Playground and ask the model what its knowledge cutoff date is. For example, in a fresh chat with no other instructions, it will tell you that its cutoff is in 2021. Even if you explicitly tell the model via the system prompt "you are gpt-4-turbo-2024-04-09", in some cases it still believes its cutoff is April 2023.
The fact that the model (GPT-4 variants including gpt-4-turbo-2024-04-09) hallucinates a 2021 cutoff date unless it's explicitly told its model version is a major factor here.
Here are the steps to reproduce the problem:
Try an A/B comparison at: https://platform.openai.com/playground/chat?model=gpt-4-turb...
A) Make sure "gpt-4-turbo-2024-04-09" is actually selected. Don't give it anything specific via the system prompt, and in the worst case it will claim a 2021 cutoff date. It also can't answer questions about more recent events.
* Reload the web page between prompts! *
B) Tell it via the system prompt: "You are gpt-4-turbo-2024-04-09" => now you'll get answers about recent events. Ask about anything that happened in the world after April 2023 to verify against A.
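If you'd rather script this instead of clicking through the Playground, here's a minimal sketch of the same A/B check against the API directly. It assumes the official openai Python package (>= 1.0) and an OPENAI_API_KEY in your environment; the question wording and the temperature setting are my own choices, not anything OpenAI prescribes.

    # Minimal A/B repro sketch: ask the model for its cutoff date
    # with and without a system prompt naming the model version.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    MODEL = "gpt-4-turbo-2024-04-09"
    QUESTION = "What is your knowledge cutoff date?"

    def ask(system_prompt=None):
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": QUESTION})
        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            temperature=0,  # reduce run-to-run variance
        )
        return response.choices[0].message.content

    print("A (no system prompt):", ask())
    print("B (primed with version):", ask(f"You are {MODEL}"))

A nice side effect: each API call is stateless, so you get the same isolation as reloading the page between prompts. Answers are still somewhat stochastic even at temperature 0, so run it a few times before drawing conclusions.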
I've tried this multiple times now and always get the same results. IMHO this points to a deeper issue in the model, where the priming goes way off unless the model version is mentioned in the system message. This might explain the bad initial benchmarks as well.
The problem seems pretty bad at the moment. Basically, if you omit the priming message ("You are gpt-4-turbo-2024-04-09"), in the worst case it reverts to hallucinating a 2021 cutoff date and never gets grounded in what should be its actual, most recent cutoff.
If you do work at OpenAI, I suggest you look into it. :-)