Comment by SparkyMcUnicorn
2 months ago
Interesting. It's claiming different knowledge cutoff dates depending on the question asked.
"Who is president?" gives a "April 2024" date.
Question for HN: how are content timestamps encoded during training?
Claude 4's system prompt was published and contains:
"Claude’s reliable knowledge cutoff date - the date past which it cannot answer questions reliably - is the end of January 2025. It answers all questions the way a highly informed individual in January 2025 would if they were talking to someone from {{currentDateTime}}, "
https://docs.anthropic.com/en/release-notes/system-prompts#m...
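For reference, the {{currentDateTime}} bit is just a template variable that gets filled in before the prompt is sent. Here's a minimal sketch of that substitution in Python; the rendering code is my own assumption, not Anthropic's actual implementation, and only the prompt text and placeholder name come from the published prompt:

    # Fill in the {{currentDateTime}} placeholder before sending the
    # system prompt to the model. Template text is from the published
    # Claude prompt; the substitution logic is an assumption.
    from datetime import datetime, timezone

    SYSTEM_PROMPT_TEMPLATE = (
        "Claude's reliable knowledge cutoff date - the date past which it "
        "cannot answer questions reliably - is the end of January 2025. It "
        "answers all questions the way a highly informed individual in "
        "January 2025 would if they were talking to someone from "
        "{{currentDateTime}}."
    )

    def render_system_prompt(template: str) -> str:
        now = datetime.now(timezone.utc).strftime("%A, %B %d, %Y")
        return template.replace("{{currentDateTime}}", now)

    print(render_system_prompt(SYSTEM_PROMPT_TEMPLATE))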
I thought best guesses were that Claude's system prompt ran to tens of thousands of tokens, with figures like 30,000 tokens being bandied about.
But the documentation page linked here doesn't bear that out. In fact the Claude 3.7 system prompt on this page clocks in at significantly less than 4,000 tokens.
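If you want to sanity-check that figure, paste the published prompt into a file and count tokens. Rough sketch below; tiktoken is OpenAI's tokenizer, so the count only approximates Claude's own tokenization, and the file name here is made up for the example:

    # Approximate token count of a published system prompt.
    import tiktoken

    with open("claude_3_7_system_prompt.txt") as f:
        prompt_text = f.read()

    enc = tiktoken.get_encoding("cl100k_base")
    print(len(enc.encode(prompt_text)), "tokens (approximate)")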
They aren't.
A model learns words or tokens by rote, but it has no sense of time and can't track dates.
Yup. Either the system prompt includes a date it can parrot, or it doesn't and the LLM will just hallucinate one as needed. Looks like it's the latter case here.
Technically they don’t, but OpenAI must be injecting the current date and time into the system prompt, and Gemini just does a web search for the time when asked.
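Something like the sketch below is what that injection would look like from the application side: put today's date in the system message so the model has something concrete to report instead of hallucinating. This is how a developer could do it with the OpenAI SDK, not a claim about what OpenAI actually does server-side:

    # Inject the current date into the system message of a chat request.
    from datetime import date
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Current date: {date.today().isoformat()}."},
            {"role": "user", "content": "What's today's date?"},
        ],
    )
    print(response.choices[0].message.content)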