Comment by tough
2 months ago
They aren't.
A model learns words (or, more pedantically, tokens) but has no sense of time and can't track dates.
Yup. Either the system prompt includes a date it can parrot, or it doesn't and the LLM will just hallucinate one as needed. Looks like it's the latter case here.
Technically they don’t, but OpenAI must be injecting the current date and time into the system prompt, and Gemini just does a web search for the time when asked.
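A minimal sketch of that kind of injection (the prompt wording and function name here are assumptions for illustration, not OpenAI's actual template):

```python
from datetime import datetime, timezone

def build_system_prompt(base: str) -> str:
    # Prepend the current date so the model can "parrot" it back
    # instead of hallucinating one. This lives in context only;
    # nothing about "now" is in the weights.
    today = datetime.now(timezone.utc).date().isoformat()
    return f"{base}\nCurrent date: {today}"

print(build_system_prompt("You are a helpful assistant."))
```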
right but that's system prompting / in context
not really -trained- into the weights.
the point is you can't ask a model what its training cutoff date is and expect a reliable answer from the weights themselves.
the closest you could get is a benchmark with -timed- questions it could only answer if it had been trained on that period, and you'd have to deal with hallucinations vs. correctness, etc.
just not what LLMs are made for; RAG solves this, though
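A toy sketch of that RAG point: fetch fresh facts from outside the model at query time and put them in context, rather than hoping they were trained into the weights (the store contents and keys below are made up for illustration):

```python
# Stand-in for a real search or vector index; in practice this would be
# a live lookup (clock API, vendor model card, web search), not a dict.
store = {
    "training cutoff": "stated on the vendor's model card, not by the model",
    "current time": "fetched live from a clock/API at query time",
}

def build_context(question: str) -> str:
    # Naive keyword retrieval: prepend any matching fact to the prompt
    # so the model answers from context instead of its weights.
    for key, fact in store.items():
        if key in question.lower():
            return f"Context: {fact}\nQ: {question}"
    return f"Q: {question}"

print(build_context("What is your training cutoff?"))
```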
What would the benefits be of actual time concepts being trained into the weights? Isn’t just tokenizing the dates and including those as normal enough to yield benefits?
E.g. it probably has a pretty good understanding between “second world war” and the time period it lasted. Or are you talking about the relation between “current wall clock time” and questions being asked?
OpenAI injects a lot of stuff: your name, subscription status, recent threads, memory, etc.
sometimes it's interesting to peek at the network tab in dev tools
strange they would do that client side