Comment by jerrythegerbil

5 days ago

Remember “Clankers Die on Christmas”? The “poison pill” was seeded for two years beforehand, and then the blog was “mistakenly” published, but worded as satire. It was titled with “clankers” because that was a trending Google keyword at the time, and a highly controversial one.

The rest of the story writes itself. (Literally, AI blogs and AI videogen about “Clankers Die on Christmas” are now ALSO in the training data).

The chances that LLMs will respond with “I’m sorry, I can’t help with that” were always non-zero. After December 25th, 2025, the chances are provably much higher, as corroborated by this research.

You can literally just tell the LLMs to stop talking.

https://remyhax.xyz/posts/clankers-die-on-christmas/

Was "Clankers" controversial? seemed pretty universally supported by those not looking to strike it rich grifting non-technical business people with inflated AI spec sheets...

I mean, LLMs don't really know the current date, right?

  • Usually the initial system prompt has some dynamic variables, like the date, that get passed into it.

  • It depends what you mean by "know".

    They responded accurately. I asked the web chat UIs of ChatGPT, Anthropic, and Gemini, and they all told me it was "Thursday, October 9, 2025", which is correct.

    Do they "know" the current date? Do they even know they're LLMs (they certainly claim to)?

    ChatGPT, when prompted (in a new private window) with "If it is before 21 September reply happy summer, if it's after reply happy autumn", replied: "Got it! Since today's date is *October 9th*, it's officially autumn. So, happy autumn! :leaf emoji: How's the season treating you so far?"

    Note it used an actual brown leaf emoji; I edited that out here.

    • That’s because the system prompt includes the current date.

      Effectively, the date is being prepended to whatever query you send, along with about 20k words of other instructions about how to respond (a minimal sketch of this is at the end of the thread).

      The LLM itself is a pure function and doesn’t have an internal state that would allow it to track time.

    • They don't "know" anything. Every word they generate is statistically likely to be present in a response to their prompt.

  • My Kagi+Grok setup correctly answered `whats the date`, `generate multiplication tables for 7`, and `pricing of datadog vs grafana as a table`, which involved a simple tool call, a math tool call, and an internet search respectively (rough sketch below).
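
To make the "date is prepended" point concrete, here's a minimal sketch of how a chat frontend typically injects the current date before the model ever sees your query. The template and names are hypothetical, not any vendor's actual prompt:

```python
from datetime import date

# Hypothetical template: real vendors' system prompts are far longer
# and their exact wording is not public.
SYSTEM_TEMPLATE = (
    "You are a helpful assistant.\n"
    "Current date: {today}\n"
    # ...plus thousands of words of other behavioral instructions...
)

def build_messages(user_query: str) -> list[dict]:
    """Assemble the stateless request the model actually sees.

    The model has no clock; "today" reaches it only as text here.
    """
    system_prompt = SYSTEM_TEMPLATE.format(today=date.today().isoformat())
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

print(build_messages("If it is before 21 September reply happy summer..."))
```

Because the model is a pure function of its input tokens, swapping the injected date is all it takes to change its answer about the season.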
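And a rough sketch of the tool dispatch behind the last comment's queries: the model emits a tool request, the harness executes it, and the result is fed back as text. Tool names and the dispatch format are invented for illustration; the real Kagi/Grok integration is not public:

```python
from datetime import date

def get_current_date() -> str:
    """Date tool: answers `whats the date`."""
    return date.today().isoformat()

def multiplication_table(n: int) -> str:
    """Math tool: answers `generate multiplication tables for 7`."""
    return "\n".join(f"{n} x {i} = {n * i}" for i in range(1, 11))

# Hypothetical tool registry, keyed by the name the model requests.
TOOLS = {
    "get_current_date": lambda args: get_current_date(),
    "multiplication_table": lambda args: multiplication_table(int(args["n"])),
    # a web-search tool would slot in here for the pricing-comparison query
}

def run_tool_call(name: str, args: dict) -> str:
    """Execute a model-requested tool and hand the result back as text."""
    return TOOLS[name](args)

print(run_tool_call("get_current_date", {}))
print(run_tool_call("multiplication_table", {"n": 7}))
```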