Comment by m3kw9
2 months ago
Even that, we don’t know what got updated and what didn’t. Can we assume everything that can be updated is updated?
> Can we assume everything that can be updated is updated?
What does that even mean? Of course an LLM doesn't know everything, so we wouldn't be able to assume everything got updated either. At best, if they shared the datasets they used (which they won't, because most likely they were acquired illegally), you could make some guesses about what they tried to update.
> What does that even mean?
I think it is clear what he meant and it is a legitimate question.
If you took a 6 year old, told him about the things that happened in the last year, and sent him off to work, did he integrate the last year's knowledge? Did he even believe it or find it true? If that information conflicted with what he knew before, how do we know he will take the most recent thing he is told as the new information? Will he continue parroting what he knew before this last upload? These are legitimate questions we have about our black box of statistics.
Interesting, I read GGP as:
If they stopped learning (= including new data) on March 31 and something popped up on the internet on March 30 (a lib update, a new Nobel, whatever), there's a good chance it got missed, because they probably don't scrape everything in one day (do they?).
That isn’t mutually exclusive with your answer I guess.
edit: thanks adolph for pointing out the typo.
You might be able to ask it what it knows.
So something's odd there. I asked it "Who won Super Bowl LIX and what was the winning score?" (the game was played in February) and the model replied "I don't have information about Super Bowl LIX (59) because it hasn't been played yet. Super Bowl LIX is scheduled to take place in February 2025.".
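If you want to reproduce that kind of probe outside the web UI, here's a minimal sketch, assuming the Anthropic Python SDK with an API key in the environment; the model id below is a placeholder and the reply will of course vary:

    # Probe a model about an event near its claimed knowledge cutoff.
    # Assumes the "anthropic" package is installed and ANTHROPIC_API_KEY is set.
    import anthropic

    client = anthropic.Anthropic()

    probe = "Who won Super Bowl LIX and what was the winning score?"

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=256,
        messages=[{"role": "user", "content": probe}],
    )

    # This only shows what the model *claims* to know, which may lag
    # (or contradict) its advertised training cutoff.
    print(response.content[0].text)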
With LLMs, if you repeat something often enough, it becomes true.
I imagine there's a lot more data pointing to the Super Bowl being upcoming than to the Super Bowl concluding with the score.
Gonna be scary when bot farms are paid to make massive amounts of politically motivated false content specifically targeting future LLMs' training.
Why would you trust it to accurately say what it knows? It's all statistical processes. There's no "but actually for this question give me only a correct answer" toggle.
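About the best you can do is sample the same question several times and check whether the answers even agree with each other. A minimal sketch (my own illustration, not from the thread), again assuming the Anthropic Python SDK and a placeholder model id:

    # Sample the same factual question repeatedly; agreement across samples
    # is evidence of consistency, not correctness.
    # Assumes the "anthropic" package is installed and ANTHROPIC_API_KEY is set.
    import anthropic
    from collections import Counter

    client = anthropic.Anthropic()
    question = "Who won Super Bowl LIX?"

    answers = []
    for _ in range(5):
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model id
            max_tokens=64,
            temperature=1.0,  # non-zero so variation between samples is visible
            messages=[{"role": "user", "content": question}],
        )
        answers.append(resp.content[0].text.strip())

    # Tally the distinct answers; a split vote is a red flag.
    print(Counter(answers))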
When I try Claude Sonnet 4 via web:
https://claude.ai/share/59818e6c-804b-4597-826a-c0ca2eccdc46
>This is a topic that would have developed after my knowledge cutoff of January 2025, so I should search for information [...]