Comment by Terr_
7 days ago
I don't think it's too unexpected: an LLM is an algorithm that takes a document and guesses a plausible extra piece to add. It makes sense that it would generate more pleasing output when run against a document that strongly resembles those it was trained on, as opposed to a document made by merging two dissimilar kinds of document.
Sure, just one cat-fact can have a big impact, but it already takes a good deal of circumstance and luck for an LLM to answer a math problem correctly. (Unless someone's cheating with additional non-LLM code behind the scenes.)
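The mismatch intuition above can be sketched with a toy statistical model. This is only an illustration, not a real LLM: `BigramModel`, the tiny arithmetic corpus, and the spliced-in cat fact are all made-up stand-ins. The point it shows is that a document resembling the training data gets a higher average per-token likelihood than the same document with an unrelated sentence merged into it.

```python
import math
from collections import Counter

class BigramModel:
    """Toy bigram language model with add-one smoothing (illustrative only)."""

    def __init__(self, corpus):
        self.uni = Counter()
        self.bi = Counter()
        for line in corpus:
            toks = ["<s>"] + line.split() + ["</s>"]
            self.uni.update(toks[:-1])            # context token counts
            self.bi.update(zip(toks, toks[1:]))   # adjacent-pair counts
        self.vocab = len(self.uni) + 2            # rough vocab size for smoothing

    def avg_log_prob(self, line):
        """Average per-token log-likelihood; higher means more 'familiar'."""
        toks = ["<s>"] + line.split() + ["</s>"]
        total = 0.0
        for a, b in zip(toks, toks[1:]):
            # add-one smoothing: unseen pairs still get a small probability
            total += math.log((self.bi[(a, b)] + 1) / (self.uni[a] + self.vocab))
        return total / (len(toks) - 1)

# 'Training data': nothing but small arithmetic statements.
corpus = ["2 + 2 = 4", "3 + 5 = 8", "1 + 6 = 7", "4 + 4 = 8"]
model = BigramModel(corpus)

pure   = "2 + 5 = 7"
merged = "2 + cats sleep sixteen hours a day 5 = 7"
# The merged document scores worse under the familiar distribution.
print(model.avg_log_prob(pure) > model.avg_log_prob(merged))
```

A real LLM is vastly more capable than a bigram counter, but the same pressure applies: splicing an out-of-distribution sentence into a prompt pushes the whole document away from the patterns the model fits best.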