Searching for that phrase now shows your blog post as the top reference, and the AI overview now says it's a "nonsensical phrase used to illustrate how search engines can generate misleading or fabricated explanations for arbitrary inputs"! :O
lol so it's getting that bad. Assigning meaning to random phrases is BS. If it keeps on going it'll start attributing meaning to misspelled words.
LLMs are only as good or bad as they're built to be - their training and parameters? Google got real sad in the mid-00s - it's all about the money now, isn't it.
OK wow that actually fits here. https://simonwillison.net/2025/Apr/23/meaning-slop/
This came up recently [1] re Google A.I. BSing.
[1] https://news.ycombinator.com/item?id=43748171 ('Epistemological Slop: Lies, Damned Lies, and Google' - <newcartographies.com>)