Comment by snowmobile
21 hours ago
> That 20ms is a smoking gun - it lines up perfectly with the mysterious pattern we saw earlier!
Speaking of smoking guns, anybody else reckon Claude overuses that term a lot? Seems anytime I give it some debugging question, it'll claim some random thing like a version number or whatever, is a "smoking gun"
Yes! While this post was written entirely by me, I wouldn't be surprised if I had "smoking gun" ready to go because I spent so much time debugging with Claude last night.
It's interesting how LLMs influence us, right? The opposite happened to me: I loved using em dashes, but AI ruined it for me.
I still love using em dashes, and people already thought I was a robot!
https://xkcd.com/3126/
Soon the Andy 3000 will finally be a reality...
I used to love using em dashes.
I still do - but I used to, too.
Serious question though, since AI is supposedly so capable and intelligent: why wouldn't it be able to tell you the exact reason that I could tell you just by reading the title of this post on HN? It's failing even at the one thing it could probably do decently: being a search engine.
Direct answers are often useless without building up context for them.
Reminds me of Etymology Nerd's videos. He has some content about how LLMs will influence human language.
Some day in the future we will complain about AIs with a 2015 accent because that’s the last training data that wasn’t recursive.
The "maybe" of yesterday is the "you're absolutely right!" of tomorrow.
shouldn't it be "human language influences human language"?
ChatGPT too. And "lines up perfectly" when it doesn't actually line up with anything.
Same with Gemini.
You can absolutely see this pattern in Gemini in 2026.
Btw, is the injection of "absolutely" and "in $YEAR" prevalent in other LLMs as well, or is it just in Gemini's dialect?
"You're so right, that nice catch lines up perfectly!"
It's not just a coincidence, it's the emergence of spurious statistical correlations when observations happen across sessions rather than within sessions.
You can add an em dash, and we've completed the BS bingo. :)
I chuckled out loud. It's funny cause it's true.
Or the "Eureka! That's not just a smoking gun, it's a classic case of LLMspeak."
Grok, ChatGPT, and Claude all have these tics, and even the pro versions will use their signature phrases multiple times in an answer. I have to wonder if it's deliberate, to make detecting AI easier?
A computational necromancer has likely figured out a way to power a data center by making Archimedes spin in his grave very fast.
I'd love to delve into that.
https://pshapira.net/2024/03/31/delving-into-delve/
Without knowing how an LLM's personality tuning works, I'd just hazard a guess that the excitability (the tendency to use excited phrases) is turned up. "Smoking gun" must be highly rated as a term of excitability. The same should apply to other phrases like "outstanding!", "good find!", "you're right!", etc.
I'm working on a little SRE agent to pre-load tickets with information to help our on-call and I'm already tired of Claude finding 'smoking guns'.
You might see certain phrases and em dashes ;-) rather often because … these programs are trained on data written by people (or Microsoft's spelling correction) who overused them over the last n years. So what should these poor LLMs generate instead?
They love clichés, and hate repeating the same words for something (repetition penalty) so they'll say something like "cause" then it's a "smoking gun" then it's something else
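For anyone curious, the repetition-penalty idea mentioned above can be sketched in a few lines. This is a minimal illustration with made-up logits and token names (the function name, penalty value, and scores are all hypothetical, not any particular model's implementation): tokens that already appeared in the output get their logits scaled down, so a fresh synonym ("smoking gun" after "cause") wins the next sampling step.

```python
def apply_repetition_penalty(logits, generated_tokens, penalty=1.2):
    """Penalize tokens that already appeared in the generated output.

    logits: dict mapping token -> raw score
    generated_tokens: tokens emitted so far
    penalty: factor > 1; higher means stronger aversion to repeats
    """
    adjusted = dict(logits)
    for tok in set(generated_tokens):
        if tok not in adjusted:
            continue
        score = adjusted[tok]
        # Positive logits shrink, negative logits grow more negative,
        # so a repeated token always becomes less likely either way.
        adjusted[tok] = score / penalty if score > 0 else score * penalty
    return adjusted

# Hypothetical example: "cause" was already used, so "smoking_gun"
# overtakes it as the most likely next choice.
logits = {"cause": 2.0, "smoking_gun": 1.8, "culprit": 1.5}
print(apply_repetition_penalty(logits, ["cause"], penalty=1.5))
```

With the penalty applied, "cause" drops from 2.0 to about 1.33 while "smoking_gun" keeps its 1.8, which is roughly the synonym-cycling behavior the comment describes.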
I don't think Claude has even once used this in my conversations (Claude Desktop, Claude Code, voice conversations...). Sycophancy, yes, absolutely!
Maybe it has something to do with your profile/memories?
smoking gun, you're absolutely right, good question, em dash, "it isn't just foo, it's also bar", real honest truth, brutal truth, underscores the issue, delves into, more em dashes, <20 different hr/corporate/cringe phrases>.
It's nauseating.
You might find this a fun read: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
It's what they read on The Internets during training, so don't expect them to generate new phrases beyond what they learned there.
### The answer that fits everything (and what to do about it)
That's the point though, it doesn't reflect human usage of the word. If delve were so commonly used by humans too, we wouldn't be discussing how it's overused by LLMs.
Come on...haven't we all had to deal with the crazy smart lead who was loaded with those same types of annoying tics?
Considering what these LLMs bring to the table, I think a little tolerance for their cringe phrases is in order.
Yes, it’s kind of a corpus delicti. ;)
I see it from GPT5 too a lot
At this point I'm just so glad that "you're absolutely right!" phase is over.
It's a smoking gun of Claude usage.
> Speaking of smoking guns
Oh shoot! A shooting.
So the TL;DR of this post is: don't change this setting unless you know what you're doing.
Chastise it with a reminder that you're using smokeless powder.