
Comment by pixl97

15 hours ago

The funny thing is I knew people who used the phrase 'you're absolutely right' all the time...

They were salespeople, and part of the pitch was getting the buyer to come to a particular idea "all on their own" and then making them feel good about how smart they were.

The other funny thing, about em dashes: there are a number of HN'ers who use them, and I've seen them called bots. But when you dig into their post history, they were using em dashes 10 years back... Unless they are way ahead of the game in LLMs, it's a safe bet they're human.

These phrases came from somewhere, and when you look at a large enough population you're going to find people who just naturally align with how LLMs talk.

That said, when the number of people who talk like that gets too high, the statistical likelihood that they are all human drops considerably.

I'm a confessed user of em dashes (or en dashes in fonts with overly accentuated em dashes). It's actually kind of hard not to use them if you've ever worked with typography and know your dashes and hyphens. —[sic!] Also, those dashes are conveniently accessible on a Mac keyboard. There may be some Win/PC bias in the em-dash giveaway theory.

  • A few writer friends even had a coffee mug with the Alt+number combination for the em dash in Windows, given to them by a content marketing company. It was already very widespread in writing circles years ago. Developers keep forgetting they're in a massively isolated bubble.

I use them -but I generally use the short version (I'm lazy), while AI likes the long version (which is correct -my version is not).

I don't know why LLMs talk in a hybrid of corporate-speak and sales-speak, but they clearly do. On the one hand, that makes their default style stick out like a sore thumb outside LinkedIn; on the other, it's utterly enervating to read when suddenly every other project shared here speaks with the same grating voice.

Here's my list of current Claude (I assume) tics:

https://news.ycombinator.com/item?id=46663856

> part of the pitch was getting the buyer to come to a particular idea "all on their own" and then making them feel good about how smart they were.

I can usually tell when someone is leading me like this, and I resent them for trying to manipulate me. Out of spite, I start giving the opposite of the answer they're looking for.

I've also had AI do this to me. At the end of it all, I asked why it didn't just give me the answer up front. It was a bit of a conspiracy theory, and it said I'd believe it more if I was led, with a bunch of context, to think I'd gotten there on my own, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.

  • An LLM cannot explain itself and its explanations have no relation to what actually caused the text to be generated.