Comment by chrisjj
19 hours ago
> The notorious “you are absolutely right”, which no-living human ever used before, at-least not that I know of
What should we conclude from those two extraneous dashes....
That I'm a real human being that is stupid in English sometimes? :)
I knew it was real as soon as I read “I stared to see a pattern”. Funnily enough, I now find weird little non-spellcheck mistakes endearing, since they stamp “oh, this is an actual human” on the work.
Ha! Despite the fact that I tend to proofread my posts before publishing, right after publishing, and sometimes re-read them a few months after publishing, I still tend not to notice some obvious typos. Kinda makes you appreciate the profession of editors and spell checkers. (And yes, I use LanguageTool in neovim, but I refuse to feed my articles to LLMs.)
Or the user has "ChatGPT, add random misspellings so it looks like a human wrote this" in their system config.
I'd rather read 100 blog posts by humans doing their best to write coherent English than one LLM-sandblasted post.
That's just what an AI would say :)
Nice article, though. Thanks.
The funny thing is I knew people who used the phrase 'you're absolutely right' very commonly...
They were salespeople, and part of the pitch was getting the buyer to come to a particular idea "all on their own" and then making them feel good about how smart they were.
The other funny thing about em dashes is that a number of HN'ers use them, and I've seen them called bots. But when you dig deep into their posts, they've been using em dashes for 10 years back... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.
These phrases came from somewhere, and when you look at large enough populations you're going to find people who just naturally align with how LLMs also talk.
That said, when the number of people who talk like that gets too high, the statistical likelihood that they are all human drops considerably.
I'm a confessed user of em-dashes (or en-dashes in fonts that feature overly accentuated em-dashes). It's actually kind of hard not to use them if you've ever worked with typography and know your dashes and hyphens. —[sic!] Also, those dashes are conveniently accessible on a Mac keyboard. There may be some Win/PC bias in the em-dash giveaway theory.
A few writer friends even had a coffee mug with the Alt+number combination for the em-dash in Windows, given out by a content marketing company. The em dash was already very widespread in writing circles years ago. Developers keep forgetting they're in a massively isolated bubble.
I use them -but I generally use the short version (I'm lazy), while AI likes the long version (which is correct -my version is not).
You don't use em dashes, then; you use en dashes.
I don't know why LLMs talk in a hybrid of corporatespeak and salespeak, but they clearly do. On the one hand, that makes their default style stick out like a sore thumb outside LinkedIn; on the other hand, it's utterly enervating to read when suddenly every other project shared here speaks with one grating voice.
Here's my list of current Claude (I assume) tics:
https://news.ycombinator.com/item?id=46663856
> part of the pitch was getting the buyer to come to a particular idea "all on their own" and then making them feel good about how smart they were.
I can usually tell when someone is leading me like this, and I resent them for trying to manipulate me. I start giving the opposite of the answer they're looking for, out of spite.
I’ve also had AI do this to me. At the end of it all, I asked why it didn’t just give me the answer up front. It was a bit of a conspiracy theory, and it said I’d believe it more if I was led to think I got there on my own, with a bunch of context, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.
An LLM cannot explain itself and its explanations have no relation to what actually caused the text to be generated.
Those are hyphens.