Comment by johnfn
20 hours ago
I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.
Their comments are full of "it's not x, it's y" over and over. Short pithy sentences. I'm quite confident it's AI written, maybe with a more detailed prompt than average.
I guess this is the end of the human internet
To give them the benefit of the doubt, people who talk to AI too much probably start mimicking its style.
yea, i was suspicious by the second paragraph but was sure once i got to "that’s not engineering, it’s cosplay"
It's also the wording. The weird phrases
"Glorified Google search with worse footnotes" what on earth does that mean?
AI has a distinct feel to it
I've had that exact phrase pop up from an LLM when I asked it for a more negative code review
Your intuition on AI is out of date by about 6 months. Those telltale signs no longer exist.
It wasn't AI generated. But if it was, there is currently no way for anyone to tell the difference.
I’m confused by this. I still see this kind of phrasing in LLM generated content, even as recent as last week (using Gemini, if that matters). Are you saying that LLMs do not generate text like this, or that it’s now possible to get text that doesn’t contain the telltale “its not X, it’s Y”?
> But if it was, there is currently no way for anyone to tell the difference.
This is false. There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).
I've tested some of those services and they weren't very reliable.
If such a thing did exist, it would exist only until people started training models to hide from it.
Negative feedback is the original "all you need."
> It wasn't AI generated.
You're lying: https://www.pangram.com/history/94678f26-4898-496f-9559-8c4c...
Not that I needed pangram to tell me that, it's obvious slop.
I wouldn't know how to prove otherwise, other than to tell you that I have seen these tools show incorrect results for both AI-generated text and human-written text.
Good thing you had a stochastic model backing up (with "low confidence", no less) your vague intuition that a comment you didn't like was AI-written.
I must be a bot because I love existential dread, that's a great phrase. I feel like they trigger a lot on literate prose.
(edit: removed duplicate comment from above, not sure how that happened)
the poster is in fact being very sarcastic. arguing in favor of emergent reasoning does in fact make sense
It's a formal sarcasm piece.
It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.
(edit: fixed link)
I thought the mockery and sarcasm in my piece was rather obvious.
Poe's Law is the real Bitter Lesson.
We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"
I'm pretty sure it's like "can it run DOOM", and someone could make an LLM that passes it running on a pregnancy test.