I firmly believe @threethirtytwo’s reply was not produced by an LLM
Regardless of whether this text was written by an LLM or a human, it is still slop, with a human behind it just trying to wind people up. If there is a valid point to be made, it should be made, briefly.
If the point was triggering a reply, the length and sarcasm certainly worked.
I agree brevity is always preferred. Making a good point while keeping it brief is much harder than rambling on.
But length is just a measure; quality determines whether I keep reading. If a comment is too long, I won’t finish reading it. If I kept reading, it wasn’t too long.
Are you expecting people who can't detect self-delusions to be able to detect sarcasm, or are you just being cruel?
> This is a relief, honestly. A prior solution exists now, which means the model didn’t solve anything at all. It just regurgitated it from the internet, which we can retroactively assume contained the solution in spirit, if not in any searchable or known form. Mystery resolved.
Vs
> Interesting that in Terence Tao's words: "(though the new proof is still rather different from the literature proof)"
Pity that HN's ability to detect sarcasm is as robust as that of a sentiment analysis model using keyword-matching.
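To make the comparison concrete, here's a minimal sketch of what I mean by keyword-matching sentiment analysis (toy wordlists I made up, not any real model). It reads the sarcastic quote above as cheerfully positive, which is exactly the failure mode:

    # Toy keyword-matching sentiment scorer -- illustrative only.
    POSITIVE = {"relief", "great", "interesting", "resolved", "honestly"}
    NEGATIVE = {"slop", "regurgitated", "wrong", "failed"}

    def keyword_sentiment(text: str) -> str:
        words = {w.strip('.,!?"').lower() for w in text.split()}
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    # Packed with upbeat keywords, so the sarcasm scores as sincere praise:
    print(keyword_sentiment("This is a relief, honestly. Mystery resolved."))  # positive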
The problem is more that it's an LLM-generated comment that's about 20x as long as it needed to be to get the point across.
It's obviously not LLM-generated.
It's not.
Evidence shows otherwise: Despite the "20x" length, many people actually missed the point.
That’s just the internet. Detecting sarcasm requires a lot of context external to the content of any text. In person, some of that is mitigated by intonation, facial expressions, etc. Typically it also requires that the reader is a native speaker of the language, or at least extremely proficient.
I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.
Their comments are full of "it's not x, it's y" over and over. Short, pithy sentences. I'm quite confident it's AI-written, maybe with a more detailed prompt than average.
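If you want to eyeball that tell yourself, here's a rough grep-style heuristic (the pattern and any threshold are my guesses, not a validated detector; plenty of humans write this way too):

    import re

    # Rough heuristic for the "it's not X, it's Y" construction -- illustrative only.
    TELL = re.compile(
        r"\b(?:it|that|this)['’]s not\b[^.;,]{1,60}?,\s*(?:it|that|this)['’]s\b",
        re.IGNORECASE,
    )

    def count_tells(comment: str) -> int:
        return len(TELL.findall(comment))

    print(count_tells("That’s not engineering, it’s cosplay."))  # 1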
I guess this is the end of the human internet
To give them the benefit of the doubt, people who talk to AI too much probably start mimicking its style.
yea, i was suspicious by the second paragraph but was sure once i got to "that’s not engineering, it’s cosplay"
Your intuition on AI is out of date by about 6 months. Those telltale signs no longer exist.
It wasn't AI generated. But if it was, there is currently no way for anyone to tell the difference.
I’m confused by this. I still see this kind of phrasing in LLM-generated content, even as recently as last week (using Gemini, if that matters). Are you saying that LLMs do not generate text like this, or that it’s now possible to get text that doesn’t contain the telltale “it’s not X, it’s Y”?
> But if it was, there is currently no way for anyone to tell the difference.
This is false. There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).
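For what it's worth, calling such a service generally looks like this (hypothetical endpoint and field names for illustration -- not Pangram's actual API):

    import json
    import urllib.request

    # Hypothetical AI-detection endpoint and response shape; real services
    # differ, this just shows the general POST-text / get-score flow.
    def detect_ai(text: str, endpoint: str = "https://detector.example/v1/classify") -> float:
        req = urllib.request.Request(
            endpoint,
            data=json.dumps({"text": text}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("ai_probability", 0.0)  # assumed field name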
> It wasn't AI generated.
You're lying: https://www.pangram.com/history/94678f26-4898-496f-9559-8c4c...
Not that I needed Pangram to tell me that; it's obvious slop.
(edit: removed duplicate comment from above, not sure how that happened)
the poster is in fact being very sarcastic. arguing in favor of emergent reasoning does in fact make sense
It's a formal sarcasm piece.
It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.
(edit: fixed link)
I thought the mockery and sarcasm in my piece was rather obvious.
We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"
I'm pretty sure it's like "can it run DOOM", and someone could make an LLM that passes this running on a pregnancy test.
Why not plan for a future where a lot of non-trivial tasks are automated instead of living on the edge with all this anxiety?
come out of the irony layer for a second -- what do you believe about LLMs?
I mean, LLMs hit a pretty hard wall a while ago, with the only solution being to throw monstrous compute at eking out the remaining few percent of improvement (real world, not benchmarks). That's not to mention hallucinations / false paths being a foundational problem.
LLMs will continue to get slightly better in the next few years, but mainly a lot more efficient. Which will also mean better and better local models. And grounding might get better, but that just means fewer wrong answers, not better right answers.
So no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.
If all of it is going away and you should deny reality, what does everything else you wrote even mean?
Yes, it is simply impossible that anyone could look at things, do their own evaluations, and come to a different, much more skeptical conclusion.
The only possible explanation is people say things they don't believe out of FUD. Literally the only one.