Comment by threethirtytwo

1 day ago

[flagged]

I firmly believe @threethirtytwo’s reply was not produced by an LLM.

  • regardless of whether this text was written by an LLM or a human, it is still slop, with a human behind it just trying to wind people up. If there is a valid point to be made, it should be made, briefly.

    • If the point was to trigger a reply, the length and sarcasm certainly worked.

      I agree brevity is always preferred. Making a good point while keeping it brief is much harder than rambling on.

      But length is just a measure; quality determines whether I keep reading. If a comment is too long, I won’t finish reading it. If I kept reading, it wasn’t too long.

Are you expecting people who can't detect self-delusions to be able to detect sarcasm, or are you just being cruel?

> This is a relief, honestly. A prior solution exists now, which means the model didn’t solve anything at all. It just regurgitated it from the internet, which we can retroactively assume contained the solution in spirit, if not in any searchable or known form. Mystery resolved.

Vs

> Interesting that, in Terence Tao's words: “(though the new proof is still rather different from the literature proof)”

Pity that HN's ability to detect sarcasm is as robust as that of a sentiment analysis model using keyword-matching.

  • That’s just the internet. Detecting sarcasm requires a lot of context external to the content of any text. In person some of that is mitigated by intonation, facial expressions, etc. Typically it also requires that the reader is a native speaker of the language, or at least extremely proficient.

I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.

  • Their comments are full of "it's not x, it's y" over and over. Short pithy sentences. I'm quite confident it's AI written, maybe with a more detailed prompt than average. (Even the toy pattern-matcher sketched below would flag that construction.)

    I guess this is the end of the human internet
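
    To make that concrete, here is a purely hypothetical sketch (mine, not any tool actually used in this thread) of the keyword-matching style of detector being mocked: a regex that counts the "it's not X, it's Y" construction and flags text that leans on it. Like the keyword-based sentiment models joked about above, it has no notion of context, so sarcasm and ordinary emphasis defeat it instantly.

      import re

      # Hypothetical toy detector (illustration only, not a real tool):
      # count the "it's not X, it's Y" contrast construction often cited
      # as an LLM telltale, and flag text that leans on it repeatedly.
      CONTRAST = re.compile(
          r"\bit(?:'s|\s+is)\s+not\b.{1,80}?,\s*it(?:'s|\s+is)\b",
          re.IGNORECASE | re.DOTALL,
      )

      def looks_like_llm_tell(text: str, threshold: int = 2) -> bool:
          # Two or more non-overlapping matches trigger the flag.
          return len(CONTRAST.findall(text)) >= threshold

      print(looks_like_llm_tell(
          "It's not a bug, it's a feature. It's not laziness, it's focus."
      ))  # True

    Which is exactly the point: a matcher this naive will also flag plenty of perfectly human writing.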

  • Your intuition on AI is out of date by about 6 months. Those telltale signs no longer exist.

    It wasn't AI generated. But if it were, there would currently be no way for anyone to tell the difference.

  • It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.

    (edit: fixed link)

    • We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"

      I'm pretty sure it's like "can it run DOOM", and someone could make an LLM that passes this while running on a pregnancy test

Why not plan for a future where a lot of non-trivial tasks are automated instead of living on the edge with all this anxiety?

  • [flagged]

    • I mean... LLMs hit a pretty hard wall a while ago, with the only solution being throwing monstrous compute at eking out the remaining few percent of improvement (real world, not benchmarks). That's not to mention hallucinations / false paths being a foundational problem.

      LLMs will continue to get slightly better in the next few years, but mainly a lot more efficient. Which will also mean better and better local models. And grounding might get better, but that just means fewer wrong answers, not better right answers.

      So no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.

    • If all of it is going away and you should deny reality, what does everything else you wrote even mean?

    • Yes, it is simply impossible that anyone could look at things, do their own evaluations, and come to a different, much more skeptical conclusion.

      The only possible explanation is that people say things they don't believe out of FUD. Literally the only one.