We need you to stop posting shallow dismissals and cynical, curmudgeonly, and snarky comments.
We asked you about this just recently, but it's still most of what you're posting. You're making the site worse by doing this, right at the point where it's most vulnerable these days.
Your comment here is a shallow dismissal of exactly the type the HN guidelines ask users to avoid:
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something." (https://news.ycombinator.com/newsguidelines.html)
Predictably, it led to by far the worst subthread on this article. That's not cool. I don't want to ban you because you're also occasionally posting good comments that don't fit these negative categories, but we need you to fix this and stop degrading the threads.
Out of curiosity of someone who missed out on this, what is the site vulnerable to?
Cynical, curmudgeonly, dismissive comments that ruin it as a place for curiosity.
If you're interested, https://news.ycombinator.com/item?id=46508115 is another place I wrote about this recently.
It's the biggest problem facing HN, in my opinion.
I'd rather HN become a much worse place than the world suffer through AI's massive wealth theft, the BIG LIE that will convince elites to kill millions of people.
Obviously it's our job to ban accounts that make HN a much worse place, but I'm more curious to understand your thinking here.
What's the connection between these two things? They don't seem related to me. How would making HN worse contribute to alleviating world suffering or saving millions of people?
I think there is no person more qualified than Tao to tell what is an interesting development in math and what is not.
Whether powered by human or computer, it is usually easier (and requires far fewer resources) to verify a specific proof than to search for a proof to a problem.
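To make that asymmetry concrete, here is a toy Lean sketch (my own illustration, not from the article): once a proof exists, the kernel re-checks it mechanically and cheaply; producing it in the first place is the open-ended, expensive step.

    -- Verifying this proof is a quick, mechanical kernel check.
    -- Finding it (whether by a human, a search tactic like `exact?`,
    -- or an LLM) is the expensive, open-ended part.
    theorem toy_add_zero (n : Nat) : n + 0 = n := by
      rfl

    -- Once stated, anyone can re-check the result independently and cheaply.
    #check @toy_add_zero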
Professors elsewhere can verify the proof, but not how it was obtained. My assumption was that the focus here is on how "AI" obtains the proof and not on whether it is correct. There is no way to reproduce this experiment in an unbiased, non-corporate, academic setting.
What bias?
It seems to me that in your view the sheer openness to evaluate LLM use, anecdotally or otherwise, is already a bias.
I don't see how that's sensible, given that to evaluate the utility of something, it's necessary to accept the possibility of that utility existing in the first place.
On the other hand, if this is not just me strawmanning you, your rejection of such a possibility is absolutely a bias, and it inhibits exploration.
Willfully conflating finding such an exploration illegitimate with finding the results of someone who thinks otherwise illegitimate strikes me as extremely deceptive. I don't much appreciate being forced to think with someone else's opinion covertly laundered in. And no, Tao's comments do not meet this criterion, because his position is explicit, not covert.
> ... Also, I would not put it past OpenAI to drag up a similar proof using ChatGPT, refine it and pretend that ChatGPT found it. ...
That's the best part! They don't even need to, because ChatGPT will happily do its own private "literature search" and then not tell you about it - even Terence Tao has freely admitted as much in his previous comments on the topic. So we can at least afford to be a bit less curmudgeonly and cynical about that specific dynamic: we've literally seen it happen.
> ChatGPT will happily do its own private "literature search" and then not tell you about it
Also known as model inference. This is not something "private" or secret [*]. AI models are lossily compressed data stores and always will be. The model doesn't report on such "searches" because they are not actual searches driven by model output, but simply the regular operation of the model as driven by the inference engine.
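To make the distinction concrete, a rough Python sketch (mine, using made-up stand-in functions rather than any real vendor API): plain inference "recalls" whatever is baked into the weights and leaves no search event to report, whereas an explicit retrieval tool call is an observable event the surrounding system can log and cite.

    # Illustrative stand-ins only, not a real model or vendor API.

    def forward_pass(prompt: str) -> str:
        """Plain inference: any 'recall' of training data happens inside
        the weights, so there is no search event the model could report."""
        return "a proof sketch that may echo something seen in training"

    def retrieval_tool(query: str) -> list[str]:
        """An explicit tool call: an actual search, which the surrounding
        system can log and surface as sources."""
        return ["https://example.org/possibly-relevant-paper"]

    def answer_plain(prompt: str) -> dict:
        # Nothing to cite: any "literature search" is implicit in the weights.
        return {"text": forward_pass(prompt), "sources": []}

    def answer_with_retrieval(prompt: str) -> dict:
        # The search is an observable event, so sources can be reported.
        return {"text": forward_pass(prompt), "sources": retrieval_tool(prompt)}

    print(answer_plain("prove the lemma"))
    print(answer_with_retrieval("prove the lemma"))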
> even Terence Tao has freely admitted as much
Bit of a (willfully?) misleading way of saying they actively looked for it on a best-effort basis, isn't it?
[*] A valid point of criticism would be that the training data is kept private for the proprietary models Tao and co. are using, so source finding becomes a goose chase with no definitive end to it.
A counterpoint I think is valid, however, is that if locating such literature content is so difficult for subject-matter experts, then the model being able to "do so" is in itself a demonstration of value, even if the model cannot venture a backreference, by virtue of that not being an actual search.
This is reflected in many other walks of life too. One of my long-held ideas regarding UX, for example, is that features users are not able to find "do not exist".
According to the authors, it was basically just 1 or 2 inferences of GPT 5.2 Pro.
[flagged]
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
https://news.ycombinator.com/newsguidelines.html
[flagged]
Do you know what a formal proof is?
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
https://news.ycombinator.com/newsguidelines.html
It genuinely seemed to me that they were looking for empirical reproductions of a formal proof, which is a nonsensical demand and objection given what formal proofs are. My question was spurred by this and was genuine.
I now see in the other subthread what they mean.