Comment by Arainach

2 days ago

Whether the work is done by a human or a computer, it is usually easier (and requires far fewer resources) to verify a specific proof than to search for one.

Professors elsewhere can verify the proof, but not how it was obtained. My assumption was that the focus here is on how "AI" obtains the proof and not on whether it is correct. There is no way to reproduce this experiment in an unbiased, non-corporate, academic setting.

  • What bias?

    It seems to me that, in your view, mere openness to evaluating LLM use, anecdotally or otherwise, is already a bias.

    I don't see how that's sensible: to evaluate the utility of something, you have to accept the possibility that such utility exists in the first place.

    On the other hand, assuming I'm not just strawmanning you, your rejection of that possibility is absolutely a bias, and one that inhibits exploration.

    Willfully conflating "I find such an exploration illegitimate" with "the findings of someone who thinks otherwise are illegitimate" strikes me as extremely deceptive. I don't much appreciate having someone else's opinion covertly laundered into my thinking. And no, Tao's comments do not meet that same criterion, as his position is explicit, not covert.