Comment by fc417fc802
15 hours ago
He's not a skeptic; he's asking you to state your reasoning explicitly, with the expectation that either the readers will learn something or (more likely) you will realize that your thought and speech pattern there was the equivalent of an LLM hallucinating. Yes, you can prompt it as you suggested, and yes, you will generally receive a convincing answer, but it is not doing what you seem to think it is doing, i.e., the generated rating is complete bullshit that the model pulled out of its proverbial ass.
Are you actually curious, or do you just want to argue against it?
I think you're obviously wrong (based on my relatively detailed, but certainly somewhat out of date and not expert-level, knowledge of LLM internals), but if you're willing to explain your reasoning, I'm willing to reconsider my own position in light of any new information or novel observations you might provide.
GP is obviously wrong, and probably doesn't know about calibration, and/or that it isn't even clear how to calibrate frontier models in the way we'd need, given how complex and expensive the training is, and how tricky calibration becomes with e.g. mixture-of-experts and chain-of-thought approaches.
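To make the calibration point concrete: the standard way to check whether stated confidences mean anything is expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence to its empirical accuracy. The sketch below is illustrative only; the numbers are made up, not measurements of any actual model.

```python
# Minimal sketch of expected calibration error (ECE).
# Assumptions: confidences in [0, 1], correctness as 0/1 labels;
# all data here is hypothetical, for illustration only.

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the sample-weighted gap between mean confidence and
    empirical accuracy within each confidence bin."""
    assert len(confidences) == len(correct)
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Bins are (lo, hi]; the first bin also catches exact zeros.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - acc)
    return ece

# A model that always reports "90% sure" but is right half the time
# is badly miscalibrated: ECE = |0.9 - 0.5| = 0.4.
confs = [0.9] * 10
hits = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(round(expected_calibration_error(confs, hits), 2))  # 0.4
```

A model whose self-reported ratings were trustworthy would score near zero here; the dispute above is precisely whether a prompted "rate your confidence" number has ever been validated this way.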
"I can only explain my beliefs to people who promise they'll agree" is certainly a unique take.