Comment by zvmaz

2 years ago

> The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall.

Does 'highly improbable' here mean "not previously seen"?

I interpret "highly improbable" here as referring to a model's "prior probability" before seeing the data.

It's kinda like accusing ChatGPT's explanations of being p-hacking rather than truly generalisable insights based on plausibility and generalised predictive value.

Another way to interpret this is via the adage "It's the theory that determines what can be observed, not the other way round" (supposedly an Einstein quote). ChatGPT fits theories that are highly probable within its established universe of discourse, in the sense that they rest on how it already interprets these observations. Generating theories that would require reinterpreting that universe of discourse, with observations emerging or being read in a different light, is simply not something ChatGPT can do, and such theories would therefore be given very low probability given the data. In other words, unlike model inference, theory generation is a forward process, not a posterior one.
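
To make the "fixed universe of discourse" point concrete, here's a toy Bayesian model-comparison sketch (purely my own illustration; the hypotheses, priors and likelihoods are made up, and this is not a claim about how ChatGPT actually works). Posterior updating only redistributes probability over hypotheses that are already on the table, so a theory that was never proposed cannot gain posterior mass no matter what the data say:

```python
# Toy illustration (made-up numbers): Bayesian updating can only reweight
# hypotheses that are already enumerated in the hypothesis set.
hypotheses = {
    # hypothesis: (prior probability, likelihood of the observed data)
    "heavy things naturally seek the ground": (0.6, 0.2),
    "an attractive force pulls masses together": (0.4, 0.7),
    # "mass bends space-time" is simply absent from the set, so it can
    # never receive any posterior probability, however strong the data.
}

evidence = sum(prior * like for prior, like in hypotheses.values())
posterior = {h: prior * like / evidence
             for h, (prior, like) in hypotheses.items()}

for h, p in posterior.items():
    print(f"{p:.3f}  {h}")
```

The toy is only meant to show the structural point: inference selects among existing candidates, whereas proposing a genuinely new candidate is a forward, generative step that sits outside this loop.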

Oh yeah, your question immediately made me think of "black swan events", the economics parable about things we've never seen or even imagined, until one day someone sees or imagines them. So, loosely speaking, Einstein's General Theory of Relativity was an improbable idea in this black swan sense.