Comment by Pooge

10 hours ago

> Pretty sure that's wrong. The way it works is: we have this equation. It predicts where we expect such stuff to be in X seconds. In X seconds, we check it's indeed there. It's there: actual confirmation, not confirmation bias.

Exactly. My point is that since Einstein's theory, we have known that Newton's law is incomplete, which proves that it was confirmation bias (i.e. that our equations just confirmed what we observed). Since we observed black holes, we knew that Newton's law was incomplete, as it couldn't fully explain their behavior.

> i.e. that our equations just confirmed what we observed

No, no, it's the opposite, and it's key! What we had been observing kept matching what the equations gave us "so far". Without cherry-picking, or refusing to see the cases where the model doesn't apply (consciously or not), which would have been confirmation bias.

We did, in fact, question the model as soon as we noticed it didn't apply.

Confirmation bias implies "cognitive blinkers"; I don't think that happened in this Newton-vs-Einstein story.

But I agree the confirmation bias risk is never very far away. It's an issue in the general population, and it's likely a big issue in research, too.

  • Don't we start the equations after observing a phenomenon? It wouldn't make sense to try to explain something before observing it.

    For example, after observing black holes we understood that Newton's law was not enough to explain them, so we had to find another theory that matched our observations. Now, with quantum mechanics, we know that Einstein's theory is insufficient too (I'm not very knowledgeable about quantum physics myself, though).

    • There's definitely a "seeing an apple fall and intuiting a hypothesis" process early in research, which somehow leads to formulating the equation as a hypothesis.

      So you observe stuff, intuit, and formulate a hypothesis. The hypothesis is a model that you hope matches how the world works, well enough. Developing a scientific hypothesis takes scientific rigor. Among other things:

      - it needs to be testable (it needs to be possible to design some scientific protocol to check the hypothesis)

      - it needs to be formulated before you start experimenting and collecting data (that doesn't mean you can't observe the world beforehand; you just can't use those observations in the data that backs your thesis)

      - it needs to be rooted in existing science and knowledge; it's not a simple "naive" guess. It certainly takes being deeply familiar with the research area.

      Then you test your hypothesis with experiments. You design a significant number of them, and you must not cherry-pick here; that would be confirmation bias (though you can encode the limitations of the model in the previous step). You predict your expected results with the model in your hypothesis, run your experiments, make your measurements, and compute the deltas. Here too, you must not discard or tweak the results to your liking: that would be cherry-picking, or even outright falsification. If the deltas are small enough, and people review your work, and ideally reproduce it (with the same experiments, or with others), a consensus eventually starts forming around your model. Congrats, the model is validated.

      So, people start to use your model. They do exactly the opposite of what you did when you formulated your hypothesis: they don't try to come up with a model from preliminary observations, they assume the model works, and they use it to predict the future.

      Until Einstein comes along :-) and stumbles upon a black hole: an observation that doesn't match. Then your model gets refined (with limits and restrictions) or even "deprecated".

      But yeah, physicists model the world after the observations they make, not the other way around; otherwise they're doing something else (math, maybe, or philosophy, or whatever). It's just that designing the model is only the beginning: you have to check that it works, with carefully designed experiments, before it can be validated.

      With this asker-vs-guesser thing, we don't have convincing work that provides the validation step. That means asker-vs-guesser is a hypothesis, at best (at best, because we don't even know whether the requirements for formulating a scientific hypothesis were respected).
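The predict-measure-compare-deltas loop described a few comments up can be sketched in a few lines. This is a toy illustration only: the falling-object scenario, the measured values, and the tolerance are all made up.

```python
# Toy sketch of the validation loop: predict with the model, measure,
# compute deltas, accept only if ALL deltas are within tolerance.

def newton_position(x0, v0, t, g=9.81):
    """Predict height under constant gravity (the 'model' under test)."""
    return x0 + v0 * t - 0.5 * g * t ** 2

# Hypothetical experiments: (time in s, measured height in m),
# for an object dropped from 100 m. Numbers are invented for illustration.
experiments = [
    (0.5, 98.77),
    (1.0, 95.08),
    (2.0, 80.35),
]

TOLERANCE = 0.2  # made-up acceptance threshold, in metres

deltas = []
for t, measured in experiments:
    predicted = newton_position(100.0, 0.0, t)
    deltas.append(abs(predicted - measured))

# Every experiment counts: silently dropping the inconvenient ones
# here would be exactly the cherry-picking discussed above.
validated = all(d <= TOLERANCE for d in deltas)
print(deltas, validated)
```

One inconvenient observation (a delta beyond the threshold that you can't explain away) is enough to send the model back for refinement; that's the black-hole moment in the thread above.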