Comment by pdonis
2 days ago
> Scientific disagreements are intricate matters that require the attention of highly trained experts.
Actually this isn't true, at least as far as anything the public needs to care about is concerned. There is a simple test the public can use for any scientific model: does it make accurate predictions, or not? You don't need to understand how a model works to test that. The model can use whatever intricate math it wants, and whatever other stuff it wants, internally--it could involve reading tea leaves and chicken entrails for all you know. But its output is predictions that you can test against actual experiments.
The biggest problem I see with "establishment" science today is that it doesn't work this way. There is no mechanism for having an independent record that the public can access of predictions vs. reality. It's all tied up in esoteric papers.
> There is a simple test the public can use for any scientific model: does it make accurate predictions, or not? You don't need to understand how a model works to test that.
It's quite obvious from your position on this matter that you're not a practicing scientist, so it's very unfortunate that your position is so assertive, as it's mostly wrong.
To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes. Most publications involve some form of hypothesis-prediction-experiment-result profile, and it is the training and expertise (and corroboration by other experiments, and time) that help determine which of those papers establish new science, and which ones go out with last week's trash. The findings in these areas are seldom accessible until the field is very advanced and/or in practical use, as with the example of GPS you gave elsewhere.
> The biggest problem I see with "establishment" science today is that it doesn't work this way. There is no mechanism for having an independent record that the public can access of predictions vs. reality.
There is; it's called a textbook.
> It's quite obvious from your position on this matter that you're not a practicing scientist
You're correct, I'm not. But I'm also not scientifically ignorant. For example, I actually do understand how GPS works, because I've read and understood technical treatments of the subject. But I also know that I don't have to have any of that knowledge to know that my smartphone can use GPS to tell me where I am accurately.
In other words, it's quite obvious from your position that you haven't actually thought through what the test I described actually means.
> To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes.
Sure you do. See my examples of GPS and astronomers' predictions of comet trajectories downthread in response to MengerSponge.
It's true that for predictions of things that the general public doesn't actually have to care about, often it's not really possible to check them without a fairly detailed knowledge of the subject. But those predictions aren't the kind I'm talking about--because they're about things the general public doesn't actually have to care about.
> There is; it's called a textbook.
Textbooks aren't independent. They're written by scientists.
I'm talking about a record that's independent of scientists. For example, being able to verify that GPS works by seeing that your smartphone shows you where you are accurately.
An example of how this ideal can go horribly wrong is CERN.
There's one apparatus (of each type) and each "experiment" ends up with its own team. Each team develops their own terminology, publishes in one set of papers, and the peer reviews are by... themselves.
I don't work at CERN, but that criticism was from someone who does.
They were complaining that they could not understand the papers published by a team down the hall from them. Not on some wildly unrelated area of science, but about the same particles they were studying in a similar manner!
If nobody else can understand the research, if nobody else can reproduce it, then it's not useful science!
Note that this isn't exactly the same as Sabine's criticism of CERN and future supercolliders, but it's related.
I'm surprised by what you say; it is not at all my experience. Are you sure you are not over-interpreting what your friend said, or that your friend's experience was not unusual?
1) People at CERN publish papers in "normal" physics journals, which do the usual peer review. Quite a few of the articles I've peer-reviewed myself were not from my own experiment. There is, of course, also internal review within each collaboration, but its purpose is to improve quality, and it is totally natural and obvious if you want to have a collaboration at all (by definition, a collaboration is a place where people read each other's work and give feedback to each other). But that is totally different from "the work is only reviewed by the collaboration".
2) I've worked ~5 years in one experiment and ~5 years in another, and I did not notice any difference in terminology. In both experiments, I very rapidly met and learned the names of people from other experiments working on similar subjects. I don't know of any workshop or conference where the invited scientists are not from different experiments. During these events, there are a lot of exchanges.
3) What is true, and perhaps the source of your misunderstanding, is that you are strongly advised not to share non-cross-checked material outside the collaboration. The goal is to avoid biasing the independent experiments: if you notice a strange phenomenon that later turns out to be a statistical fluctuation, or if you use a new methodology that later turns out to have unnoticed systematic biases, then mentioning it to the other experiment will "contaminate" them: they may focus their research on it or adopt the flawed methodology. But this applies only to non-cross-checked material, and it makes no sense to pretend it has a negative impact (plenty of scientists, in collaborations or not, throughout history, have preferred not to share preliminary results before gaining good confidence that what they saw is reliable).
4) Do you have an example of something that could not be understood even though it was done down the hall? I don't recall "not being able to understand" (the point of a publication is to explain, so people care about making it understandable). I do recall "harder to understand", but that was often with people from my own collaboration, and the reason was that they needed to use mathematical tools I did not know, and there was not really any other way.
I'm sure there are cases where two groups end up diverging and it makes collaboration more challenging. But I really doubt it is anything but exceptional, and it is something everyone in the collaborations will try to mitigate.
Your comment makes me wonder to what extent outsiders to CERN hold plenty of crazy myths totally disconnected from reality. I guess it is a good example of why people like Hossenfelder are a problem: they feed on these myths and cultivate them.
While we try to make things accessible to the public, the determination of what is "good" is ultimately made by experts.
"The public" has a level of science literacy that is somewhat medieval (as in pre-Newtonian, and increasingly pre-germ theory), and while it's important to maintain political support, it's not reasonable to expect Joe Schmoe to be able to track the latest experimental results from CERN.
In fact, it's not reasonable to expect a very smart lay person to do the same. The problem is basically that the information that gets encoded in papers and public datasets is not spanning! There's a shocking amount of fiddly details that don't get transmitted for one reason or another. Say what you want about how things "should" be done, but that's how they are done. If you want things done differently you can encourage that behavior by rubbing cash on the problem.
> While we try to make things accessible to the public, the determination of what is "good" is ultimately made by experts.
No, it isn't. It's determined by whether the models make accurate predictions. The fact that in our society, science is viewed as an authority, where Scientists can pontificate as "experts" without having to back up their claims with a predictive track record, is a bug, not a feature.
> "The public" has a level of science literacy that is somewhat medieval
The public doesn't care about "science literacy" in terms of understanding the models. Nor does the public have to. If the models make good predictions, that will be obvious to the public if it's something the public cares about.
A good example is GPS. "The public" has no clue how GPS actually works, and doesn't understand all the nuances that had to be carefully considered in order to get it to work as accurately and reliably as it does. Building and maintaining the system requires experts, yes. But knowing that GPS works is simple: does your smartphone show you where you are accurately? The fact that it does is strong evidence that GPS works, since GPS is what your smartphone uses to do that. (Yes, I know there are other things involved as well, like your smartphone having access to accurate maps. Your smartphone being able to tell you accurately where you are is also strong evidence that the people who produced those maps were doing it right.) And "the public" can make this simple observation without having to know anything about the details of how GPS does what it does.
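The "simple test" described above can even be made quantitative. As a toy sketch (the coordinates and the 50 m threshold below are hypothetical, chosen only for illustration), one could compare a phone's reported GPS fix against the surveyed coordinates of a known landmark using the standard haversine great-circle distance:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Hypothetical check: stand at a landmark with known surveyed coordinates
# and compare them against what the phone reports.
known = (48.8584, 2.2945)    # surveyed position (illustrative numbers)
gps_fix = (48.8585, 2.2944)  # position reported by the phone
error_m = haversine_m(*known, *gps_fix)
print(f"GPS error: {error_m:.1f} m")  # a meter-scale error: the prediction holds
```

No knowledge of relativistic clock corrections or satellite geometry is needed to run this check; only the output of the system is being tested.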
> it's not reasonable to expect Joe Schmoe to be able to track the latest experimental results from CERN.
Nor does Joe Schmoe have to. Joe Schmoe doesn't care. The cutting edge physics experiments being done at CERN have no practical impact on anything in anyone's daily life, unless you're one of the people who has to analyze the data.
But if you come and tell Joe Schmoe that hey, this new discovery they just made at CERN means everyone has to suddenly turn their entire lives upside down, then Joe Schmoe is going to want to see the predictive track record that backs that up. And it better be a strong track record, of predictions that affect people's daily lives, not just what tracks are going to be observed in CERN's detectors.
Here's another example: prediction of possible impacts on Earth by comets and asteroids. Astronomers have an extensive track record of being able to predict, years in advance, the trajectories of such objects, with an uncertainty much smaller than one Earth radius--i.e., accurately enough to be able to distinguish an actual impact from a close approach. So if astronomers ever come out in public and say, we're tracking this comet and it's going to hit the Earth 29 years, 3 months, and 7 days from now, and here's the region where it's going to hit, and we'd better start planning to either alter its trajectory or set ourselves up to withstand the hit, yes, they can make that claim credibly because of their track record. But most public claims by scientists, even "experts", don't clear that high bar--and that means the public is perfectly justified in just ignoring them.
> It's determined by whether the models make accurate predictions.
And it's experts who speak the language well enough to understand what is being said. Fortunately, it's not a priesthood linked to your family or a caste or some wildly selective process. All you have to do is spend a few years studying (2-6, depending on the particulars). You can learn the language, and basically that makes you an expert too.
What society do you live in where scientists' expertise is taken on face value and acted on without substantial pushback and criticism? I'd like to live there, maybe.
> means everyone has to suddenly turn their entire lives upside down
This happened. Starting over a century ago, and continuing ever since with increasing loudness, urgency, and accuracy. And yet. The US is making it harder to build solar and wind power.
> The biggest problem I see with "establishment" science today is that it doesn't work this way [i.e., make accurate predictions].
This is a gross over-generalization, imo. I would say that at least the hard sciences are characterized by their extremely accurate predictive models. Are you thinking of string theory specifically? Because that's a minority part of even the field of physics, and exceptional in many ways, so it's not right to generalize from it to the whole of physics, let alone to all of current science.
How can you determine whether it makes accurate predictions? This isn't always as trivial as you make it seem. Even the data's trustworthiness requires proxy measures like provenance and criticism by figures one takes as trustworthy. And even then you have to be able to evaluate the data to determine whether it is predictive, which itself requires skills and domain knowledge.
The idea that we can live without authority is nonsense. We can't. So, when dealing with subjects where we are out of our depth, we must learn ways to discern who is likely to be more trustworthy, and this often requires using proxies. Institutions exist to help make this possible, even if they are not infallible, and they alone do not suffice: basic reasoning and tradition also factor in.