
Comment by krunck

2 days ago

"Scientific disagreements are intricate matters that require the attention of highly trained experts. However, for laypersons to be able to make up their own minds on such issues, they have to rely on proxies for credibility such as persuasiveness and conviction. This is the vulnerability that contrarians exploit, as they are often skilled in crafting the optics and rhetoric to support their case."

Touché.

> Scientific disagreements are intricate matters that require the attention of highly trained experts.

Actually this isn't true, at least as far as anything the public needs to care about is concerned. There is a simple test the public can use for any scientific model: does it make accurate predictions, or not? You don't need to understand how a model works to test that. The model can use whatever intricate math it wants, and whatever other stuff it wants, internally--it could involve reading tea leaves and chicken entrails for all you know. But its output is predictions that you can test against actual experiments.
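The black-box test described above can be sketched in a few lines: you never inspect the model's internals, only whether its outputs fall within some tolerance of real measurements. Everything here (the model, the tolerance, the data) is a purely hypothetical illustration, not any particular scientific model:

```python
def black_box_model(day: int) -> float:
    """Stand-in for any model; its internals are irrelevant to the test."""
    return 10.0 + 0.5 * day  # e.g., a predicted temperature


def passes_prediction_test(model, observations, tolerance=1.0):
    """Accept the model only if every prediction lands within `tolerance`
    of the corresponding real-world measurement."""
    return all(abs(model(x) - measured) <= tolerance
               for x, measured in observations)


# Hypothetical measurements: (input, observed value) pairs.
measurements = [(0, 10.2), (1, 10.4), (2, 11.3)]
print(passes_prediction_test(black_box_model, measurements))  # True
```

The point of the sketch is that `black_box_model` could be swapped for tea leaves or chicken entrails; the test only looks at predictions versus outcomes.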

The biggest problem I see with "establishment" science today is that it doesn't work this way. There is no mechanism for having an independent record that the public can access of predictions vs. reality. It's all tied up in esoteric papers.

  • > There is a simple test the public can use for any scientific model: does it make accurate predictions, or not? You don't need to understand how a model works to test that.

    It's quite obvious from your position on this matter that you're not a practicing scientist, so it's very unfortunate that your position is so assertive, as it's mostly wrong.

    To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes. Most publications involve some form of hypothesis-prediction-experiment-result profile, and it is the training and expertise (and corroboration by other experiments, and time) that help determine which of those papers establish new science, and which ones go out with last week's trash. The findings in these areas are seldom accessible until the field is very advanced and/or in practical use, as with the example of GPS you gave elsewhere.

    > The biggest problem I see with "establishment" science today is that it doesn't work this way. There is no mechanism for having an independent record that the public can access of predictions vs. reality.

    There is; it's called a textbook.

    • > It's quite obvious from your position on this matter that you're not a practicing scientist

      You're correct, I'm not. But I'm also not scientifically ignorant. For example, I actually do understand how GPS works, because I've read and understood technical treatments of the subject. But I also know that I don't have to have any of that knowledge to know that my smartphone can use GPS to tell me where I am accurately.

      In other words, it's quite obvious from your position that you haven't thought through what the test I described actually means.

      > To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes.

      Sure you do. See my examples of GPS and astronomers' predictions of comet trajectories downthread in response to MengerSponge.

      It's true that for predictions of things that the general public doesn't actually have to care about, often it's not really possible to check them without a fairly detailed knowledge of the subject. But those predictions aren't the kind I'm talking about--because they're about things the general public doesn't actually have to care about.

      > There is; it's called a textbook.

      Textbooks aren't independent. They're written by scientists.

      I'm talking about a record that's independent of scientists. For example, being able to verify that GPS works by seeing that your smartphone shows you where you are accurately.

    • An example of how this ideal can go horribly wrong is CERN.

      There's one apparatus (of each type) and each "experiment" ends up with its own team. Each team develops their own terminology, publishes in one set of papers, and the peer reviews are by... themselves.

      I don't work at CERN, but that criticism was from someone who does.

      They were complaining that they could not understand the papers published by a team down the hall from them. Not on some wildly unrelated area of science, but about the same particles they were studying in a similar manner!

      If nobody else can understand the research, if nobody else can reproduce it, then it's not useful science!

      Note that this isn't exactly the same as Sabine's criticism of CERN and future supercolliders, but it's related.


  • While we try to make things accessible to the public, the determination of what is "good" is ultimately made by experts.

    "The public" has a level of science literacy that is somewhat medieval (as in pre-Newtonian, and increasingly pre-germ theory), and while it's important to maintain political support, it's not reasonable to expect Joe Schmoe to be able to track the latest experimental results from CERN.

    In fact, it's not reasonable to expect a very smart lay person to do the same. The problem is basically that the information that gets encoded in papers and public datasets is not spanning! There's a shocking amount of fiddly details that don't get transmitted for one reason or another. Say what you want about how things "should" be done, but that's how they are done. If you want things done differently you can encourage that behavior by rubbing cash on the problem.

    • > While we try to make things accessible to the public, the determination of what is "good" is ultimately made by experts.

      No, it isn't. It's determined by whether the models make accurate predictions. The fact that in our society, science is viewed as an authority, where Scientists can pontificate as "experts" without having to back up their claims with a predictive track record, is a bug, not a feature.

      > "The public" has a level of science literacy that is somewhat medieval

      The public doesn't care about "science literacy" in terms of understanding the models. Nor does the public have to. If the models make good predictions, that will be obvious to the public if it's something the public cares about.

      A good example is GPS. "The public" has no clue how GPS actually works, and doesn't understand all the nuances that had to be carefully considered in order to get it to work as accurately and reliably as it does. Building and maintaining the system requires experts, yes. But knowing that GPS works is simple: does your smartphone show you where you are accurately? The fact that it does is strong evidence that GPS works, since GPS is what your smartphone uses to do that. (Yes, I know there are other things involved as well, like your smartphone having access to accurate maps. Your smartphone being able to tell you accurately where you are is also strong evidence that the people who produced those maps were doing it right.) And "the public" can make this simple observation without having to know anything about the details of how GPS does what it does.

      > it's not reasonable to expect Joe Schmoe to be able to track the latest experimental results from CERN.

      Nor does Joe Schmoe have to. Joe Schmoe doesn't care. The cutting edge physics experiments being done at CERN have no practical impact on anything in anyone's daily life, unless you're one of the people who has to analyze the data.

      But if you come and tell Joe Schmoe that hey, this new discovery they just made at CERN means everyone has to suddenly turn their entire lives upside down, then Joe Schmoe is going to want to see the predictive track record that backs that up. And it better be a strong track record, of predictions that affect people's daily lives, not just what tracks are going to be observed in CERN's detectors.

      Here's another example: prediction of possible impacts on Earth by comets and asteroids. Astronomers have an extensive track record of being able to predict, years in advance, the trajectories of such objects, with an accuracy much smaller than one Earth radius--i.e., accurately enough to be able to distinguish an actual impact from a close approach. So if astronomers ever come out in public and say, we're tracking this comet and it's going to hit the Earth 29 years, 3 months, and 7 days from now, and here's the region where it's going to hit, and we'd better start planning to either alter its trajectory or set ourselves up to withstand the hit, yes, they can make that claim credibly because of their track record. But most public claims by scientists, even "experts", don't achieve that high bar--and that means the public is perfectly justified in just ignoring them.


  • > The biggest problem I see with "establishment" science today is that it doesn't work this way [i.e., make accurate predictions].

    This is a gross over-generalization, imo. I would say the hard sciences, at least, are characterized by their extremely accurate predictive models. Are you thinking of string theory specifically? Because that's a minority part of even the field of physics, and exceptional in many ways, so it's not right to generalize from it to the whole of physics, let alone to all of current science.

  • How can you determine whether it makes accurate predictions? This isn't always as trivial as you make it seem. Even the data's trustworthiness requires proxies such as provenance and the credibility of the figures one takes as trustworthy. And even then you have to be able to evaluate the data to determine whether it is predictive, which itself requires skill and domain knowledge.

    The idea that we can live without authority is nonsense. We can't. So, when dealing with subjects where we are out of our depth, we must learn ways to discern who is likely to be more trustworthy, and this often requires using proxies. Institutions exist to help make this possible, even if they are not infallible, and they alone do not suffice: basic reasoning and tradition also factor in.

This is why science communicators need to master the art of to-scale visualizations and animated diagrams, and to put working code into slides and presentations. Shit shovellers are marked by a smokescreen of words and hand-waving; pictures of real phenomena help separate the wheat from the chaff. It takes real balls to spend time faking graphs, while horseshit sentences are cheap and deniable. Fake data and fake graphs are real offenses with a real record. Mere talk is always weaselly.

The only thing that will fix the mess is accountability. That accountability is the exact opposite of pretty much all algorithmic boosts today: you should get your knob turned down to zero for being a goddamned liar.