
Comment by imgabe

6 days ago

Sure. Research by definition deals with areas that are not settled, so different people can have different theories, and they might disregard scholars who don't like their preferred theory. On the other hand, some academics welcome debate and differing viewpoints more than most people.

Like, if you were a physics professor applying to a department where everyone was a string theorist, and your position was that string theory is a bunch of bullshit, you might not get that job. Or you might, if your work is otherwise solid; you never know.

But that's a disagreement about physics, which is a perfectly reasonable thing to evaluate a physics professor on. It's not about how enthusiastically they endorse some ideological dogma that has nothing to do with physics.

If people can be ranked differently as candidates by different universities and the people at them, how can you ever be sure that a person who got the job because of DEI was the worse candidate?

  • There are objective measures like quantity of publications, quality of the journals published in, # of citations, awards won, books published, things like that. Every academic could tell you the top 5 journals in their field that are the most competitive to get published in and the most respected; someone with a lot of publications in those journals would be objectively better than someone with no publications, or with publications in crappy no-name journals that claim they are "peer-reviewed" but basically publish anything that gets submitted.

    We're not talking about roughly equal candidates with similar qualifications and one getting the edge because of race. I'm telling you there are cases where PhD candidates with zero publications, people who have not even finished and defended their dissertation yet, are hired for tenure-track positions over other candidates who have had their degree for several years, published in top journals, won highly competitive fellowships, etc., because universities want someone of a particular race. It's not subtle.

    You may not be able to say that one candidate is the unequivocal best when there are many qualified candidates, but you can definitely say that a particular candidate is unqualified or not even close to other candidates when, for example, they have not published at all.

    • >There are objective measures like quantity of publications, quality of the journals published in, # of citations, awards won, books published, things like that

      None of these are objective measures of quality.

      1. The more papers you write, the more likely you are to be published more often. That depends on time and desire.

      2. Judging the quality of a journal is subjective, so it can't be used as an objective measurement of something else.

      3. If you write a paper that more people have access to, that is about a more popular subject, that is the only paper on its subject, or that is published in a more popular journal, your citations will increase regardless of the paper's quality.

      4. Awards are a subjective judgement.

      Of course, all of these increase the probability of quality, but they're not a guarantee.

      > for example, they have not published at all.

      I don't think anyone applying for a position as a professor hasn't published, since most PhDs require it. This point probably adds more weight, but I think it would rarely come up between candidates for a job.

    • >I'm telling you there are cases where PhD candidates with zero publications, people who have not even finished and defended their dissertation yet, are hired for tenure-track positions over other candidates who have had their degree for several years, published in top journals, won highly competitive fellowships, etc, because universities want someone of a particular race. It's not subtle.

      Give me examples, then, because how could you know this?

    • I've worked in academic publishing for a long while, and I can tell you from experience that:

      - "quantity of publications" is a problem and directly leads to bad science, so is on aggregate a measure of anti-quality

      - "quality of the journals published in" is all in the mind; prestigious journals with high impact factor have been repeatedly found not to have the best research. The rigour of the editing process is more important, but few researchers know that, and importantly they are heavily incentivised by funders to go for high impact factor, completely muddying the waters of who's a good researcher by that metric.

      - number of citations would be a better measure, but unfortunately is directly linked to impact factor, in practice and in perception.

      - awards won, books published: too niche and random to matter much.

      - "every academic could tell you the top 5 journals in their field" haha, no, you'd be as surprised as I was when doing that research.

      Academic publishers have been considering the measuring problem for decades, and no one has found a solution yet.

      There is no good measure of the quality of a paper until many years after publication. It's easy to identify some true positives (high impact, no retraction); it's quasi-impossible by definition to identify false negatives (unfairly ignored papers); and most importantly, this emphasis on prestige research is terribly harmful to Science. Science needs researchers who are happy to replicate studies, people who publish disappointing results, and people who study otherwise unglamorous topics; otherwise Science fails.

      TLDR: measuring how 'good' a researcher is by their prestige is extremely destructive to Science. You can't do that.
