
Comment by kenjackson

2 months ago

The study did show it. The author of this critique properly notes that Table 4 is not an apples-to-apples comparison. The author of the study notes that expanding the pool of women, as used in Table 4, likely brought in a disproportionate number of less talented musicians.

Table 5 does the more apples-to-apples comparison. The critique notes that the sample size is too small, but it captures 445 blind auditions by women, 816 blind auditions by men, 599 non-blind auditions by women, and 1,102 non-blind auditions by men. That's certainly sufficient for a study like this.
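
As a rough sanity check on whether counts of that size can detect gaps of the magnitude being discussed, here is a minimal sketch of a two-proportion z-test. The audition counts are the ones above; the advancement rates are hypothetical placeholders, not figures from the paper.

    # Sketch only: the counts are from the comment above, but the
    # advancement rates are HYPOTHETICAL, chosen to illustrate the test.
    from math import sqrt, erf

    def two_proportion_z(p1, n1, p2, n2):
        """Two-sided z-test for the difference between two proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # two-sided p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Blind vs. non-blind auditions by women (counts from the comment).
    n_blind_women, n_nonblind_women = 445, 599
    # Hypothetical advancement rates, for illustration only.
    rate_blind, rate_nonblind = 0.30, 0.20

    z, p = two_proportion_z(rate_blind, n_blind_women,
                            rate_nonblind, n_nonblind_women)
    print(f"z = {z:.2f}, p = {p:.4f}")  # with these Ns, a ~10-point gap is detectable

With samples in the hundreds per cell, a difference on the order of ten percentage points comes out clearly significant under these assumptions, which is the sense in which the sample sizes are not obviously "too small."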

The study also reflects how, when a population feels there is less bias against it in a system, its members are more likely to participate -- even if that means the average level of "merit" might go down, those who make it through the filter will better reflect actual meritocracy -- and that's what this study showed as well.

No, it doesn't. This is a dramatic reach and a complete misunderstanding of the stats. The data in Table 5 is not statistically significant.

If you go down to Table 6 (which is also incredibly weak), it shows the opposite: men are advancing at a higher rate than women in blind auditions.

Andrew Gelman reviewed the link as well and agreed:

https://statmodeling.stat.columbia.edu/2019/05/11/did-blind-...

  • Table 5 is stat sig. There's no p-value given, but the effect sizes are large. The only place it's not is the semifinal and final rounds, with their smaller sample sizes.

    And Table 6 shows blind auditions significantly increased the chances of women advancing from the preliminary round and winning in the final round. However, women were less likely to advance past the semifinals when auditions were blind. Still a net win overall.

    Gelman is focused on the "several fold" and "50%" claims it made. But the 11.6- and 14.8-point jumps reported in the paper are supported by its data (see the arithmetic sketch below for how a point jump relates to a fold change).
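
Whether a percentage-point jump also counts as a "several fold" increase depends entirely on the base rate, which neither comment states. The sketch below only illustrates that arithmetic: the 11.6- and 14.8-point figures are from the comment above, while the base rates are hypothetical.

    # Sketch only: point jumps are from the comment above; base rates
    # are HYPOTHETICAL, used to show how small base rates turn a modest
    # point jump into a large fold change.
    for base in (0.05, 0.10, 0.20):      # hypothetical pre-jump rates
        for jump in (0.116, 0.148):      # point jumps cited above
            fold = (base + jump) / base
            print(f"base {base:.0%} + {jump * 100:.1f} points -> {fold:.1f}x")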