Comment by function_seven
4 years ago
Spot on.
I kinda feel sorry for psychology and the related social science fields. They have an immense hurdle to clear when designing experiments, both in the protocol and in the statistical analysis.
50 or 100 years ago, a study participant might have gone in oblivious to the possibility of subterfuge. Totally unaware that the "taste test" they're participating in for the "marketing majors" was really a study on how political party affiliation affects choices between lemon cake and chocolate chip cookies. Or whatever.
But I have a feeling that college students are much more aware of how these things go today. The experiment is tainted from the get-go by all the participants looking for the "real" data being collected.
I know for damn sure that if I'm recruited for an experiment where I'm taking some sort of test and a "fellow student" suggests we cheat, it's really an honesty test. Or maybe if the clock runs out before I'm done, I'm being watched for how I handle stress. Wait, is it kind of cold in here? Ah, they must be gauging performance as a function of comfort.
And of course, study participants are way too often 18-24 year olds who happen to go to college. Such a tiny slice of the general population.
So I could see myself placing bets on the "40%" outcome. I wonder whether the second-guessing we're talking about would go away if the coordinators straight up told the participants, "Look, we're really testing your betting decisions. This coin really has a 60/40 bias. This isn't a ruse. Please treat this info as true; we're not doing deception testing here." (I guess we'd need to study that. :) But if that became the norm, then the absence of that statement would itself flag the deceptive studies.
> I feel sorry for social science experimenters.
>
> And of course, study participants are way too often 18-24 year olds who happen to go to college. Such a tiny slice of the general population.
It gets worse. They're typically 18-24 year olds who happen to go to the same college the researcher works at. So if, for example, it's a large state school, then the population has been selected for having SAT scores in a narrow range: above the cutoff to get into that school, but below the cutoff for more desirable schools.
Now suppose you're doing ability testing. You should expect any pair of unrelated abilities that both help on the SAT to come out inversely correlated, because being good at one thing while still landing in that score range means you have to be worse at something else, and sometimes that something else is the other ability you're measuring. (This is a form of Berkson's paradox: conditioning on a combined score induces a spurious negative correlation between its independent components.)
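The selection effect is easy to demonstrate with a quick simulation. This is just a sketch with made-up numbers: two independent standard-normal "abilities" and an arbitrary admissions band on their sum, not anything from a real study.

```python
import random

random.seed(0)

# Two independent abilities per person; both contribute to a combined
# test score (here simply their sum).
N = 100_000
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

# Whole population: the abilities are independent, correlation near 0.
print(round(corr(people), 2))

# "Admitted to this school, rejected by better ones": keep only people
# whose total falls in a middle band (limits are arbitrary).
band = [(x, y) for x, y in people if 0.5 < x + y < 1.5]
print(round(corr(band), 2))  # clearly negative
```

Within the band, knowing someone is strong on one ability tells you they must be weak on the other (otherwise their total would have cleared the upper cutoff), which is exactly the spurious negative correlation the ability-testing study would pick up.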
Several years ago I remember running into a bunch of popular science articles that I found dubious. I tracked down the underlying paper and decided that its analysis suffered from exactly that flaw.