Comment by tialaramex

4 years ago

To be fair, as a participant in psychology experiments I go in aware that it's plausible, even likely, that I'm being misled about what's really going on. That's even necessary in some experiments. Maybe I'm not technically lied to, but if deliberately engineering a false impression is the goal, psychologists are exactly the people to do it in a controlled experiment. The experimenters aren't (ethically) allowed to cause you harm, and they'll probably tell you exactly what was really going on afterwards, at least if you ask, but during the experiment everything is potentially suspect. Maybe the task you're focused on is just a distraction and they really care whether you notice the clocks in the room are running fast, so that "five minutes" to do the task is really only 250 seconds - but equally, maybe the apparent "time pressure" to complete the task is the distraction and they really care whether you lie about completing it properly when given an opportunity to cheat.

So if the experimenter in a psych experiment tells me the coin is biased 60% heads, I don't consider that the same way I would if the friend I play board games with says it.

As a result, chances are my first few dozen bets go toward confirming this unusual claim about the world. Biased coins are hard to make, so is this coin really biased? Maybe I try fifty bets in rapid succession, $1 on heads each time. Apparently that's expected to take about five minutes of my half an hour, and before that's done I won't feel comfortable even assuming it's really 60% heads.

And at the end of those five minutes I have, on average, turned $25 into $35, and I feel comfortable either that it's really 60% heads or that I can't tell what's wrong.
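As a quick sanity check of that arithmetic, here's a hypothetical Python sketch (the $25 bankroll, fifty $1 bets, and 60% bias are just the numbers from above):

    import random

    TRIALS, BETS, P_HEADS = 100_000, 50, 0.6
    total_final, ambiguous = 0.0, 0
    for _ in range(TRIALS):
        heads = sum(random.random() < P_HEADS for _ in range(BETS))
        total_final += 25 + heads - (BETS - heads)  # win $1 per head, lose $1 per tail
        ambiguous += heads <= 27                    # a count a fair coin also reaches often
    print(f"average final bankroll: ${total_final / TRIALS:.2f}")  # ~$35.00
    print(f"sessions with <= 27 heads: {ambiguous / TRIALS:.0%}")  # ~24%

The expected profit is 50 × $0.20 = $10, so $25 becomes $35 on average. But about a quarter of 50-flip runs produce a heads count a fair coin could plausibly have produced too, which is why fifty flips alone don't settle the question.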

Now, why gamble on tails? Well, like I said, psychologists intentionally mislead you during experiments. Maybe the experimenter tells you it's 60% likely to be heads. If my board-game friend told me that, I'd believe it's 40% likely to be tails, because that's logical; but when an experimenter tells me that, I wonder if it's also 60% likely to be tails if I bet on tails, and I might be tempted to check.

Spot on.

I kinda feel sorry for psychology and the related social science fields. They have an immense hurdle to clear when designing experiments, in both protocol and statistical analysis.

50 or 100 years ago, a study participant might have gone in oblivious to the possibility of subterfuge, totally unaware that the "taste test" they were participating in for the "marketing majors" was really a study on how political party affiliation affects choices between lemon cake and chocolate chip cookies. Or whatever.

But I have a feeling that college students are much more aware of how these things go today. The experiment is tainted from the get-go by all the participants looking for the "real" data being collected.

I know for damn sure that if I'm recruited for an experiment where I'm taking some sort of test, and a "fellow student" suggests we cheat, this is an honesty test. Or maybe if the clock runs out before I'm done, I'm being watched for how I handle stress. Wait, is it kind of cold in here? Ah, they must be gauging performance as a function of comfort.

And of course, study participants are way too often 18-24 year olds who happen to go to college. Such a tiny slice of the general population.

So I could see myself placing bets on the "40%" outcome. I wonder whether the coordinators straight up telling the participants, "Look, we're really testing your betting decisions. This coin really has a 60/40 bias. This isn't a ruse. Please treat this info as true; we're not doing deception testing here," would eliminate the kind of second-guessing we're talking about. (I guess we'd need to study that. :) But if that became a norm, it would further highlight the deceptive tests whenever that statement was missing.

I feel sorry for social science experimenters.

  • And of course, study participants are way too often 18-24 year olds who happen to go to college. Such a tiny slice of the general population.

    It gets worse. Typically they're 18-24 year olds who happen to go to the same college the researcher works at. So, for example, if it's a large state school, it's a population selected for having SAT scores in a particular range: above the cutoff to get into the school, but below the cutoff for more desirable schools.

    Now suppose that you're doing ability testing. You should expect any pair of unrelated abilities that both help on the SAT to come out inversely correlated, because being good at one thing while still landing in that score range means you have to be worse at something else. And sometimes that something else is the other ability you're looking at. (The sketch below illustrates the effect.)

    Several years ago I remember running into a bunch of popular science articles that I found dubious. I tracked down the paper and decided that their analysis suffered from exactly that flaw.
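    Here's a tiny simulation of that selection effect (hypothetical Python; the two abilities, the score model, and the cutoffs are made up for illustration):

        import random, statistics  # statistics.correlation needs Python 3.10+

        # Two unrelated abilities; admission depends on their sum (an SAT-ish score).
        population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

        # Keep only people in a band: above this school's cutoff,
        # but below the cutoff for more desirable schools.
        admitted = [(a, b) for a, b in population if 0.5 < a + b < 1.5]

        print(statistics.correlation(*zip(*population)))  # ~0.0: independent overall
        print(statistics.correlation(*zip(*admitted)))    # strongly negative in the band

    The abilities are uncorrelated in the population, but among the admitted band the correlation comes out strongly negative; the range restriction alone manufactures the inverse relationship (a selection effect sometimes called Berkson's paradox).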

Maybe once you've started to perceive the meta-patterns across psych experiments, you've taken too many tests to be a good subject.

"I wonder if it's also 60% likely to be Tails if I bet on Tails, and I might be tempted to check."

Only if you were clueless, or perhaps if the experimenter said "if you bet on heads it has a 60% chance of winning". With the tails case left unstated, you might forget that the coin has no knowledge of how you bet, which makes any outcome other than a 60% chance of losing a bet on tails impossible.

Even worse, the experimenters didn't provide real coins. They just sent around links to a website that they said was simulating a biased coin. Participants presumably had no way to know whether the flips were really 60% biased towards heads, whether the results were truly independent from one flip to the next, or even whether their bet might affect the outcome. (Given a log of the flips, at least some of that is checkable; see the sketch after the reply below.)

  • All those sources of uncertainty about the actual probabilities are, while in some cases not typical of a real coin (although uncertainty about a bias one has merely been told about certainly is), fairly typical of the real-world situations people face, so I'm not at all certain that they invalidate applying the results to real-world situations.
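For what it's worth, if the site exposed its flip history, a participant could sanity-check the first two concerns. A hypothetical Python sketch (`flips` stands in for a downloaded log, which the study may or may not have offered):

    import random

    # Stand-in for the site's flip log (True = heads); here we just fabricate one.
    flips = [random.random() < 0.6 for _ in range(1000)]

    print(f"observed heads rate: {sum(flips) / len(flips):.3f}")  # should hover near 0.600

    # Crude independence check: does the previous flip predict the next one?
    after_heads = [b for a, b in zip(flips, flips[1:]) if a]
    print(f"P(heads | previous heads): {sum(after_heads) / len(after_heads):.3f}")

If the flips are biased-to-spec and independent, both numbers should sit near 0.6, give or take sampling noise of roughly ±0.03 at a thousand flips. Whether your bet affects the outcome is harder to test without betting both ways and comparing.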

Biased coins are *impossible* to make if the coin is flipped, not spun.

I doubt any story about biased coins in the real world.

  • If the coin were made from a thin magnet and flipped onto a weakly magnetic plate, couldn't you bias the result? If the landing pad were a strong magnet, you could trivially make it a "100% heads" coin. Just weaken the magnetic field so it's not strong enough to flip a coin lying flat at rest, but has enough oomph to take a coin landing near its edge to the preferred result.

    • If you don't flip the coin within any reasonable definition of flip, sure.

      But if you flip a coin and it turns about N times, you can't make the sum (over all k) of the probabilities of N+2k turns substantially larger than the sum of the probabilities of N+2k+1 turns.
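      Put differently: a weight can shift how many turns are likely, but heads vs. tails is the parity of an uncertain count. A toy simulation of that parity argument (hypothetical Python; the turn-count model is invented for illustration):

          import random

          # Outcome = parity of the number of half-turns. A bias (or a skilled
          # thrower) can shift the mean number of turns, but real flips are noisy.
          def p_heads(mean_turns, noise, trials=100_000):
              even = sum(round(random.gauss(mean_turns, noise)) % 2 == 0
                         for _ in range(trials))
              return even / trials

          for noise in (0.2, 0.5, 1.0, 2.0):
              print(noise, round(p_heads(20.3, noise), 3))  # -> ~0.84, ~0.61, ~0.50, ~0.50

      Once the number of turns is uncertain by more than about half a turn, the parity washes out to 50/50 no matter where the mean sits, which is why weighting a genuinely flipped coin doesn't bias it.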

Sometimes an experiment to see if you can go five minutes without eating the marshmallow is just an experiment to see if you can go five minutes without eating the marshmallow, and not a trick to see what happens if they give you three marshmallows after eating the first one.

  • Sometimes, but they have a habit of lying about the purpose.

    • Yes, this is what every very smart person who underperforms or behaves illogically in a study says. Well, actually, I didn't choose wrong, I was testing the experiment. I chose to eat the marshmallow because I wanted to force them to reveal what would happen next, and then they told me the experiment was over, exactly as I predicted. I win again.

Here's a related yet totally different take: your comment flawlessly demonstrates why sufficiently intelligent people must be weeded out of these experiments (or at least out of the results). And that in turn helps explain why we end up with people who bet tails.

(Note that the thrill of gambling is another explanation; I'm not claiming "those people are less intelligent, it's the only explanation" but rather "a bias against a certain kind of intelligence could lead to an increase in the observed outcome".)