Comment by an0malous
3 months ago
Are you serious? That video is nothing like the tests in The Telepathy Tapes; they have kids who don’t need to be touched at all and who spell completely independently.
I hate this whole “skeptic” culture so much; it’s just as religious as believing things without evidence. You have a preconceived agenda about things you were told are wacky, and you don’t even bother to review the evidence before drawing a conclusion.
There is more rigorous peer-reviewed, published evidence for psychic abilities than in most sociological and biological fields.
The Ganzfeld experiment has been reproduced 78 times by 46 different researchers, but sure, it’s all fake, and a blogger plus one obviously fake strawman Instagram video are suddenly enough evidence for the skeptics who constantly demand the highest standard of academic rigor from the other side.
> There is more rigorous peer-reviewed, published evidence for psychic abilities than in most sociological and biological fields.
If this proved that such things were real, there would be a corporation exploiting it by now.
That is folk logic, not rational thinking.
First, the US and Russian governments did spend decades researching these abilities, and in the US the CIA, DIA, Army, and other agencies all used psychics. Perhaps they are still using them, or maybe they found satellites and drones easier to work with.
The other issue is that psi seems to show strong statistical significance but a low effect size. It’s not a crystal ball; the inconsistent results make it impractical for things like stock trading and espionage.
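To make the distinction concrete, here’s a toy calculation (made-up but representative numbers, a simple binomial model): a hit rate a couple of points above the 25% chance baseline becomes overwhelmingly significant once you pool enough trials, yet it still misses most individual guesses.

```python
# Toy illustration (hypothetical numbers): a tiny edge over chance is
# statistically significant in aggregate but useless on any single trial.
from scipy.stats import binomtest

chance = 0.25        # ganzfeld-style 1-in-4 target guess
true_rate = 0.27     # hypothetical hit rate, two points above chance
n_trials = 100_000   # pooled across many studies

hits = int(true_rate * n_trials)
result = binomtest(hits, n_trials, p=chance, alternative="greater")
print(f"hit rate: {hits / n_trials:.3f}, p-value: {result.pvalue:.2e}")
# The p-value is astronomically small, yet you still miss ~73% of
# guesses: fine for a meta-analysis, useless for picking stocks.
```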
Finally, maybe people do use psychic abilities but lack the introspective ability to recognize it. Where do your thoughts really come from? If you’re a stock trader and you have a hunch or insight about an opportunity, where did it come from? Sure, you likely gathered data and evidence to support your investment, but didn’t you first have a hunch or a fuzzy idea that seemingly came from nowhere?
We have much to learn about consciousness. I’m not asking anyone to accept psi on faith, but this culture of skepticism boils down to “psi isn’t real because psi can’t be real” and I believe being so closed-minded is holding back our understanding of reality.
> First, the US and Russian governments did spend decades researching these abilities, and in the US the CIA, DIA, Army, and other agencies all used psychics. Perhaps they are still using them, or maybe they found satellites and drones easier to work with.
Yeah, and they all stopped using "psychics" because, shockingly, they were worse than useless. (And please don't tell me the US military has somehow managed to keep actual psychics a secret for multiple decades despite people like Trump being involved.)
> but this culture of skepticism boils down to “psi isn’t real because psi can’t be real”
No, it boils down to asking for proof when people make claims. And extraordinary claims require extraordinary proof.
Like, you can believe that stock brokers use psychic powers to get stock tips all you want; people believe in gods and demons and all sorts of things. But if you want this to actually be useful, you need to find some way to demonstrate it in a repeatable way.
I was curious and read through the paper you linked. Here's my shot at rational thinking. A few things stood out:
1. Arbitrary prior
In the peer-review notes on p.26, a reviewer questions the basis of their Bayesian prior: “they never clearly wrote down ... that the theoretical GZ effect size would be Z/sqrt(N) = 0.1”
The authors reply: "The use of this prior in the Bayesian meta-analysis is an arbitrary choice based on the overall frequentist meta-analysis, and the previous meta-analyses e.g. Storm & Tressoldi, 2010."
That's a problem because a bayesian prior represents your initial belief about the true effect before looking at the current data. It's supposed to come from independent evidence or theoretical reasoning. Using the same dataset or past analyses of the same studies to set the prior is just circular reasoning. In other words, they assumed from the start that the true effect size was roughly 0.1, then unsurprisingly "found" an effect size around 0.08–0.1.
2. Publication bias
On p. 10, the authors admit that "for publication bias to attenuate (to "explain away") the observed overall effect size, affirmative results would need to be at least four-fold more likely to be published than non-affirmative results."
In other words, a modest 4x preference for publishing positive results would erase the significance entirely.
They do claim that “the similarity of effect size between the two levels of peer-review add further support to the hypothesis that the ‘file drawer’ is empty.”
But that’s faulty reasoning: publication bias concerns which studies get published at all; comparing conference proceedings with journals only looks at work that was already published.
Additionally, their own inclusion criteria are “peer reviewed and not peer-reviewed studies e.g., published in proceedings excluding dissertations.” They explicitly removed dissertations and other gray literature, the most common home of null findings, which makes substantial publication bias in their dataset all the more likely.
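Here’s a toy file-drawer simulation (my own model and numbers, not their data) showing how a 4x publication preference conjures a positive pooled effect out of studies where the true effect is exactly zero:

```python
# Toy file-drawer simulation: every study tests a TRUE NULL effect,
# but "significant" results are 4x as likely to be published.
import numpy as np

rng = np.random.default_rng(42)
n_studies, n_per_study = 500, 40

published = []
for _ in range(n_studies):
    z = rng.normal(0.0, 1.0)                  # study z-score under the null
    significant = z > 1.645                   # one-sided p < .05
    p_publish = 0.8 if significant else 0.2   # the 4x preference
    if rng.random() < p_publish:
        published.append(z / np.sqrt(n_per_study))  # effect = Z/sqrt(N)

print(f"published {len(published)}/{n_studies} studies, "
      f"pooled effect Z/sqrt(N) = {np.mean(published):.3f}")
# The true effect is exactly zero, yet the published pool reliably
# shows a positive pooled "effect" from this one bias alone.
```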
3. My analysis
With the already tiny effect size they report, Z/sqrt(N) = 0.08 (CI 0.04-0.12) on p.1 and p.7, the above issues are significant. An arbitrary prior and a modest, unacknowledged publication bias could easily turn a negligible signal into an apparently “statistically significant” effect. And because the median statistical power of their dataset (p.10) is only 0.088, nearly all included studies were too weak to detect a real effect even if one existed. In that regime, small analytic or publication biases dominate the outcome.
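For intuition, here’s what power of 0.088 means in practice (a quick check assuming a one-sided z-test; the study sizes are illustrative):

```python
# Power of a one-sided z-test to detect a true effect Z/sqrt(N) = 0.08
# at alpha = .05, across illustrative study sizes.
from scipy.stats import norm

effect, alpha = 0.08, 0.05
z_crit = norm.ppf(1 - alpha)  # ~1.645

for n in (10, 20, 40, 100, 1000):
    power = 1 - norm.cdf(z_crit - effect * n**0.5)
    print(f"N = {n:4d} trials -> power = {power:.3f}")
# At typical study sizes of a few dozen trials, power lands around
# 0.08-0.13, consistent with the paper's median of 0.088: a study that
# size almost never detects a true effect of 0.08 even if one exists.
```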
Under more careful scrutiny, what looks like evidence for psi is just the echo of their own assumptions amplified by selective visibility.