A flawed paper in management science has been cited more than 6k times

15 hours ago (statmodeling.stat.columbia.edu)

I developed and maintain a large and very widely used open source agent-based modeling toolkit. It's designed to be highly efficient: that's its calling card. But it's old: I released its first version around 2003 and have been updating it ever since.

Recently I was made aware by colleagues of a publication by the authors of a new agent-based modeling toolkit in a different, hipper programming language. They compared their system to others, including mine, in a big checklist of who's better at what, and, no surprise, theirs came out on top. But digging deeper, it quickly became clear that they didn't understand how to run my software correctly; and in many other places they bent over backwards to cherry-pick and made a lot of bold and completely wrong claims. Correcting the record would place their software far below mine.

Mind you, I'm VERY happy to see newer toolkits which are better than mine -- I wrote this thing over 20 years ago after all, and have since moved on. But several colleagues demanded I take it up with the journal. After a lot of back-and-forth, however, it became clear that the journal's editor was too embarrassed and didn't want to require a retraction or revision. And the authors kept coming up with excuses for their errors. So the journal quietly dropped the complaint.

I'm afraid that this is very common.

  • A while back I wrote a piece of (academic) software. A couple of years ago I was asked to review a paper prior to publication, and it was about a piece of software that did roughly the same thing as mine. They had benchmarked against a set of older software, including mine, and of course they found that theirs was the best. However, their testing methodology was fundamentally flawed, not least because there is no "true" answer that the software's output can be compared to. So they had used a different process to produce a "truth", trained their software (machine learning, of course) to produce results matching this (very flawed) "truth", and then of course their software was the best because it was the one that produced results closest to that "truth", whereas the other software might have been closer to the actual truth.

    I recommended that the journal not publish the paper, and gave them a list of improvements to pass on to the authors to be made before re-submitting. The journal agreed with me and rejected the paper.

    A couple of months later, I saw it had been published unchanged in a different journal. It wasn't even a lower-quality journal; if I recall correctly, the impact factor was actually higher than the original journal's.

    I despair of the scientific process.

    • If it makes you feel any better, the problem you’re describing is as old as peer review. The authors of a paper only have to get accepted once, and they have a lot more incentive to do so than you do to reject their work as an editor or reviewer.

      This is one of the reasons you should never accept a single publication at face value. But this isn’t a bug — it’s part of the algorithm. It’s just that most muggles don’t know how science actually works. Once you read enough papers in an area, you have a good sense of what’s in the norm of the distribution of knowledge, and if some flashy new result comes over the transom, you might be curious, but you’re not going to accept it without a lot more evidence.

      This situation is different, because it’s a case where an extremely popular bit of accepted wisdom is both wrong, and the system itself appears to be unwilling to acknowledge the error.

    • It seems that the failure of the scientific process is 'profit'.

      Schools should be using these kinds of examples in order to teach critical thinking. Unfortunately the other side of the lesson is how easy it is to push an agenda when you've got a little bit of private backing.

    • Many people do not know that the Impact Factor is gameable, and unethical publications have gamed it, so a higher IF may or may not indicate higher prominence. Use Scimago journal rankings for scores that are harder to game.

  • If you’re the same Sean Luke I’m thinking of:

    I was an undergraduate at the University of Maryland when you were a graduate student there in the mid nineties. A lot of what you had to say shaped the way I think about computer science. Thank you.

  • This reminds me of my former colleague who asked me to check some code from a study (I did not know at the time that it had been published); I told him I hoped he hadn't written it, since it likely produced the wrong results. They claimed some process was too complicated to do because it was supposedly worse than O(2^n) in complexity, so they made a major simplification of the problem and took that as the truth in their answer. In the end, the original algorithm was just quadratic, not worse; the data set could easily be processed in minutes (not days, as claimed); and the result did not support their conclusions one tiny bit.

    Our conclusion was to never trust psychology majors with computer code. And as with any other field of expertise, they should at the very least have shown their idea and/or code to some CS majors before publishing.

  • When I was a grad student I contacted a journal to tell them my PI had falsified their data. The journal never responded. I also contacted my university's legal department. They invited me in for an hour, said they would talk to me again soon, and never spoke to me or responded to my calls again after that. This was in a Top-10-in-the-USA CS program. I have close to zero trust in academia. This is why we have a "reproducibility crisis".

    • PSA for any grad student in this situation: get a lawyer, ASAP, to protect your own career.

      Universities care about money and reputation. Individuals at universities care about their careers.

      With exceptions of some saintly individual faculty members, a university is like a big for-profit corporation, only with less accountability.

      Faculty bring in money, are strongly linked to reputation (scandal news articles may even say the university name in headlines rather than the person's name), and faculty are hard to get rid of.

      Students are completely disposable: there will always be undamaged replacements standing by, and turnover means that soon hardly anyone at the university will even have heard of the student or the internal scandal.

      Unless you're really lucky, the university's position will be to suppress the messenger.

      But if you go in with a lawyer, the lawyer may help your whistleblowing to be taken more seriously, and may also help you negotiate a deal to save your career. (For example, you may need the university's/department's help in switching advisors gracefully, with funding, even as the uni/dept is trying to minimize the number of people who know about the scandal.)

  • > it became clear that the journal's editor was too embarrassed

    How sad. Admitting and correcting a mistake may feel difficult, but it makes you credible.

    As a reader, I would have much greater trust in a journal that solicited criticism and readily published corrections and retractions when warranted.

    • Unfortunately, academia is subject to the same sorts of social things that anything else is. I regularly see people still bring up a hoax article sent to a journal in 1996 as a reason to dismiss the entire field that one journal publishes in.

      Personally, I would agree with you. That's how these things are supposed to work. In practice, people are still people.

  • I think the publish-or-perish academic culture makes academia extremely susceptible to glossing over things like this - especially for statistical analysis. Sharing data, algorithms, code and methods for scientific publications will help. For papers above a certain citation count, which makes them seem "significant", I'm hoping Google Scholar can provide an annotation of whether the paper is reproducible and to what degree. While it won't avoid situations like what the author is talking about, it may force journal editors to take rebuttals and revisions more seriously.

    From the perspective of the academic community, there will be lower incentive to publish incorrect results if data and code are shared.

  • I'll take the occasion to say that I helped make/rewrite a comparison between various agent-based modelling software packages at https://github.com/JuliaDynamics/ABMFrameworksComparison. I'm not sure if it represents all of them fairly enough, but if anyone wants to chime in to improve the code for any of the frameworks involved, I would be really happy to accept any improvement.

  • Is this the kind of thing that retractions are typically issued for, or would it simply be your responsibility to submit a new paper correcting the record? I don't know how these things work. Thanks.

Nowadays, high citation numbers don't mean what they used to. I've seen too many highly cited papers with issues that keep getting referenced, probably because people don't really read the sources anymore and just copy-paste the citations.

On my side-project todo list, I have an idea for a scientific service that overlays a "trust" network on the citation graph. Papers that uncritically cite other work with well-known issues would get tagged as "potentially tainted". Authors and institutions that accumulate too many such sketchy works would get a similar label. Over time this would provide a useful additional signal beyond raw citation numbers. You could also look for citation rings and tag them. I think that could be quite useful, but it requires a bit of work.
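
A minimal sketch of that overlay in Python (the paper IDs, the seed set of known-problematic papers, and the author-flagging threshold are all hypothetical placeholders):

    from collections import defaultdict, deque

    def propagate_taint(cites, seed_tainted):
        """Mark papers as 'potentially tainted' if they (transitively) rest on
        known-problematic work. `cites` maps each paper to the papers it cites."""
        tainted = set(seed_tainted)
        cited_by = defaultdict(set)          # reverse edges: who cites whom
        for paper, refs in cites.items():
            for ref in refs:
                cited_by[ref].add(paper)
        queue = deque(seed_tainted)          # breadth-first from the known-bad seeds
        while queue:
            bad = queue.popleft()
            for citer in cited_by[bad]:
                if citer not in tainted:
                    tainted.add(citer)
                    queue.append(citer)
        return tainted

    def flag_authors(author_papers, tainted, threshold=3):
        """Flag authors who have accumulated too many potentially tainted works."""
        return {a for a, papers in author_papers.items()
                if sum(p in tainted for p in papers) >= threshold}

    # Hypothetical toy data: P2 builds on the known-bad P1, and P3 builds on P2.
    cites = {"P2": {"P1"}, "P3": {"P2"}, "P4": set()}
    print(propagate_taint(cites, {"P1"}))    # {'P1', 'P2', 'P3'}

This naive version taints every citer; as replies below note, real propagation would have to weigh how each citation is actually used.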

  • I explored this question a bit a few years ago, when GPT-3 was brand new. It's tempting to look for technological solutions to social problems. It was during COVID, so public health papers were the focus.

    The idea failed a simple sanity check: just going to Google Scholar, doing a generic search, and reading randomly selected papers from the past 15 years or so. It turned out most of them were bogus in some obvious way. A lot of ideas for science reform take it as axiomatic that the bad stuff is rare and just needs to be filtered out. Once you engage with a field's literature in a systematic way, it becomes clear that it's more like searching for diamonds in the rough than filtering out occasional corruption.

    But at that point you wonder, why bother? There is no alchemical algorithm that can convert intellectual lead into gold. If a field is 90% bogus then it just shouldn't be engaged with at all.

    • There is in fact a method, and it got us quite far until we abandoned it for the peer review plus publish or perish death spiral in the mid 1900s. It's quite simple:

      1) Anyone publishes anything they want, whenever they want, as much or as little as they want. Publishing does not say anything about your quality as a researcher, since anyone can do it.

      2) Being published doesn't mean it's right, or even credible. No one is filtering the stream, so there's no cachet to being published.

      We then let memetic evolution run its course. This is the system that got us Newton, Einstein, Darwin, Mendeleev, Euler, etc. It works, but it's slow, sometimes ugly to watch, and hard to game, so some people would much rather use the "Approved by a Council of Peers" nonsense we're presently mired in.

    • I think the solution is very simple: remove the citation metric. Citations don't mean correctness. What we want is correctness.

  • Interesting idea. How do you distinguish between critical and uncritical citation? It’s also a little thorny—if your related work section is just describing published work (which is a common form of reviewer-proofing), is that a critical or uncritical citation? It seems a little harsh to ding a paper for that.

    • That's one of the issues that causes a bit of work. Citations would need to be judged in context. Let's say paper X is nowadays known to be tainted. If the tainted work is cited just for completeness, it's not an issue, e.g. "the method has been used in [a,b,c,d,X]". If the tainted work is cited critically, even better, e.g. "X claimed to show that..., but Y and Z could not replicate the results". But if it is just taken for granted at face value, then the taint label should propagate, e.g. "...has previously been proved by X and thus our results are very important...".
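
      As a rough sketch of that rule (extending the earlier example; the context labels are made up, and classifying citation contexts automatically is the genuinely hard part):

          def taint_propagates(citation_context, cited_is_tainted):
              """Decide whether a citation passes taint on to the citing paper.
              `citation_context` is 'completeness', 'critical', or 'face_value'."""
              if not cited_is_tainted:
                  return False
              # "the method has been used in [a,b,c,d,X]"        -> completeness
              # "X claimed ..., but Y and Z could not replicate"  -> critical
              # "...has previously been proved by X, thus ..."    -> face_value
              return citation_context == "face_value"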

    • "Uncritically" might be the wrong criteria, but you should definitely understand the related work you are citing to a decent extent.

  • Going to conferences and seeing researchers who've built a career doing subpar (sometimes blatantly 'fake') work has made me grow increasingly wary of experts. The worst part is that lots of people just seem to go along with it.

    Still I'm skeptical about any sort of system trying to figure out 'trust'. There's too much on the line for researchers/students/... to the point where anything will eventually be gamed. Just too many people trying to get into the system (and getting in is the most important part).

    • The current system is already getting gamed. There's already too much on the line for researchers/students, so they don't admit any wrongdoing or retract anything. What's the worst that could happen by adding a layer of trust to the h-index?

  • Maybe there should be a different way to calculate the h-index, where for an h-index of n you also need n replications.
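
    One reading of that proposal, as a minimal sketch (the per-paper replication counts are assumed to exist somewhere, which today they mostly don't):

        def replication_h_index(papers):
            """Largest n such that at least n papers each have >= n citations
            and >= n independent replications."""
            scores = sorted((min(p["citations"], p["replications"]) for p in papers),
                            reverse=True)
            h = 0
            for i, s in enumerate(scores, start=1):
                if s >= i:
                    h = i
                else:
                    break
            return h

        papers = [
            {"citations": 120, "replications": 4},
            {"citations": 45,  "replications": 2},
            {"citations": 30,  "replications": 0},
        ]
        print(replication_h_index(papers))  # 2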

Pretty much all fields have shit papers, but if you ever feel the need to develop a superiority complex, take a vacation from your STEM field and have a look at what your university offers under the "business"-anything label. If anyone in those fields manages to produce anything of quality, they're defying the odds and should be considered one of the greats along the line of Euclid, Galileo Galilei, or Isaac Newton - because they surely didn't have many shoulders to stand on either.

  • This is exactly how I felt when studying management as part of ostensibly an Engineering / Econ / Management degree.

    When you added it up, most of the hard parts were Engineering, and a bit Econ. You would really struggle to work through tough questions in engineering, spend a lot of time on economic theory, and then read the management stuff like you were reading a newspaper.

    Management you could spot a mile away as being soft. There's certainly some interesting ideas, but even as students we could smell it was lacking something. It's just a bit too much like a History Channel documentary. Entertaining, certainly, but it felt like false enlightenment.

  • I suppose it's to be expected, the business department is built around the art of generating profit from cheap inputs. It's business thinking in action!

> Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.

And from the comments:

> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.

Sometimes people rely on fewer than one non-replicable study: they invent a study and use that! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study suggests that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself cannot find any evidence that such a study exists:

https://ask.library.harvard.edu/faq/82314

  • Definitely ignore single studies, no matter how prestigious the journal or numerous the citations.

    Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., a new finding about memory comes out, my field is emotion, I might do a new study looking at how emotion and your memory finding interact.)

    If the effect is replicable, it will end up used in other studies (subject to randomness and the file drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research off it.

    A good advisor will be able to warn you off lost causes like this.

The root of the problem is referred to implicitly: publish or perish. To get tenure, you need publications, preferably highly cited, and money, which comes from grants that your peers (mostly from other institutions) decide on. So the mutual back-scratching begins, and the publication mill keeps churning out papers whose main value is to the careers of the author and, through citation, of influential peers, truth be damned.

  • Citations being the only metric is one problem. Maybe an improved rating/ranking system would be helpful.

    Ranking 1 to 3, with 1 being the best and 3 the bare minimum for publication:

    3. Citations only.

    2. Citations + full disclosure of data.

    1. Citations + full disclosure of data + replicated.

  • The same dynamics from school carry over into adulthood: early on it’s about grades and whether you get into a “good” school; later it becomes the adult version of that treadmill: publish or perish.

The problem is partly how confirmatory statistics work, and partly how journals work. Most journals wouldn’t publish "we really tried very hard to find significance that x causes y but found nothing. Probably, and contrary to our prior beliefs, y is completely independent of x."

Even if nobody cheated and massaged data, we would still have studies that do not replicate on new data. 95% confidence means that one in twenty studies of a true null effect still finds an "effect" that is only noise. Reporting failed hypothesis tests would really help to find these cases.
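
A quick simulation of that point, assuming honest analyses of a true null effect (needs numpy and scipy; the sample sizes are arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_group = 10_000, 50
    false_positives = 0

    for _ in range(n_studies):
        # Both groups are drawn from the SAME distribution: the true effect is zero.
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:                     # "significant" purely by chance
            false_positives += 1

    print(false_positives / n_studies)   # ~0.05, i.e. roughly one study in twenty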

So pre-registration helps, and it would also help to establish the standard that everything needed to replicate must be published, if not in the article itself, then in an accompanying repository.

But in the brutal fight for promotion and resources, of course labs won’t share all their tricks and process knowledge. The same problem arises if there is an interest in using the results commercially. E.g., in EE the method is often described in general terms, but crucial parts of the code or circuit design are held back.

There is a surprisingly large amount of bad science out there. And we know it. One of my favourite writeups on the subject: John P. A. Ioannidis, "Why Most Published Research Findings Are False".

https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/pdf/pmed.00...

  • This is a great paper but, in my experience, most people in tech love this paper because it allows them to say "To hell with pursuing reality. Here is MY reality".

  • John Ioannidis is a weird case. His work on the replication crisis across many domains was seminal and important. His contrarian, even conspiratorial take on COVID-19 not so much.

    • He made a famous career, becoming a professor and a director at Stanford University, out of meta-research on the quality of other people's research and critiques of the methodology of other people's studies. Then during Covid he tried to do a bit of original empirical research of his own, and his own methods and statistical data analysis were even worse than what he had critiqued in other people's work.

    • Ugh, wow, somehow I missed all this. I guess he joins the ranks of the scientists who made important contributions and then leveraged that recognition into a platform for unhinged diatribes.

> I’ve been in the car with some drunk drivers, some dangerous drivers, who could easily have killed people: that’s a bad thing to do, but I wouldn’t say these were bad people.

If this isn't bad people, then who can ever be called bad people? The word "bad" loses its meaning if you explain away every bad deed by such people as something else. Putting other people's lives at risk by deciding to drive when you are drunk sounds like very bad people to me.

> They’re living in a world in which doing the bad thing–covering up error, refusing to admit they don’t have the evidence to back up their conclusions–is easy, whereas doing the good thing is hard.

I don't understand this line of reasoning. So if people do bad things because they know they can get away with it, they aren't bad people? How does this make sense?

> As researchers they’ve been trained to never back down, to dodge all criticism.

Exactly the opposite is taught. These people are deciding of their own accord not to back down and admit wrongdoing. Not because of some "training".

  • Labelling a person as "bad" is usually black-and-white thinking. It's too reductive; most people are both good and bad.

    > because they know they can get away with it

    the point is that the paved paths lead to bad behavior

    well designed systems make it easy to do good

    > Exactly the opposite is taught.

    "trained" doesn't mean "taught". most things are learned but not taught

  • As writers often say: there’s no such thing as a synonym.

    “That’s a bad thing to do…”

    Maybe should be: “That’s a stupid thing to do…”

    Or: reckless, irresponsible, selfish, etc.

    In other words, maybe it has nothing to do with morals and ethics. Bad is kind of a lame word with limited impact.

    • It's a broad and simple word but it's also a useful word because of its generality. It's nice to have such a word that can apply to so many kinds and degrees of actions, and saves so many pointless arguments about whether something is more narrowly evil, for example. Applied empirically to people, it has predictive power and can eliminate surprise, because the actions of bad people are correlated with bad actions in many different ways. A bad person does something very stupid today, very irresponsible tomorrow, and will unsurprisingly continue to do bad things of all sorts, even if they steer clear of some kinds.

  • When everyone else does it, it's extremely hard to be righteous. I did it long ago... everyone did it back then. We knew the danger and thought we were different; we thought we could drive safely no matter our state. Lots of tragedies happen because people disastrously misjudge their own abilities, and when alcohol is involved, doubly so. They are not bad people; they're people who live in a flawed culture where alcohol is seen as acceptable and who cannot avoid falling for the many human fallacies, in this case caused by the Dunning-Kruger effect. If you think people who fall for fallacies are bad, then being human is inherently bad in your opinion.

    • I don't think being human is inherently bad. But you have to draw the line to consider someone as "bad" somewhere, right? If you don't draw a line, then nobody in the world is a bad person. So my question is where exactly is that line?

      You guys are saying that drink driving does not make someone a bad person. Ok. Let's say I grant you that. Where do you draw the line for someone being a bad person?

      I mean, with this line of reasoning you can "explain away" every bad deed, and then nobody is a bad person. So do you guys actually consider anyone to be a bad person, and what did they have to do to cross that line where you can't explain away their bad deeds anymore and you really consider them to be bad?

The webpage of the journal [1] only reports 109 citations of the original article; this counts only "indexed" journals, which are not guaranteed to be ultra high quality but at least filter out the worst "pay us to publish your crap" journals.

ResearchGate [2] says 3936 citations. I'm not sure what they are counting; probably all the PDFs uploaded to ResearchGate.

I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.

Quoted in the article:

>> 1. Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.

> All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein's articles would be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is about one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964011

[2] https://www.researchgate.net/publication/279944386_The_Impac...

  • > I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.

    The number appears to be from Google Scholar, which currently reports 6269 citations for the paper.

  • > All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein's articles would be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is about one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.

    Judging from PubPeer, which allows people to post all of the above anonymously and with minimal moderation, this is not an issue in practice.

Sounds like the Watergate Scandal. The crime was one thing, but it was the cover-up that caused the most damage.

Once something enters The Canon, it becomes “untouchable,” and no one wants to question it. Fairly classic human nature.

> "The most erroneous stories are those we think we know best -and therefore never scrutinize or question."

-Stephen Jay Gould

Being practical, and understanding the gamification of citation counts and research metrics today, instead of going for a replication study and trying to prove a negative, I'd go for contrarian research which shows a different result (one that possibly excludes the original result, or at least does not confirm it).

This probably has a bigger chance of being published, as you are providing a "novel" result instead of fighting the get-along culture (which, honestly, is present in the workplace as well). But ultimately such studies are harder to do (research-wise, though not politically), because they likely mean you have figured out an actual thing.

Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.

Whether we can work this out in research in a proper way is linked to whether we can work it out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch positions in the private sector than in research, so you'll have more people there who aren't afraid to call out a bad job; and, well, there's also the profit that needs to pay your salary.

  • Most of these studies get published based on elaborate constructions of essentially t-tests for differences in means between groups. Showing the opposite means showing no statistical difference, which is almost impossible to get published, for very human reasons.

    • My point was exactly not to do that (which is really just an unsuccessful replication), but instead to find an actual, live correlation, rigorously documented and justified, between the same inputs and a new "positive" conclusion.

      As I said, it's harder from a research perspective, but if you can show with a better study, for instance, that sustainable companies are less profitable, you have basically contradicted the original one.

I don't understand why it has been acceptable, in the internet age, not to upload a tarball of your data with the paper. Maybe the Asset4 database is only available under license and they can't publish too much of it. However, the key concern with the method is a pairwise matching of companies, which is the paper authors' own invention and should be totally fine to publish. The number of stories I've heard from people forensically investigating PDF plots to recover key data from a paper is absurd.

Of course doing so is not free and it takes time. But a paper represents at least months of work in data collection, analysis, writing, and editing. A tarball seems like a relatively small amount of effort to provide a huge increase in confidence in the result.

  • This. I did my dissertation in the early '90s, so very early days of the internet. All of my data and code was online.

    IMHO this should be expected for any, literally any publication. If you have secrets, or proprietary information, fine - but then, you don't get to publish.

I appreciate the convenience of having the original text on hand, as opposed to having to download it off Dropbox of all places.

But if you're going to quote the whole thing, it seems easier to just say so rather than quoting it bit by bit, interspersed with "King continues" and annotating each "I" with "[King]".

"We should distinguish the person from the deed"

No, we shouldn't. Research fraud is committed by people, who must be held accountable. In this specific case, if the issues had truly been accidental, the authors would have responded and revised their paper. They did not; ergo, their false claims were likely deliberate.

That the school and the journal show no interest is equally bad, and deserving of public shaming.

Of course, this is also a consequence of "publish or perish."

It's harder to do social/human science because it's just easier to make mistakes that lead to bias. Such mistakes are harder to make in maths, physics, biology, medicine, astronomy, etc.

I often say that the "hard sciences" have progressed much further than the social/human sciences.

  • Funny you say that, as medicine is one of the epicenters of the replication crisis[1].

    [1] https://en.wikipedia.org/wiki/Replication_crisis#In_medicine

    • You get a replication crisis on the bleeding edge between replication being possible and impossible. There's never going to be a replication crisis in linear algebra, and there's never going to be one in theology; there definitely was a replication crisis in psych, and a replication crisis in nutrition science is distinctly plausible and would be extremely good news for the field as it moves across that edge.

  • I agree. Most of the time people think STEM is harder, but it is not. Yes, it is harder to understand some concepts, but in the social sciences we don't even know what the correct concepts are. There hasn't been as much progress in the social sciences over the last few centuries as there has been in STEM.

    • I'm not sure if you're correct. In fact there has been a revolution in some areas of social science in the last two decades due to the availability of online behavioural data.

> Because published articles frequently omit key details

This is a frustrating aspect of studies. You have to contact the authors for full datasets. I can see why it would not have been possible to publish them in the past, due to limited space in printed publications. In today's world, though, every paper should be required to have its full datasets published to a website so that others can access them to verify and replicate.

The discussion has mostly revolved around the scientific system (it definitely has plenty of problems), but how about ethics?

The paper in question shows - credibly or not - that companies focusing on sustainability perform better in a variety of metrics, including generating revenue. In other words: Not only can you have companies that do less harm, but these ethically superior companies also make more money. You can have your cake and eat it too. It likely has given many people a way to align their moral compass with their need to gain status and perform well within our system.

Even if the paper is a complete fabrication, I'm convinced it has made the world a better place. I can't help but wonder if Gelman and King paused to consider the possible repercussions of their actions, and what kinds of motivations they might have had. The linked post briefly dips into ethics, benevolently proclaiming that the original authors of the paper are not necessarily bad people.

Which feels ironic, as it seems to me that Gelman and King are the ones doing wrong here.

Social fame is fundamentally unscalable: there is limited room on the stage and even less in the few spotlights.

The benefits we can get from collective work, including scientific endeavors, are indefinitely large, far greater than what can be held in the head of any individual.

Incentives are just irrelevant as far as global social good is concerned.

Isn't at least part of the problem with replication that journals are businesses? They're selling, in part, based on limited human attention and on the desire to see something novel, to see progress in one's chosen field. Replications don't fit a commercial publication's goals.

Institutions could do something, surely. Require one-in-n papers be a replication. Only give prizes to replicated studies. Award prize monies split between the first two or three independent groups demonstrating a result.

The 6k citations though ... I suspect most of those instances would just assert the result if a citation wasn't available.

  • Journals aren't really businesses in the conventional sense. They're extensions of the universities: their primary, and often only, customers are university libraries, and their primary service is creating a reputation economy for academics to use in deciding promotions.

    If the flow of tax, student debt and philanthropic money were cut off, the journals would all be wiped out because there's no organic demand for what they're doing.

  • Not in academia myself, but I suspect the basic issue is simply that academics are judged by the number of papers they publish.

    They are pushed to publish a lot, which means journals have to review a lot of stuff (and they cannot replicate findings on their own). Once a paper is published in a decent journal, other researchers may not "waste time" replicating all its findings, because they also want to publish a lot. The result is papers getting popular even if no one has actually bothered to replicate the results, especially if those papers are quoted by a lot of people and/or are written by otherwise reputable people or universities.

This is simply a case of appeal to authority. No reviewer or editor would reject a paper from either HBS or LBS, let alone a joint paper between the two. Doing so would be akin to career suicide.

And therein lies the uncomfortable truth: Collaborative opportunities take priority over veracity in publications every time.

  • That's why double-blind review should be the norm. It's wild to me that single-blind is still the norm in most disciplines.

Not even surprised. My daughter tried to reproduce a well-cited paper a couple of years back as part of her research project. It was not possible. They pushed for a retraction, but the university didn't want to do it because it would cause political issues, as one of the peer reviewers is tenured at another closely associated university. She almost immediately fucked off and went to work in the private sector.

  • > They pushed for a retraction ...

    That's not right; retractions should only be for research misconduct cases. It is a problem with the article's recommendations too. Even if a correction is published that the results may not hold, the article should stay where it is.

    But I agree with the point about replications, which are much needed. That was also the best part in the article, i.e. "stop citing single studies as definitive".

    • I will add that it's a little more complicated than I let on here, as I don't want to identify the paper in the process. But it definitely was misconduct in this case.

      I read the paper as well. My background is mathematics and statistics and the data was quite frankly synthesised.

  • It's much, much more likely that she did something wrong trying to replicate it than that the paper was wrong. Did she try to contact the authors, or discuss it with her advisor?

    Pushing for retraction just like that and going off to private sector is…idk it’s a decision.

    • It went on for a few months. The source data for the paper was synthesised and it was like trying to get blood out of a stone trying to get hold of it, clearly because they knew they were in trouble. Lots of research money was wasted trying to reproduce it.

      She was just done with it then and a pharma company said "hey you fed up with this shit and like money?" and she was and does.

      edit: as per the other comment, my background is mathematics and statistics after engineering. I went into software but still have connections back to academia which I left many years ago because it was a political mess more than anything. Oh and I also like money.

This likely represents only a fragment of a larger pattern. Research contradicting prevailing political narratives faces significant professional obstacles, and, as this article shows, so do critiques of research that doesn't.

>There’s a horrible sort of comfort in thinking that whatever you’ve published is already written and can’t be changed. Sometimes this is viewed as a forward-looking stance, but science that can’t be fixed isn’t past science; it’s dead science.

Actually it’s not science at all.

> They intended to type “not significant” but omitted the word “not.”

This one is pretty egregious.

  • Once, back around 2011 or 2012, I was using Google Translate for a speech I was to deliver in church. It was shorter than one page printed out.

    I only needed the Spanish translation. Now I am proficient in spoken and written Spanish, and I can perfectly understand what is said, and yet I still ran the English through Google Translate and printed it out without really checking through it.

    I got to the podium and there was a line where I said "electricity is in the air" (a metaphor, obviously) and the Spanish translation said "electricidad no está en el aire" and I was able to correct that on-the-fly, but I was pissed at Translate, and I badmouthed it for months. And sure, it was my fault for not proofing and vetting the entire output, but come on!

Not enough is understood about the replication crisis in the social sciences. Or indeed in the hard sciences. I do wonder whether this is something that AI will rectify.

  • How would AI do anything to rectify it?

    • The same way it would correct typos in a text. It's just a tool: you tell it to find inconsistencies, see what results that yields, and optimize it for verification of claims.

Does it bug anyone else when an article has so many quotes that it's practically all italics? Change the formatting style so we don't have to read pages of italic quotes.

  • This drove me nuts, but also the authors should like get to the point about what was wrong instead of dancing around it for page after page.

We’ve developed a “leaning tower of science.” Someday, it’s going to fall.

Family member tried to do work relying on previous results from a biotech lab. Couldn’t do it. Tried to reproduce. Doesn’t work. Checked work carefully. Faked. Switched labs and research subject. Risky career move, but. Now has a career. Old lab is in mental black box. Never to be touched again.

Talked about it years ago https://news.ycombinator.com/item?id=26125867

Others said they'd never seen it. So maybe it's rare. But no one will tell you even if they encounter it. Guaranteed career blackball.

  • I haven't identified an outright fake one but in my experience (mainly in sensor development) most papers are at the very least optimistic or are glossing over some major limitations in the approach. They should be treated as a source of ideas to try instead of counted on.

    I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before it's published the barrier can be quite high (and I must admit that since it's not my primary focus and my name was not on it, I did not push as hard as I could have on it)

  • For original research, a researcher is supposed to replicate studies that form the building blocks of their research. For example, if a drug is reported to increase expression of some mRNA in a cell, and your research derives from that, you will start by replicating that step, but it will just be a note in your introduction and not published as a finding on its own.

    When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's technique. If they can't get it after many tries, they just move on, and try some other research approach. If they claim it's because the original study is flawed, people will just assume they don't have the skills to replicate it.

    One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.

    I can't speak to whether people get blackballed. There's a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation and you probably don't want to get a reputation for being an accuser, FWIW.

  • I've read of a few cases like this on Hacker News. There's often that assumption, sometimes unstated: if a junior scientist discovers clear evidence of academic misconduct by a senior scientist, it would be career suicide for the junior scientist to make their discovery public.

    The replication crisis is largely particular to psychology, but I wonder about the scope of the "don't rock the boat" issue.

Maybe that's why it gets cited? People starting with an answer and backfilling?

Could you also provide your critical appraisal of the article so this can be more of a journal club for discussion vs just a paper link? I have no expertise in this field so would be good for some insights.

I will not go into the details of the topic, but the "What to do" part is the most obvious thing: if an impactful paper cannot be backed up by other works, that should be a smell.

And thus everyone citing it has fatally flawed their paper, if it's central to their thesis; thus, whoever proves the root is rotten should gain their funding from this point forward.

  • I see this approach as a win-win for science. Debunking bad science becomes a for-profit enterprise, rigorous science becomes the only sustainable kind, and the paper churn gets reduced, since even producing a good-looking paper becomes a financial risk when it becomes foundational and gets debunked later.

There’s no such thing as management “science”.

Social “sciences” are completely bastardizing the word science. Then they come complaining that “society doesn’t trust science anymore”. They, the social “scientists”, are the ones responsible for removing all meaning from the word science.

> This doesn’t mean that the authors of that paper are bad people!

> We should distinguish the person from the deed. We all know good people who do bad things

> They were just in situations where it was easier to do the bad thing than the good thing

I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing"?

In this case, it seems not owning up to the issues is the bad part. That's a choice they made. Actually, multiple choices at different times, it seems. If you keep choosing the easy path instead of the path that is right for those that depend on you, it's easier for me to just label you a bad person.

  • Labeling people as villains (as opposed to condemning acts), in particular those you don’t know personally, is almost always an unhelpful oversimplification of reality. It obscures the root causes of why the bad things are happening, and stands in the way of effective remedy.

    • In this case they hadn’t labeled anyone as villains, though. They could have omitted that section entirely.

      I happen to agree that labeling them as villains wouldn’t have been helpful to this story, but they didn’t do that.

      > It obscures the root causes of why the bad things are happening, and stands in the way of effective remedy.

      There’s a toxic idea built into this statement: It implies that the real root cause is external to the people and therefore the solution must be a systemic change.

      This hits a nerve for me because I’ve seen this specific mindset used to avoid removing obviously problematic people, instead always searching for a “root cause” that required us all to ignore the obvious human choices at the center of the problem.

      Like blameless postmortems taken to a comical extreme, where one person is always doing something careless that causes problems and we all have to brainstorm a way to pretend that the system failed, not the person who continues to cause us problems.

    • I'm not sure the problems we have at the moment are a lack of accountability. I mean, I think let's go a little overboard on holding people to account first, then wind it back when that happens. The crisis at the moment is managerialism across all of our institutions, which serves to displace accountability.

    • Questions:

      1. Who is responsible for adding guardrails to ensure all papers coming in are thoroughly checked & reviewed?

      2. Who reviews these papers? Shouldn’t they own responsibility for accuracy?

      3. How are we going to ensure this is not repeated by others?

    • Just to add on: armchair quarterbacking is a thing, and it's easy in hindsight to label decisions as the result of bad intentions. This is completely different from whatever might have been at play in the moment, and retrospective judgement is often unrealistic.

    • It is possible that the root cause is an individual person being bad. This hasn't been as common recently because people were told not to be villains and to dislike villains, so root causes of the remaining problems were often found buried in the machinery of complex social systems.

      However if we stop teaching people that villains are bad and they shouldn't be villains, we'll end up with a whole lot more problems of the "yeah that guy is just bad" variety.

    • Bad acts are in the past, and may be situational or isolated.

      Labelling a person as bad has predictive power - you should expect them to do bad acts again.

      It might be preferable to instead label them as “a person with a consistent history of bad acts, draw your own conclusion, but we are all capable of both sin and redemption and who knows what the future holds”. I’d just call them a bad person.

      That said, I do think we are often too quick to label people as bad based on one bad act.

    • As with anything, it's just highly subjective. What some call an heinous act is another person's heroic act. Likewise, where I draw the line between an unlucky person and a villain is going to be different from someone else.

      Personally, I do believe that there are benefits to labelling others as villains if a certain threshold is met. It cognitively reduces strain by allowing us to blanket-label all of their acts as evil [0] (although with the drawback of occasionally accidentally labelling acts of good as evil), allowing us to prioritise more important things in life than the actions of what we call villains.

      [0]: https://en.wikipedia.org/wiki/Halo_effect#The_reverse_halo_e...

    • I would argue that villainy and "bad people" is an overcomplication of ignorance.

      If we equate being bad to being ignorant, then those people are ignorant/bad (with the implication that if people knew better, they wouldn't do bad things)

      I'm sure I'm over simplifying something, looking forward to reading responses.

    • What if the root cause is that because we stopped labeling villains, they no longer fear being labeled as such. The consequences for the average lying academic have never been lower (in fact they usually don’t get caught and benefit from their lie).

    • You presumably read the piece. There was no remedy. In fact the lavishly generous appreciation of all those complexities arguably is part of the reason there was no remedy. (Or vice versa, i.e. each person's foregone conclusion that there will be no remedy for whatever reason, might've later been justified/rationalized via an appeal to those complexities.)

      The act itself, of saying something other than the truth, is always more complex than saying the truth. ← It took more words to describe the act in that very sentence. Because there are two ideas, the truth and not the truth. If the two things match, you have a single idea. Simple.

      Speaking personally, if someone's very first contact with me is a lie, they are to be avoided and disregarded. I don't even care what "kind of person" they are. In my world, they're instantly declared worthless. It works pretty well. I could of course be wrong, but I don't think I'm missing out on any rich life experiences by avoiding obvious liars. And getting to the root cause of their stuff or rehabilitating them is not a priority for me; that's their own job. They might amaze me tomorrow, who knows. But it's called judgment for a reason. Such is life in the high-pressure world of impressing rdiddly.

    • It’s possible to take two opposing and flawed views here, of course.

      On the one hand, it is possible to become judgmental, habitually jumping to unwarranted and even unfair conclusions about the moral character of another person. On the other, we can habitually externalize the “root causes” instead of recognizing the vice and bad choices of the other.

      The latter (externalization) is obvious when people habitually blame “systems” to rationalize misbehavior. This is the same logic that underpins the fantastically silly and flawed belief that under the “right system”, misbehavior would simply evaporate and utopia would be achieved. Sure, pathological systems can create perverse incentives, even ones that put extraordinary pressure on people, but moral character is not just some deterministic mechanical response to incentive. Murder doesn’t become okay because you had a “hard life”, for example. And even under “perfect conditions”, people would misbehave. In fact, they may even misbehave more in certain ways (think of the pathologies characteristic of the materially prosperous first world).

      So, yes, we ought to condemn acts, we ought to be charitable, but we should also recognize human vice and the need for justice. Justly determined responsibility should affect someone’s reputation. In some cases, it would even be harmful to society not to harm the reputations of certain people.

    • > Labeling people as villains is almost always an unhelpful oversimplification of reality

      This is effectively denying the existence of bad actors.

      We can introspect into the exact motives behind bad behaviour once the paper is retracted. Until then, there is ongoing harm to public science.

    • One thing that stands in the way of other people choosing the wrong path is the perception of consequences. Minimal consequences from milquetoast critics who just want to understand are a bug, not a feature.

      People are on average both bad and stupid, and they function without a framework of consequences and expectations in which they would expect to suffer and feel shame. These authors didn't make a mistake: they stood in front of all their professional colleagues and published what they effectively knew were lies. The fact that they can publish lies and others are happy to build on those lies indicates the whole community is a cancer. The fact that the community rejects calls for correction indicates it has metastasized, and, at least as far as that particular community goes, the patient is dead and there is nothing left to save.

      They ought to be properly ridiculed and anyone who has published obvious trash should have any public funds yanked and become ineligible for life. People should watch their public ruin and consider their own future action.

      If you consider the sheer amount of science that has turned out to be outright fraud in the last decade this is a crisis.

    • That comment sounds like the environment causes bad behavior. That's a liberal theory refuted consistently by all the people in bad environments who choose to not join in on the bad behavior, even at a personal loss.

      God gave us free will to choose good or evil in various circumstances. We need to recognize that in our assessments. We must reward good choices and address bad ones (eg the study authors'). We should also change environments to promote good and oppose evil so the pressures are pushing in the right direction.

  • People are afraid to sound too critical. It's very noticeable how every article that points out a mistake in a subject that's even slightly politically charged has to emphasize "of course I believe X, I absolutely agree that Y is a bad thing" before making its point. Criticising an unreplicable paper is the same thing. Clearly these people are afraid that if they sound too harsh, they'll be ignored altogether as a crank.

    • > Clearly these people are afraid that if they sound too harsh, they'll be ignored altogether as a crank.

      This is true though, and one of those awkward times where good ideals like science and critical feedback brush up against potentially ugly human things like pride and ego.

      I read a quote recently, and I don't like it, but it's stuck with me because it feels like it's dancing around the same awkward truth:

      "tact is the art of make a point without making an enemy"

      I guess part of being human is accepting that we're all human and will occasionally fail to be a perfect human.

      Sometimes we'll make mistakes in conducting research. Sometimes we'll make mistakes in handling mistakes we or others made. Sometimes these mistakes will chain together to create situations like the post describes.

      Making mistakes is easy - it's such a part of being human we often don't even notice we do it. Learning you've made a mistake is the hard part, and correcting that mistake is often even harder. Providing critical feedback, as necessary as it might be, typically involves putting someone else through hardship. I think we should all be at least slightly afraid and apprehensive of doing that, even if it's for a greater good.

    • That's a legitimate fear though - it's exactly what happened in this case. "The reviewers did not address the substance of my comment; they objected to my tone".

    • In general Western society has effectively outlawed "shame" as an effective social tool for shaping behavior. We used to shame people for bad behavior, which was quite effective in incentivizing people to be good people (this is overly reductive but you get the point). Nowadays no one is ever at fault for doing anything because "don't hate the player hate the game".

      A blameless organization can work, so long as people within it police themselves. As a society this does not happen, thus making people more steadfast in their anti-social behavior

  • > I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing?"

    This actually doesn't surprise much. I've seen a lot of variety in the ethical standards that people will publicly espouse.

  • "I was just following orders" comes to mind.

    Yes, the complicity is normal. No, the complicity isn't right.

    The banality of evil.

    • It's interesting to talk about the 'banality of evil' in the comment section of a post about flawed papers. Her portrayal of Eichmann was very wrong: Arendt had an idea in her head of how he should be and didn't care too much about the facts or the process. Not that I totally disagree with the idea.

  • There are extremely competent coworkers I wouldn't want as neighbours. Some of my great neighbours would make very sloppy and annoying coworkers.

    These people are terrible at their jobs, and perhaps a bit malicious too. They may be great people as friends and colleagues.

  • I think the writer might enjoy Vonnegut's Mother Night.

    > Vonnegut is not, I believe, talking about mere inauthenticity. He is talking about engaging in activities which do not agree with what we ourselves feel are our own core morals while telling ourselves, “This is not who I really am. I am just going along with this on the outside to get by.” Vonnegut’s message is that the separation I just described between how we act externally and who we really are is imaginary.

    https://thewisdomdaily.com/mother-night-we-are-what-we-prete...

  • Connecting people's characters to their deeds is a double-edged sword. It's not that it's necessarily mistaken, but you have to choose your victories. Maybe today you get some satisfaction from condemning the culprits, but you also pay for it by making it even more difficult to get cooperation from the system in the future. We all have friends, family and colleagues that we believe to be good. They're all still capable of questionable actions. If we systematically tie bad deeds to bad people, then surely those people we love and know to be good are incapable of what they're being accused. That's part of how closing ranks works. I think King recognizes this too, which is why he recommends that penalties should reflect the severity of the violation, not be all-or-nothing.

    • The entire point of recognizing bad people is to make it harder for them to work with or affect you in the future.

      > If we systematically tie bad deeds to bad people, then surely those people we love and know to be good are incapable of what they're being accused.

      That's a strong claim that needs to be supported, and it's actually the question whose nuances are being discussed in this thread.

      1 reply →

  • It is like in organisational error management (aka. error culture), there are three levels here:

    1) errors happen, basically accidents.

    2) errors are made, wrong or unexpected result for different intention.

    3) errors are caused, the error case is the intended outcome. This is where "bad people" dwell.

    Knowing about 1) or 2) and keeping silent turns any error into 3). I think we are at 2) in TFA. This needs to be addressed, most obviously through system change, especially if actors seem to act rationally within the system (as the authors do) while producing broken outcomes.

  • I guess there isn't much utility in categorizing people as "good" and "bad," arguably. Better to think about the incentives/punishments in the system and adjust them until people behave well.

  • Never qualify the person, only the deed. Because we are all capable of the same actions, some of us have just not done them. But we all have the same capacity.

    And yes, I am saying that I have the same capacity for wrong as the person you are thinking about, mon semblable, mon frère.

    • > Because we are all capable of the same actions, some of us have just not done them

      > And yes, I am saying that I have the same capacity for wrong as the person you are thinking about...

      No one is disputing any of this. The person who is capable, and who has chosen to do, the bad deed is morally blameworthy (subject to mitigating circumstances).

      2 replies →

  • I think calling someone a "bad person" (which is itself a horribly vague term) for one situation where you don't have all the context is something most people should be loath to do. People are complicated and in general normal people do a lot of bad things for petty reasons.

    Beyond the label being difficult to apply, these factors also make the argument over who is a "bad person" not really productive, and I will put those sorts of caveats into my writing because I just don't want to waste my time arguing the point. What does "bad person" even mean, and is it even consistent across people? I think it makes a lot more sense to use clearer labels for which we have far more evidence, like "untrustworthy scientist" (which you may or may not think makes someone a bad person).

  • > I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing?"

    For starters, the bar should be way higher than accusations from a random person.

    For me, there's a red flag in the story: posting reviews and criticism of other papers is very mundane in academia. Some Nobel laureates even authored papers rejecting established theories. The very nature of peer review involves challenging claims.

    So where is the author's paper featuring commentaries and letters, subjecting the author's own criticism to peer review?

  • "It was easier for me to just follow orders than do the right thing." – Fictional SS officer, 1945. Not a bad person.

    /s

    • But he shoveled the neighbors' sidewalks when it snowed.

      I have a relative who lives in Memphis, Tennessee. A few years ago some guy got out of prison, went to a fellow's home to buy a car, shot the car owner dead, stole the car and drove it around until he got killed by the police.

      One of the neighbors said, I kid you not, "he's a good kid"

  • Seems fair as a response to the comment it addresses.

    But there is a concern that goes beyond the "they" here. Even if "they" didn't exist and the whole narrative in the article were some LLM hallucination, we are still training ourselves in how we respond to whatever behavior we can observe, and that shapes how we will act in the future.

    If we take the easy path of labeling people as the root cause, that's the habit we are forging for ourselves. We miss the opportunity to hone our sense of nuance and critical thought about the wider context, which might be a better starting point for tackling the underlying issue.

    Of course, naming and shaming is still there in the rhetorical toolbox, and everyone and their dog can use it even when rage and despair are all that remain in control of one's mouth. Using it with appropriate parsimony, however, is not going to happen from mere reactive habits.

  • It's 2026, and social media brigading and harassment is a well-known phenomenon. In light of that, trying to preemptively de-escalate seems like a Good Thing.

Anyone know the VP who referenced the paper? Doesn't seem to be mentioned. My best guess is Gore.

Living VPs:

Joe Biden — VP 2009–2017 (became President in 2021, so later citations would likely call him a former president rather than a former VP, which makes him an unlikely match)

Dan Quayle — VP 1989–1993, alive through 2026

Al Gore — VP 1993–2001, alive through 2026

Mike Pence — VP 2017–2021, alive through 2026

Kamala Harris — VP 2021–2025, alive through 2026

J.D. Vance — VP 2025–present (as of 2026)

Creators of Studies reflect their own human flaws and shortcomings.

This can directly undermine the scientific process.

There has to be a better path forward.

Google Scholar citation numbers are unreliable and cannot be used in bibliometric evaluation. They are auto-generated and are not limited to the journal literature. This critique is completely unserious. At the same time, bad papers also tend to get more citations on average than middling papers, because they are cited in critiques. This effect should be even larger in a dataset that includes more than the citations from journal papers. This blog post will in time also add to the Google Scholar citation count.

Citation studies are problematic, and their use can and should be criticized. But this here is just hot air built on a fundamental misunderstanding of how to measure and interpret citation data.

The paper publishing industry has a tragedy of the commons problem. Individual authors benefit from fake or misrepresented research. Over time more and more people roll their eyes when they hear “a study found…” Over a long period it depreciates science and elevates superstition.

For example, look at how people interact with LLMs. Lots of superstition ("take a deep breath"), not much reading about the underlying architecture.

For all the outrage at Trump, RFK, and their Know-Nothing posture toward the world, we should recognize that the ground for their rise was fertilized by manure produced in academia.

I was young once too.

“Your email is too long.”

This whole thing is filled with “yeah, no s**” and lmao.

More seriously, pretty sure the whole ESG thing has been debunked already, and those who care to know the truth already know it.

A good rule of thumb is to be skeptical of results that make you feel good because they “prove” what you want them to.

The gatekeepers were able to convince the American public of such heinous things like circumcision at birth based on "science" and now they're having to deal with the corruption. People like RFK Jr. are able to be put into top positions because what they're spewing has no less scientific merit than what's accepted and recommended. The state of scientific literature is incredibly sad and mainly a factor of politics and money than of scientific evidence.

I did a Master's at Cambridge Judge Business School, and my takeaway is that "Management Science" is to Science what "Software Engineering" is to Engineering.

I think what these papers prove is my newer theory that organized science isn't scientific at all. It's mostly unverified claims by people rewarded for churning out papers that look scientific, have novelty, and achieve the policy goals of specific groups. There's also little review, with dissent banned in many places. We've been calling it scientism since it's like a self-reinforcing religion.

We need to throw all of this out by default. From public policy to courtrooms, we need to treat it like any other eyewitness claim. We shouldn't believe anything unless it has strong arguments or data backing it. For science, we need the scientific method applied with skeptical review and/or replication. Our tools, like statistical methods and programs, must be vetted.

Like with logic, we shouldn't allow them to go beyond what's proven in this way. So, only the vetted claims are allowed as building blocks (premises) in newly-vetted work. The premises must be used as they were used before. If not, they are re-checked for the new circumstances. Then, the conclusions are stated with their preconditions and limitations, to only be applied that way.

I imagine many non-scientists and taxpayers assumed what I described is how all these "scientific facts" and "consensus" claims were established. The opposite was true in most cases. So, we need to not only redo it but apply the scientific method to the institutions themselves, assessing their reliability. If they don't get reliable, they lose their funding, and quickly.

(Note: There are groups in many fields doing real research and experimental science. We should highlight them as exemplars. Maybe let them take the lead in consulting for how to fix these problems.)

  • I have a Growing Concern with our legal systems.

    > We need to throw all of this out by default. From public policy to courtrooms, we need to treat it like any other eyewitness claim.

    If you can't trust eyewitness claims, if you can't trust video or photographic or audio evidence, then how does one Find Truth? Nobody really seems to have a solid answer to this.

    • It's specific segments of people saying we can't trust eyewitness claims. They actually work well enough that we run on them from childhood to adulthood. Accepting that truth is the first step.

      Next, we need to understand why that is, which should be trusted, and which can't be. Also, what methods to use in what contexts. We need to develop education for people about how humanity actually works. We can improve steadily over time.

      On my end, I've been collecting resources that might be helpful. That includes Christ-centered theology with real-world application, philosophies of knowledge with guides on each one, differences between real vs organized science, biological impact on these, dealing with media bias (eg AllSides), worldview analyses, critical thinking (logic), statistical analyses (esp error spotting), writing correct code, and so on.

      One day, I might try to put it together into a series that equips people to navigate all of this stuff. For right now, I'm using it as a refresher to improve my own abilities ahead of entering the Data Science field.

      1 reply →

The problem with academia is that it's often more about politics and reputation than seeking the truth. There are multiple examples of researchers making a career out of flawed papers and never retracting or even admitting a mistake.

All the talks they were invited to give, all the followers they had, all the courses they sold and the impact factor they have built. They are not going to come forward and say "I misinterpreted the data and made far-reaching conclusions that are nonsense, sorry for misleading you and thousands of others".

The process protects them as well. Someone can publish another paper and draw different conclusions. There is zero effort to get to the truth, to tell people what is and what isn't current consensus and what is reasonable to believe. Even if it's clear to anyone who digs a bit deeper, it will not be communicated to the audience academia is supposed to serve. The consensus will just quietly shift while the heavily quoted paper is still there. The talks are still out there, the false information is still propagated, while the author enjoys all the benefits and suffers none of the negative consequences.

If it functions like that, I don't think it's fair that taxpayers fund it. It's there to serve the population, not to exist in its own world and play its own politics and power games.

Conservatives very concerned about academic reproducibility* (*except when the paper helps their agenda)

In the past the elite would rule the plebs by saying "God says so, so you must do this".

Today the elites rule the plebs by saying "Science says so, so you must do this".

The author doesn't seem to understand this: the purpose of research papers is to be gospel, something to be believed, not scrutinized.

  • In fact, religious ideas (at least in Europe) were often in opposition to the ruling elite (and still are) and even inspired rebellion: https://en.wikipedia.org/wiki/John_Ball_(priest)

    There is a reason scriptures were kept away from the oppressed, or only made available to them in a heavily censored form (e.g. the Slave Bible).

  • That's a very good point. Some of what's called "science" today, in popular media and coming from governments, is religion. "We know all, do not question us." It's the common problem of headlines along the lines of "scientists say" or "The Science says", which should always be a red flag - but the majority of people believe it.

  • A little more complicated than that.

    In the past, the elites said "don't read the religious texts, WE will tell you what's in them."

    • That's a misunderstanding. There were plenty of ancient and medieval translations of the Bible, but the Bible itself wasn't as central as it is today.

      Catholic and Orthodox Christianity do not focus as much on the Bible as Protestant Christianity. They are based on the tradition, of which the Bible is only a part, while the Protestant Reformation elevated the Bible above the tradition. (By a tortured analogy, you could say that Catholicism and Orthodoxy are common law Christianity, while Protestantism is civil law Christianity.)

      From a Catholic or Orthodox perspective, there is a living tradition from the days of Jesus and the Apostles to present day. Some parts of it were written down and became the New Testament, but the parts that were left out were equally important. You cannot therefore understand the Bible without understanding the tradition, because it's only a partial account.

    • Scientists say that today too; it's a standard response when people outside of academia critique their work. "That person is not an expert" is a totally normal response, and it's taken to be a killer rebuttal by journalists and politicians.

      2 replies →

Do people actually take papers in "management science" seriously?

  • They do and there is nothing wrong with that. The papers published in this journal are peer-reviewed and go through multiple rounds of review. Also, note that Andrew King could carry out the replication because the data is publicly available.

  • Yes, that's the problem, many do, and they swear by these oversimplified ideas and one-liners that litter the field of popular management books, fully believing it's all "scientific" and they'll laugh at you for questioning it. It's nuts.

    • There is a difference between popular management books and academic publications.

      For example there is a long history of studies of the relationship between working hours and productivity which is one of the few things that challenges the idea that longer hours means more output.

      2 replies →

Welcome to ideological science, published to support the regime. There's a lot more where this came from.

The paper touches on a point ("sustainability") that is a sacred cow for many people.

Even if you support sustainability, criticizing the paper will be treated as heresy by many.

Despite our idealistic vision of Science(tm), it is a human process done by humans with human motivations and human weaknesses.

From Galileo to today, we have repeatedly seen the enthusiastic willingness by majorities of scientists to crucify heretics (or sit by in silence) and to set aside scientific thinking and scientific process when it clashes against belief or orthodoxy or when it makes the difference whether you get tenure or publication.