Bad Carrot! Bad Carrot!

Large, distributed enterprises require the careful selection of appropriate incentives in order to entice the humans who happen to inhabit them to, well, not pee in the corners. Dr. Stuart Ritchie has been looking into the twin problems of sloppy experiments and irreproducible results, and talks about them in this interview in New Scientist (22 August 2020) – but, first, this:

WHEN Stuart Ritchie was a graduate student in Edinburgh, UK, in 2011, he was involved in an incident that shook his faith in science. With two colleagues, he tried and failed to replicate a famous experiment on precognition, the ability to see the future. They sent their results to the journal that published the original research and received an immediate rejection on the grounds that the journal didn’t accept studies that repeated previous experiments.

Which leaves me wondering just what this nameless journal is doing if it won’t publish replication studies. Since we’re talking precognition, a noted delusion, we can be fairly sure the experiment should have found nothing; if it did find something, then something was wrong with it, and a careful survey of the study could be informative.

My sneaking suspicion is that the journals inhabiting the junk science part of the landscape are run by biased people – those who are convinced they’re studying a real phenomenon, and whose livelihoods depend on that assertion.

But back to Ritchie:

How did we let the rot set in?

It crept in due to a system of perverse incentives. There’s huge pressure to publish papers and huge pressure to bring in grants – which is an incentive to publish papers and apply for grants, but not an incentive to discover the truth. We focus far too much on rewarding people who have brought in big grants or published papers in prestigious journals, which isn’t necessarily getting us what we want.

But there are checks and balances, like peer review, where independent experts vet papers before they are published…

It works in some cases, but it’s nowhere near the filter it needs to be. Some of the worst papers ever went through the peer-review system of the world’s best journals.

For instance, the system isn’t set up in a way that peer reviewers can easily get raw data. It’s absurd. The people who are supposed to be checking whether the analysis is correct rarely see the data that the claims are based on.

And so a lot of our knowledge of reality is, well, even more contingent than it should be.

What’s to be done? Perhaps every relevant discipline could support an Institute of Reproducibility (IoR), and every fresh post-doc could be expected to put in a year or two at their IoR. Papers lacking a notation in their IoR’s public database would be considered dubious.
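Purely as a sketch of what I mean – the names, fields, and the two-replication threshold below are my own invention, not any existing system – an entry in an IoR’s public database might be as simple as this, in Python:

```python
from dataclasses import dataclass, field
from enum import Enum


class ReplicationStatus(Enum):
    UNTESTED = "untested"      # no replication attempt recorded yet
    FAILED = "failed"          # attempts so far have not reproduced the result
    REPLICATED = "replicated"  # at least two independent attempts succeeded


@dataclass
class IoRRecord:
    """One entry in a hypothetical IoR public database."""
    doi: str                # the paper being tracked
    discipline: str         # which discipline's IoR owns the record
    attempts: list[dict] = field(default_factory=list)  # who tried, when, outcome

    @property
    def status(self) -> ReplicationStatus:
        # No attempts at all: the paper is unconfirmed, hence dubious.
        if not self.attempts:
            return ReplicationStatus.UNTESTED
        successes = sum(1 for a in self.attempts if a["reproduced"])
        return ReplicationStatus.REPLICATED if successes >= 2 else ReplicationStatus.FAILED


# A paper with no record at all – or an UNTESTED one – would be treated as dubious.
record = IoRRecord(doi="10.1234/example", discipline="psychology")
print(record.status)  # ReplicationStatus.UNTESTED
```

The details don’t matter much; the point is that the default state of any paper is “unconfirmed,” and only independent replication moves it out of that bucket.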

But, in a way, this is a patch on a systemic problem. The fault really lies with the institutes of higher learning that mostly exist to support research and teaching, where excellence in the former is proven by publishing – not by excellent publishing, but by publishing, period. As Ritchie points out, publishing is considered a pure good, when it should be considered a contingent good. The problem in this competitive system – for that’s how science is performed these days, and has been, to a greater or lesser extent, since the Renaissance, whether the stakes are prestige, individual and national, or profit – is that incredibly little credence is given to the possibility that someone’s study is wrong, despite the fact that RetractionWatch is flooded with news of study retractions.

Oh, sure, there are noted retractions, such as the disgraced Dr. Wakefield’s notorious study erroneously connecting autism with vaccines, published in the top-flight journal The Lancet – a disastrous move that has cost many people their lives, for it was taken seriously for far too long by credulous people[1]. While many folks would hold up its eventual retraction as proof that the system works, I’ll suggest that it was nothing more than the spasmodic reaction of a dying animal that happened to kick the conquering predator in the teeth – a symbolic victory, at best.

We need a sea change in the entire culture surrounding science – both the scientific community and the lay community that benefits from it – to understand that a single, unconfirmed study is the moral equivalent of probable bullshit. Reproduce a study’s results two or three times, and then we know we may have something. And maybe a collection of IoRs is part of that change.

But, first, we have to impregnate society with that proper skepticism, and I don’t know how to get there.


[1] Although it doesn’t qualify as the start of the anti-vaxxer movement. My understanding is that the anti-vaccine movement more or less began with the introduction of the first vaccine, at least in Western culture.
