Dimensionless Numbers

When it comes to fallacious academic papers, they don’t come much more notorious than the Wakefield paper, which claimed to find a link between autism and the MMR vaccine. Based on just twelve subjects, it led to the inappropriate rejection of vaccines by members of a public which has, as a whole, accepted the importance of public health methodologies. It was damn near a crime that it was published at all, much less by a prestigious academic journal such as The Lancet.

It was retracted in 2010, roughly twelve years after it was published.

As it happens, one of the most important measures of an academic paper’s significance is the number of citations it receives from other papers. Surely a paper that has been retracted on the grounds of fraud would not be considered significant, at least in the classic sense, no?

Sadly, no. Retraction Watch has published a short interview with a group of librarians who have studied the papers citing the Wakefield autism paper. Here’s a question and answer that I think illuminates how we humans tend to have poorly thought-out plans for computers:

RW: What role do continued citations of this paper play in public perceptions of vaccine safety? Are they similar to the role that a 1980 paper in NEJM — and that earned an editor’s note decades later — that downplayed the risk of opioid addiction has played over the years?

[Corresponding author Elizabeth Suelzer]: My group read the letter by Leung et al with great interest, and we use it as an example when we teach evidence-based medicine. Our study was inspired by it.

We feel that the majority of researchers understand the importance of vaccines and can easily articulate why the Wakefield study was so flawed. But for those unfamiliar with the research such as students, those from other disciplines, and the public, the number of citations this retracted study receives can be misleading. There seems to be a disconnect between what occurs within the scientific community and how it is communicated and shared with the general public via social media. This is also evident in public perceptions of the threat of global warming and gun violence. Scientists and researchers need to do a better job of making their research findings easier to understand, emphasizing its relevance to the general public, and making it meme-worthy for social media.

While most of the references to the Wakefield article are negative, each new citation is noted in databases like Google Scholar, Web of Science and Scopus. As citation counts continue to play a role in determining the significance or importance of an article (for better or worse), even negative citations will ensure that an article gets a higher rank in databases when the results are sorted by citation count. We accept the irony of conducting a study on Wakefield’s paper and adding yet another count to its cited-by number.

The obvious next step in the evolution toward a proper design for the statistical analysis of citations would be to record the nature of each citation: negative, positive, or neutral. After that comes the question of how to treat citations from papers that are themselves retracted.
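To make that concrete, here’s a minimal sketch in Python of what such a record might look like. The field names, the data, and the summary function are all invented for illustration; none of this reflects anything Google Scholar, Web of Science, or Scopus actually stores today. Each citation carries a stance (negative, positive, or neutral) and a flag for whether the citing paper was itself retracted, and the summary excludes citations from retracted papers.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical citation record; these fields do not exist in current
# citation databases. They illustrate what "recording the nature of
# the citation" could look like.
@dataclass(frozen=True)
class Citation:
    citing_paper: str
    cited_paper: str
    stance: str             # "positive", "negative", or "neutral"
    citing_retracted: bool   # is the citing paper itself retracted?

def citation_summary(citations, cited_paper):
    """Break a raw citation count down by stance, ignoring citations
    that come from papers which were themselves retracted."""
    relevant = [c for c in citations
                if c.cited_paper == cited_paper and not c.citing_retracted]
    return Counter(c.stance for c in relevant)

if __name__ == "__main__":
    # Toy data, invented for illustration.
    cites = [
        Citation("Smith 2015", "Wakefield 1998", "negative", False),
        Citation("Jones 2018", "Wakefield 1998", "negative", False),
        Citation("Doe 2019",   "Wakefield 1998", "neutral",  False),
        Citation("Roe 2012",   "Wakefield 1998", "positive", True),
    ]
    raw = sum(1 for c in cites if c.cited_paper == "Wakefield 1998")
    print("raw citation count:", raw)
    print("by stance (excluding retracted citers):",
          dict(citation_summary(cites, "Wakefield 1998")))
```

The point of the sketch is simply that a raw count of four would collapse into two negative citations, one neutral, and none that could honestly be read as an endorsement.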

So if my reader ever runs across some individual who simply, and honestly, observes that some paper has a high citation count, and then claims that proves, well, something, it’s worth remembering that, at least for now, citation counts tend to be dimensionless numbers: without further analysis, they lack real meaning.
