One Key Statistic Is Missing

The Canadian Journal of Anesthesia has published a study of article retractions in the field of anesthesiology. From the abstract:

Methods

Based on a reproducible search strategy, two independent reviewers searched MEDLINE, EMBASE, and the Retraction Watch website to identify retracted anesthesiology articles. Extracted data included: author names, year of publication, year of the retracted article, journal name, journal five-year impact factor, research type (clinical, basic science, or review), reason for article retraction, number of citations, and presence of a watermark indicating article retraction.

Results

Three hundred and fifty articles were included for data extraction. Reasons for article retraction could be grouped into six broad categories. The most common reason for retraction was fraud (data fabrication or manipulation), which accounted for nearly half (49.4%) of all retractions, followed by lack of appropriate ethical approval (28%). Other reasons for retraction included publication issues (e.g., duplicate publications), plagiarism, and studies with methodologic or other non-fraud data issues. Four authors were associated with most of the retracted articles (59%). The majority (69%) of publications utilized a watermark on the original article to indicate that the article was retracted. Journal Citation Reports journal impact factors ranged from 0.9 to 48.1 (median [interquartile range (IQR)], 3.6 [2.5–4.0]), and the most cited article was referenced 197 times (median [IQR], 13 [5–26]). Most retracted articles (66%) were cited at least once by other journal articles after having been withdrawn.

That's 350 retracted articles out of a total of … yeah, we're never told. We also don't get a sense of what range of years is involved. Perhaps both figures are in the study's body, but I wasn't in the mood to fork over $40 to the publisher just to see summary data that belongs in the abstract.
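
To put a number on the complaint, here's a minimal sketch (the denominators are made up, since the abstract supplies none) of how much the headline retraction rate swings depending on how many anesthesiology articles were actually published over the review period:

```python
# Sketch: 350 retractions is meaningless as a rate without a denominator.
# The totals below are hypothetical placeholders, NOT figures from the study.
retracted = 350

hypothetical_totals = [10_000, 50_000, 200_000, 500_000]

for total in hypothetical_totals:
    rate = retracted / total
    print(f"If the field published {total:>7,} articles: "
          f"retraction rate = {rate:.3%}")
```

Depending on the assumed total, the same 350 retractions imply anything from roughly 3.5% of the literature down to a small fraction of a percent, and the abstract leaves that spread entirely unresolved.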

But, assuming the review was comprehensive and unbiased, it's interesting that nearly half of the retractions are due to fraud, although the implications are hard to read. Does it mean the field's researchers are so good that they either do excellent research or cheat outright? Put that way, it seems unlikely. That the field is plagued with frauds? Again unlikely, or it would long since have been thrust into the realm of quackery.
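
A quick back-of-the-envelope calculation (again with an invented total, since the denominator is precisely the statistic that's missing) shows why neither reading follows from the 49.4% figure alone: the share of fraud among retractions tells us nothing about the rate of fraud among published articles.

```python
# Hypothetical illustration: the fraud share of RETRACTIONS is not the
# fraud rate of the FIELD. The total below is a made-up placeholder.
retracted = 350
fraud_share_of_retractions = 0.494          # from the abstract
fraud_retractions = retracted * fraud_share_of_retractions  # about 173 articles

hypothetical_total_published = 100_000      # assumption, not from the study

detected_fraud_rate = fraud_retractions / hypothetical_total_published
print(f"Fraudulent retractions: {fraud_retractions:.0f}")
print(f"Detected fraud rate across the field: {detected_fraud_rate:.3%}")
# Roughly 0.17% under this assumption, and that's only the fraud that was caught.
```

Under that invented denominator, detected fraud touches well under one percent of the literature, so the 49.4% figure on its own supports neither extreme reading.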

