From Retraction Watch:
35,000 papers may need to be retracted for image doctoring, says new paper
Yes, you read that headline right.
In a new preprint posted to bioRxiv, image sleuths scanned hundreds of papers published over a seven-year period in Molecular and Cellular Biology (MCB), published by the American Society for Microbiology (ASM). The researchers — Arturo Casadevall of Johns Hopkins University, Elisabeth Bik of uBiome, Ferric Fang of the University of Washington (also on the board of directors of our parent non-profit organization), Roger Davis of the University of Massachusetts (and former MCB editor), and Amy Kullas, ASM’s publication ethics manager — found 59 potentially problematic papers, of which five were retracted. Extrapolating from these findings and those of another paper that scanned duplication rates, the researchers propose that tens of thousands of papers might need to be purged from the literature. That 35,000 figure is double the number of retractions we’ve tallied so far in our database, which goes back to the 1970s.
Wow. I’d like some more explicit information. Ah, here we go:
RW: You extrapolate that if 10% of the MCB papers needed to be retracted for image duplication, then 35,000 papers throughout the literature may need the same. How did you perform that calculation, and what assumptions is it based on?
EB: We extrapolated the results from previous studies to the rest of the literature. In our previous study, in which we analyzed 20,000 papers, we found that 3.8% contained duplicated images. We know that the percentage of duplicated images varies per journal, for a wide variety of reasons (different editorial processes, variable levels of peer review, different demographics of the authors). Since it was calculated on papers from 40 different journals with different impact factors, this percentage serves as a reasonable representation of the whole body of biomedical literature. The 10.1% is the percentage of papers that were retracted in the MCB dataset. Granted, this was a much smaller dataset than the one from the mBio paper, but it was a set that was seriously looked at.
If there are 8,778,928 biomedical publications indexed in PubMed from 2009-2016, and 3.8% contain a problematic image, and 10.6% (CI 1.5-19.8%) of that group contain images of sufficient concern to warrant retraction, then we can estimate that approximately 35,000 (CI 6,584-86,911) papers are candidates for retraction due to image duplication.
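To make the arithmetic concrete, here is a minimal sketch (Python, purely for illustration; the variable names are mine) that reproduces the point estimate from the figures quoted above:

```python
# Back-of-the-envelope check of the extrapolation quoted above.
# All input figures are taken directly from the interview; nothing here is new data.

pubmed_papers = 8_778_928    # biomedical papers indexed in PubMed, 2009-2016
duplication_rate = 0.038     # papers containing a duplicated image (prior 20,000-paper study)
retraction_rate = 0.106      # duplications of sufficient concern to warrant retraction (MCB dataset)

point_estimate = pubmed_papers * duplication_rate * retraction_rate
print(f"Estimated retraction candidates: {point_estimate:,.0f}")  # ~35,361, i.e. "approximately 35,000"

# Naively plugging in the CI endpoints quoted for the 10.6% figure:
for label, rate in [("lower", 0.015), ("upper", 0.198)]:
    print(f"{label} bound: {pubmed_papers * duplication_rate * rate:,.0f}")
# This yields roughly 5,000 and 66,000, not the 6,584-86,911 interval quoted
# above, so the authors' interval presumably reflects a fuller uncertainty
# calculation than this simple three-factor product.
```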
And one of the researchers on the study, Elisabeth Bik, commented:
Errors can be found anywhere, not just in scientific papers. It is reassuring to know that most of these problems are the result of error, not scientific misconduct. Studies like ours are also meant to raise awareness among editors and peer reviewers. Catching these errors before publication is a much better strategy than catching them afterwards. In this current study we show that investing some additional time during the editorial process to screen for image problems is worth the effort, and can save time down the road in case duplications are discovered after publication. I hope that our study will result in more journals following in the footsteps of ASM by starting to pay attention to these duplications and other image problems before they publish their papers.
So this really is a matter of carelessness OR of working on a very difficult enterprise with inadequate tools or methodologies.
It’s still a big number.