A study shows less than a quarter of retractions were the result of honest errors.
When there's something really wrong with a published study, the journal can retract it, much like a carmaker recalling a flawed automobile.
But are the errors that lead to retractions honest mistakes or something more problematic?
A newly published analysis finds that more than two-thirds of biomedical papers retracted over the past four decades were the result of misconduct, not error. That's much higher than previous studies of retractions had found.
"We found something that is very disturbing," Dr. Arturo Casadevall, co-author of a paper looking into this phenomenon that was published Monday by the Proceedings of the National Academy of Sciences, tells Shots. "This kind of stuff has the potential to do damage to science. But we need to expose it to clean our own house."
Casadevall, a microbiologist and immunologist, and his colleagues looked at more than 2,000 biomedical research papers retracted since 1977. They found that more than 67 percent had to be retracted because of fraud, suspected fraud, duplicate publication or plagiarism. Only 21 percent of the retractions they looked at were the result of error.
Casadevall says the reason his team's findings differ so much from previous retraction studies is that his team independently verified why each paper had been retracted.
He says the previous research had relied on retraction notices — explanations published in journals about why studies are being retracted. But, Casadevall says, "when you retract a paper, most journals allow the authors to write the notice." That gives the authors the chance to spin the message.
For example, the authors of a 1993 study published in Science were found to have falsified and fabricated their data. Their retraction notice makes no mention of this, only stating that "some experiments have not been reproducible."
That might technically be true, but it leaves out the fact that the authors' original findings may never have been produced in the first place.
Casadevall's team didn't take the authors' word for it. They brought in information from the federal Office of Research Integrity, as well as from independent media reports. Not only did they find that two-thirds of retracted articles involved misconduct, they also found that the more influential a journal, the larger the share of its retracted articles that involved fraud or suspected fraud.
Casadevall's team verified some of their retraction notices with help from the blog Retraction Watch, created two years ago by health journalists Adam Marcus and Ivan Oransky.
While the economic pressures of conducting biomedical research will always lead some scientists to cut corners, Oransky says journals need to force those scientists to own up to their mistakes. "These unclear, opaque notices really distort the scientific literature," he says. "They don't allow for a full picture of what's happening in science."
The bloggers behind Retraction Watch have seen, perhaps as well as anyone, how scientists can get things wrong. But Oransky says he's optimistic that Casadevall's study will bring about change.
"It's one thing for bloggers to bang on about something and make the same conclusion every week," he says. "But it's another for the peer-reviewed literature with a carefully done, well-constructed study to do the same thing. It's harder to ignore."