Scientists are admitting that a scientific finding that seemed too good to be true was too good to be true. The researchers are retracting a study that claimed you could use genetic tools to predict people's likelihood of living to 100.
Paola Sebastiani and Thomas Perls are both at Boston University. Perls studies centenarians — people who live to be 100 or older. He's not a geneticist, but he is convinced that genes play an important role in how long someone will live, because longevity clearly runs in families. So Perls teamed up with geneticist Sebastiani.
She developed a computer model that used 150 genetic markers — specific bits of DNA scattered around the 23 pairs of human chromosomes — to predict who would be able to join the centenarian club. As they reported in the journal Science, the accuracy of this model was 77 percent.
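The paper did not come with a recipe anyone can rerun from this article, but the general idea of such a model can be sketched. The toy below is not Sebastiani's method: the marker count (150), allele frequencies, group sizes, and the simple log-odds scoring rule are all hypothetical, invented purely to illustrate how a classifier built on genetic markers might separate centenarians from controls and report an accuracy figure.

```python
# Toy illustration (NOT the published model): score people on 150
# simulated genetic markers and classify centenarian vs. control.
import math
import random

random.seed(0)
N_MARKERS = 150      # the study used 150 markers; these are simulated
N_PER_GROUP = 200    # hypothetical sample size

def simulate(freqs):
    # One person's genotype: 1 if the variant is present at a marker.
    return [1 if random.random() < f else 0 for f in freqs]

# Hypothetical frequencies: 30 markers are enriched in centenarians
# (0.5 vs. 0.3); the remaining 120 are identical noise.
control_freqs = [0.3] * N_MARKERS
cent_freqs = [0.5] * 30 + [0.3] * (N_MARKERS - 30)

centenarians = [simulate(cent_freqs) for _ in range(N_PER_GROUP)]
controls = [simulate(control_freqs) for _ in range(N_PER_GROUP)]

# Split each group in half: first half trains, second half tests.
train_c, test_c = centenarians[:100], centenarians[100:]
train_k, test_k = controls[:100], controls[100:]

def freq(group, i):
    p = sum(person[i] for person in group) / len(group)
    return min(max(p, 0.01), 0.99)   # clip to avoid log(0)

# Per-marker log-odds weight: how much more common the variant is
# among (training) centenarians than (training) controls.
weights = [math.log(freq(train_c, i) / freq(train_k, i))
           for i in range(N_MARKERS)]

def score(person):
    return sum(w * g for w, g in zip(weights, person))

# Decision threshold: midpoint between the two training-group means.
mean_c = sum(score(p) for p in train_c) / len(train_c)
mean_k = sum(score(p) for p in train_k) / len(train_k)
threshold = (mean_c + mean_k) / 2

correct = (sum(score(p) > threshold for p in test_c)
           + sum(score(p) <= threshold for p in test_k))
accuracy = correct / (len(test_c) + len(test_k))
print(f"toy model accuracy: {accuracy:.0%}")
```

The point of the sketch is the shape of the claim, not the number it prints: accuracy depends entirely on how informative the markers really are, which is exactly why a flaw in the raw genotype data, as described below, undermines the headline figure.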
That was a year ago. Now, Sebastiani and Perls have retracted the paper making that claim. They have admitted a mistake, an honest mistake, in the way they collected their data.
Greg Cooper, a geneticist at the HudsonAlpha Institute for Biotechnology in Huntsville, Ala., uses genetic analysis similar to what Sebastiani used. "I think this should have been caught at the peer review stage," Cooper says.
Peer review is the process scientific journals use to vet research papers. The journal sends submitted manuscripts to scientists who work in the field and asks them to evaluate the work — not to see whether it is right or wrong, but to make sure it was done according to proper scientific procedures.
A Problem With Peer Review?
Cooper saw the paper right after it was initially published and was instantly suspicious.
"Anybody with significant amounts of experience with analyzing SNP array data — which were the primary source for the raw data that were generated for that study — the first reaction should have been, 'Wow, these SNPs are really interesting. Can we look at the raw data? We need to make sure there's nothing goofy about it,' " Cooper says.
In fact, there was something goofy about the results. One of the systems used to analyze the SNPs — single nucleotide polymorphisms, the single-letter DNA variations the study relied on — had a flaw, a flaw known to most geneticists in the field.
So what happened? The Boston University researchers issued a statement acknowledging technical errors but asserting that they believe their main results are still valid. Science, the journal that originally published the paper, issued a statement noting that it publishes some 800 articles a year and that only a handful are retracted.
But this retraction underscores a bigger problem.
In an email, Science editor-in-chief Bruce Alberts points out that research papers are built on a wide variety of new, highly complex technologies. Finding a team of reviewers with all of the needed expertise is tricky. And how many reviewers are enough to be sure nothing slips through? The answer, Alberts says, is not always clear.
Duke University geneticist David Goldstein is sympathetic to the dilemma of getting peer review right. "I think everybody recognizes that the review process is far from perfect. By and large it works, but it doesn't work every single time," he says.
Goldstein says when the system fails, it usually gets discovered and ultimately corrected.