STEVE INSKEEP, host:
And now we have a story that puts the credibility of scientific papers on the line. Today, scientists are admitting that a finding that seemed too good to be true was in fact too good to be true. The researchers are retracting a study that claimed you could use genetic tools to predict people's likelihood of living to 100 years old.
NPR's Joe Palca explains how it happened.
JOE PALCA: A year ago I did a story about the research of Boston University scientists Paola Sebastiani and Thomas Perls. Perls studies centenarians - people who live to be 100 or more. He's not a geneticist, but he's convinced that genes play an important role in how long someone will live. So Perls teamed up with geneticist Sebastiani. And as he told me a year ago...
Dr. THOMAS PERLS (Scientist): Left it up to Dr. Sebastiani to solve the genetic puzzle.
PALCA: OK. So Dr. Sebastiani, did you solve it?
(Soundbite of laughter)
Dr. PAOLA SEBASTIANI (Geneticist): I think we went a step ahead. Maybe we haven't solved it completely.
PALCA: Sebastiani developed a computer model that used 150 genetic markers - specific bits of DNA scattered around the 23 pairs of human chromosomes - to predict who would be able to join the centenarian club.
Dr. SEBASTIANI: And the accuracy of this model is 77 percent.
PALCA: That was a year ago. Now Sebastiani and Perls have retracted the paper making that claim. They have admitted a mistake, an honest mistake, in the way they collected their data.
The researchers aren't talking, so I called Greg Cooper. He's a geneticist at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Cooper uses genetic analyses similar to the ones Sebastiani used.
Dr. GREG COOPER (HudsonAlpha Institute for Biotechnology): I think that this should have been caught at the peer review stage.
PALCA: Peer review is the process scientific journals use to vet research papers. The journal sends submitted manuscripts to scientists who work in the field and asks them to evaluate the work - not to see if it's right or wrong, but to make sure that it was done according to proper scientific procedures.
Cooper saw the paper after it was published a year ago and was instantly suspicious.
Dr. COOPER: Anybody with significant amounts of experience of analyzing SNP array data, the first reaction should have been, wow, these SNPs are really interesting - can we look at the raw data? We need to make sure that there's nothing goofy about it.
PALCA: In fact, there was something goofy about the results. One of the systems used to analyze the SNPs, as these bits of DNA are called, had a flaw - a flaw known to most geneticists in the field.
So what happened? Well, as I said, the researchers aren't talking, but they issued a statement acknowledging technical errors while asserting that they believe their main results are still valid. Science, the prestigious journal that published the study originally, issued a statement noting that it publishes some 800 articles a year and only a handful are retracted.
But this retraction underscores a bigger problem. In an email, Science editor-in-chief Bruce Alberts points out that research papers are built on a wide variety of new, highly complex technologies. Finding a team of reviewers with all the needed expertise is tricky. And how many reviewers are enough to be sure nothing slips through? The answer, says Alberts, is not always clear.
Duke University geneticist David Goldstein is sympathetic to the dilemma of getting peer review right.
Dr. DAVID GOLDSTEIN (Duke University): The review process is far from perfect. By and large it works, but it doesn't work every single time.
PALCA: Goldstein says when the system fails, it usually gets discovered, and ultimately corrected.
Joe Palca, NPR News, Washington.
NPR transcripts are created on a rush deadline by a contractor for NPR, and accuracy and availability may vary. This text may not be in its final form and may be updated or revised in the future. Please be aware that the authoritative record of NPR's programming is the audio.