Scientists Are Not So Hot At Predicting Which Cancer Studies Will Succeed

A scientist tested his peers' ability to pick which cancer experiments would pan out. They failed more often than not, which doesn't say much for intuition or efficiency in the scientific process.


ROBERT SIEGEL, HOST:

Science relies on the careful collection and analysis of facts. Science also benefits from human judgments. But that intuition isn't necessarily reliable. A new study finds that scientists did a poor job forecasting whether a successful experiment would work on a second try. NPR's Richard Harris has the story.

RICHARD HARRIS, BYLINE: Part of the art of science is reading someone else's work and deciding what's likely to be true and what's likely to be a mistake. Scientists who chase down bum leads are often wasting their time, so Jonathan Kimmelman at McGill University says it would be great if scientists could more reliably pick out winning ideas.

JONATHAN KIMMELMAN: There are lots of different candidates for drugs that you might develop or different research programs you might want to invest in. And what you want is some way to discriminate between those investments that are going to pay off down the road and those that are just going to fizzle.

HARRIS: Kimmelman realized he had a great opportunity to study scientific forecasting. Other researchers are in the midst of a huge project to replicate dozens of high-profile cancer experiments to see if they are accurate. They've written down the exact protocols they are using.

KIMMELMAN: This was really an extraordinary opportunity where we had experiments that were pretty much locked down in terms of design so that when we ask people to predict what the results were going to be, we could look at whether or not their beliefs were concordant with what the results showed.

HARRIS: Kimmelman and colleagues asked nearly 200 professors, postdoctoral fellows and graduate students to forecast the results from six of those repeated experiments. The studies have now been done, and the results are in. How'd the scientists do? According to a report published in PLOS Biology, not so hot.

KIMMELMAN: Most researchers overestimated the ability of repetition studies to have effects that were as significant as the original study.

HARRIS: And that overoptimism was pretty much across the board.

KIMMELMAN: There wasn't really a big difference between trainees and experts.

HARRIS: So what do you make of that? What do you think is going on here?

KIMMELMAN: It's hard to know exactly what's going on. It's possible that scientists overestimate the truth or the veracity of those original reports.

HARRIS: Or it's possible that the scientists were simply too optimistic that independent labs would be able to follow an experimental protocol and get it to work properly. Clearly optimism is an important trait for a scientist since most experiments on most days don't actually yield exciting results. But optimism is not the best trait for scientific forecasters, at least not in this particular experiment. I called up Taylor Sells, a second-year graduate student at Yale University, to find out how she ended up being one of the very best forecasters in Kimmelman's study.

Do you think you have some special mojo?

TAYLOR SELLS: I don't think so. I don't know if I believe in special mojo.

HARRIS: Actually, she said her technique was quite simple.

SELLS: Inherently as a scientist, we're kind of taught to be very skeptical of even published results. And reproducibility has been a very important topic in science in general. So I kind of approached it from a very skeptical point of view.

HARRIS: And it's not as though she had special insights into the actual experiments. In fact she hadn't done those sorts of experiments herself.

SELLS: Completely out of my normal range of science for sure.

HARRIS: What Sells draws from her experience in the lab is simply knowing how hard it is to get the same results in repeated experiments.

SELLS: We often joke about the situations under which things do work. It's like, oh, it has to be raining and it's a Tuesday for it to work properly, and that is something we think about a lot.

HARRIS: And that's something she thinks about a lot when she reads other scientists' papers and makes a judgment about how much to trust the results. The beauty of science is that truth comes out in the long run. But the process could be more efficient if scientists could do a better job up front, picking the diamonds from the dross. Richard Harris, NPR News.

(SOUNDBITE OF MIEUX'S "RUST")

Copyright © 2017 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
