
In Psychology And Other Social Sciences, Many Studies Fail The Reproducibility Test

ARI SHAPIRO, HOST:

The world of social science got a rude awakening a few years ago when researchers concluded that many of the studies in this area appear to have deep flaws. Some of those same researchers now report that it's a problem even for the most prestigious scientific journals. But their study also finds that some scientists are surprisingly good at anticipating which studies are likely to stand the test of time. NPR's Richard Harris reports.

RICHARD HARRIS, BYLINE: Science is a process of exploring the unknown. So Brian Nosek at the Center for Open Science and the University of Virginia says we should not expect every result to be repeatable in someone else's lab. The challenge is separating the good from the bad.

BRIAN NOSEK: A substantial portion of the literature is reproducible. We are getting evidence that someone can independently replicate. And there is a surprising number that fail to replicate.

HARRIS: Nosek wanted to see how that plays out in the journals where scientists often take their flashiest and most provocative findings, Science and Nature. Nosek and his far-flung colleagues now report that of the 21 social science papers published in those journals over a recent five-year span, 13 checked out, and eight apparently did not.

One of the eight studies that failed this test came from the lab of Will Gervais. He and a colleague at the University of British Columbia ran a series of experiments to see whether people who are more analytical are less likely to hold religious beliefs. In one test, undergraduates looked at pictures of statues.

WILL GERVAIS: So half of our participants looked at a picture of the sculpture "The Thinker," where, you know, here's this guy engaged in deep, reflective thought. And in our control condition, they'd look at the famous statue of a guy throwing a discus.

HARRIS: People who saw The Thinker expressed more religious disbelief. But Gervais, now on the faculty of the University of Kentucky, recognizes that his experiment was really quite weak.

GERVAIS: Our study in hindsight was outright silly.

HARRIS: But what most interested him about the new study was a twist: several hundred social scientists were asked in advance to predict which studies would pan out and which ones wouldn't.

ANNA DREBER: They're taking bets with each other against us.

HARRIS: Anna Dreber is at the Stockholm School of Economics and a co-author of the new analysis, which is published in Nature Human Behaviour. She says those forecasts were spot on.

DREBER: So these researchers were very good at predicting which studies would replicate. So I think that's great news for science.

HARRIS: Dreber says if you can get panels of experts to weigh in on exciting new results, the field might be able to spend less time chasing faulty conclusions known as false positives.

DREBER: A false positive result could make other researchers and the original researcher spend lots of time and energy and money on results that turn out not to hold. And that's kind of wasteful for resources and inefficient. So the sooner we find out that a result doesn't hold, the better.

HARRIS: This is a very intriguing idea, but Jonathan Kimmelman at McGill University says it may be limited to a group of scientists with particular skills. When he's asked medical researchers to make predictions about studies, the forecasts have generally flopped.

JONATHAN KIMMELMAN: That's probably not a skill that is widespread in medicine.

HARRIS: Rather than just detecting problems in already completed research, scientists like Will Gervais are thinking hard about the incentives that encourage them to do weak, small studies in the first place.

GERVAIS: The way to get ahead and get a job and get tenure is by publishing lots and lots of papers, and it's hard to do that if you're running fewer studies. But in the end, I think the better way to go is to kind of slow down our science and be more rigorous up front.

HARRIS: He says that's the approach he is taking. And he sees it as part of a broader cultural change in social science that's aiming to make the field more robust. Richard Harris, NPR News.

Copyright © 2018 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
