Lots of psychology studies fail to produce the same results when they are repeated. Does that mean we shouldn't trust science?

When Great Minds Think Unalike: Inside Science's 'Replication Crisis'
Hidden Brain takes a look at the 'Replication Crisis' in scientific experimentation. Daniel Fishel for NPR

It started with a report published last year entitled "Estimating the Reproducibility of Psychological Science." It's a rather unassuming title given the amount of hand-wringing, head-scratching, and eye-rolling it's incited in what's come to be known as psychology's "replication crisis."

The report was authored by psychologist Brian Nosek and hundreds of other researchers. Together, they set out to replicate 100 psychology experiments published in three of the discipline's tippy-top journals. Their question: How many studies would hold up when someone else ran the same experiment?

After following the steps of the original scientists—recruiting subjects, administering tests, running statistical analyses—they came up with an unsettling figure: just 39 of the 100 experiments they ran produced the same results as the originals. In other words, nearly two-thirds of psychology studies—on topics ranging from fear to teaching math—failed to replicate.

The so-called "crisis" seemed to be rejected as soon as it was declared, with a swell of articles coming to psychology's defense under headlines like "Failure Is Moving Science Forwards," "The Crisis In Social Psychology That Isn't," and, most bluntly, "Psychology Is Not In Crisis."

What's going on? This week, Hidden Brain looks at the "replication crisis" by zooming in on one seminal paper that was the focus of two replication efforts. One succeeded in replicating the original finding, the other failed.

The original study, authored by Margaret Shih, Todd Pittinsky, and Nalini Ambady in 1999, found that Asian women performed worse on a math test when primed to think about their female identity, but better when they were primed to think about their Asian identity.

Nearly two decades later, Nosek and the Reproducibility Project noticed that this study, which by then had been widely disseminated in textbooks and psychology courses, had never itself been replicated. So he assigned two teams to run it again—one in Georgia and the other in California. They came back with different results. And that gets at one of the biggest questions explored in this episode: When scientific studies come to different conclusions, what should we think of as true?

Shankar talks to psychologists Dan Gilbert and "mathematical social scientist" Eric Bradlow about what we can learn from the replication crisis and how to think about scientific truths.

The Hidden Brain Podcast is hosted by Shankar Vedantam and produced by Kara McGuirk-Alison, Maggie Penman and Max Nesterak. Special thanks this week to Daniel Shuhkin. To subscribe to our newsletter, click here. You can also follow us on Twitter @hiddenbrain, @karamcguirk, @maggiepenman and @maxnesterak, and listen for Hidden Brain stories every week on your local public radio station.