Catholics, Contraception and the Consequences of Poor Poll Reporting

An NPR report failed to cite Planned Parenthood as the sponsor of a poll on the birth control insurance mandate and interpreted the results questionably. A second report repeated an error in a Guttmacher press release on birth control use by Catholic women and never cited Guttmacher. Critics charged liberal bias. What happened? What's the impact?

We are in a politically polarized age of alternative realities: biases often count more than facts. So when NPR does a story about a politically sensitive topic, even the smallest error or omission is taken by one group or another as proof of NPR's own bias.

This is the case of two recent stories about Catholics, contraceptives and government policy. Social conservatives in the blogosphere were right in catching NPR mistakes in the citing of a public opinion poll by Planned Parenthood and of a study by the Guttmacher Institute, both on contraception and Catholics.

I don't find "liberal bias," as some critics charged. But I did find unacceptably loose citing of polls, aggravated by the repetition of an error in a Guttmacher press release. Both organizations are known for their service and research, but because they also advocate, reporters need to be as rigorous in reporting their studies as they would be with any other source.

The stories dealt with the confrontation between the Obama administration and Catholic bishops over a planned mandate requiring church-affiliated universities and hospitals to offer free birth control as part of their health care plans. The requirement extends to most employers' health plans. The administration has since compromised on the church institutions, but the debate pitting religious freedom against women's access to birth control rages on. In a close vote last week, the Senate rejected a measure to exempt any employer who claimed moral opposition to the insurance requirement.

On All Things Considered Feb. 7, reporter Scott Horsley cited a poll released that day that said 53 percent of Catholic voters supported the free contraception mandate. Horsley reported that the survey was conducted by Public Policy Polling. What he didn't add on air was that it was sponsored by Planned Parenthood, which backs the mandate.

Having paid for many polls myself on behalf of The Wall Street Journal, I know that the questions are worked out between the client and the professional polling firm, giving the sponsor final say over exactly what is being measured. Influence over the question obviously gives you influence over the answer. I am not suggesting that Planned Parenthood loaded the dice in this poll, but its role should have been made clear.

Horsley readily admits that he was in error. "I don't want to call it inadvertent or an oversight, which suggests I didn't notice Planned Parenthood's involvement at the time," he wrote to me in an email. "But on further reflection, I agree that's salient information."

Bias can be exhibited by a journalist failing to question information from sources she or he favors. But Horsley, a highly respected professional, told me that three factors influenced him. One, Public Policy Polling is reputable. Two, the question seemed "neutral" to him. (See below.) And three, a second poll released at the same time seemed to have similar results. This poll, by the Public Religion Research Institute, a non-profit polling institute that does not do advocacy, found that 52 percent of Catholics agreed that religiously affiliated colleges and hospitals should be subject to the mandate.

"I would take issue with anyone who tries to argue that Planned Parenthood's sponsorship somehow negates the findings of a reputable polling organization, especially when those findings are consistent with other polls," Horsley said. "I simply acknowledge that it would be better to let the listener, armed with all relevant information, make that decision for himself/herself."

Truth in advertising: I myself am Catholic. I also have no reason to believe that Horsley committed anything more than an honest mistake. NPR's online versions of the story, meanwhile, correctly named Planned Parenthood's role. There clearly was no pre-ordained agenda to hide it.

But hardly helping to make that case was an overlooked weakness in the poll itself.

The sample was too small to hang much on. According to Public Policy Polling, its poll results for Planned Parenthood were based on a sample of 359 Catholic voters and had a margin of error of +/- 5.2 percent, which is large. The respondents favored the insurance mandate on Catholic hospitals and universities by a margin of 53 percent to 45 percent. This may look like a wide gap. But remember, a +/- 5.2 percent margin of error means that the results could potentially be reversed: 48 percent in favor of the mandate, 50 percent against it.
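The reported margin of error, for what it's worth, is consistent with the standard formula for a proportion at 95 percent confidence, given a subsample of 359. A quick sketch (the function name here is mine, not the pollster's):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion p estimated from n respondents."""
    return z * sqrt(p * (1 - p) / n)

# The Public Policy Polling subsample of Catholic voters
moe = margin_of_error(359)
print(round(moe * 100, 1))  # 5.2 — matching the reported +/- 5.2 percent
```

The same formula explains why subgroup results are shakier than the topline: cut the sample in half and the margin of error grows by roughly 40 percent.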

Horsley, to his credit, used fudge language by saying that the poll "suggests" most Catholic voters agree. But he then interviewed the poll director, Tom Jensen, who said on air of both Catholics and women: "These are obviously groups that are going to be really key for the election this fall — swing voter groups. And they're all quite supportive of the birth control benefit."

It's not clear that Catholic voters, in fact, are. Let's look again at the second poll, conducted by the Public Religion Research Institute. While a slight majority of Catholic respondents in general favored the insurance mandate for religiously affiliated colleges and hospitals, 52 percent of the ones identified as Catholic voters actually opposed the mandate. This is the opposite of the Planned Parenthood result, which also measured voters. The margin of error in this second poll is +/- 7.5 percent.

I took these differences to one of the leading authorities on health policy polling, Robert Blendon, a professor and head of the Opinion Research Program at the Harvard School of Public Health. We were once colleagues at Harvard.

He responded: "When results are close to the margin of error they should be reported as 'people being divided.' We also suggest that you should always report the margin of error or confidence level for the entire survey, and if you are focusing your discussion on a particular subgroup, you should report the margin of errors for that subgroup."

Blendon and his team said they prefer the design of a poll on the same subject released one week later, on Feb. 14, by Pew Research Center.

It found that among the respondents who have heard of the issue, 55 percent of Catholics—in general, not just voters—say that the religious institutions should be exempted from the insurance mandate to give free birth control. This is contrary to the finding of the Public Religion Research Institute about all Catholics. And if Catholic voters are even more opposed to the mandate than Catholics overall, as the religion institute poll indicates, then the difference with the Planned Parenthood poll is a gaping sinkhole.

A direct comparison of the three polls is unfair, because they were done at different times with different methodologies. Still, the difference underlines the importance of margin of error. Pew, for example, was still careful to conclude that Catholic opinion was "divided," correctly taking into account its large margin of error of 6.5 percent.

Then, there is the wording of the question. This obviously makes a difference. But designing questions is as much an art form as a science, and many experts can disagree on which wording is most fair. So, after toiling for days without coming to a clear conclusion on the questions, I am going to punt and let you decide for yourself. I have posted the relevant questions from the three polls. I will join you in the discussion.

The second disputed story on the topic of Catholics and birth control aired Feb. 10 on Morning Edition. Reporter Don Gonyea turned to the related question of actual birth control use by Catholic women. He said, citing no one: "In fact, 98 percent of Catholic women use birth control at some point in their lifetimes."

The source for that "fact" is the Guttmacher Institute. Many other reporters throughout the news media cited the same number in more or less the same way. But if you think about it—if you think about virgins, the extremely devout, older women, etc.—the 98 percent figure seems highly unlikely. And it is. It comes from an error in the press release of the report's findings.

The data inside the report itself shows that the 98 percent figure refers to Catholic women in the childbearing years, between the ages of 15 and 44, who have had vaginal intercourse at least once in their lives. It did not include data for Catholic women in general. The authors of the study have since issued a clarification.

The data itself is mined from the National Survey of Family Growth, a large, detailed, face-to-face survey done between 2006 and 2008 by the federal Centers for Disease Control and Prevention, a gold standard of sources. The margin of error on this particular question was +/- 1 percent, according to Guttmacher researcher Rachel Jones.

Ron Elving, the supervising senior editor who oversees the Washington desk, where Gonyea works, recognized the repeat of the press release mistake: "It would have been better to say 'an overwhelming majority of Catholic women of child-bearing age have used contraception.' This would not have the ring of the specific statistic but would not raise the questions of methodology or cast doubt on its accuracy."

The study, too, should have been cited, and cited as a poll with a margin of error. One percent is a small margin, but as Rick Edmonds, a polling expert at the Poynter Institute, a journalism training center in St. Petersburg, FL, told me, "A survey is not a fact."

Still, nothing indicates bad intention. Gonyea's context, moreover, was fair. Immediately before citing the Guttmacher number on air, he played a soundbite from a Catholic doctor at a Republican political rally. The doctor, eloquently sharing the bishops' position, said: "It goes down to the very center of theology, Catholic theology, or teaching about the human person."

But, wait, there is still a separate consideration in all this that has nothing to do with bias. Did the errors have any meaningful consequence?

For the first story, for example, did NPR mislead Catholics on public opinion such that it fueled a bandwagon effect supporting the birth control mandate in opposition to the bishops?

We will never know for sure. But the later Pew study suggests that the Catholic tide of opinion moved in favor of the bishops' side anyway, though the margin of error prevents any hard conclusion.

As for the story mentioning the Guttmacher study, a second, unreported part of that study is instructive. This part looked at Catholic women between the ages of 15 and 44 who are currently sexually active, meaning they say they have had vaginal intercourse at least once in the last three months. The study further separated out women who were pregnant, trying to become pregnant, or were postpartum. The sample was reduced, in other words, to those women who are at risk of an unintended pregnancy—meaning the very women about whom the whole debate swirls.

The results showed that between 83 percent and 87 percent of these Catholic women used artificial birth control at the time of the study (2006-2008). The difference depends mostly on how to classify penis withdrawal by the male partner before ejaculation. Neither number is 98 percent, but both are so overwhelming that the difference hardly matters.

In case you're curious, the breakdown among these women, who responded yes-no to different birth control methods, was as follows: sterilization (32 percent), pills (31 percent), condoms (15 percent), IUD (5 percent), and other, mostly withdrawal (4 percent). A mysterious 11 percent said they used no method but still didn't want to get pregnant. Two percent specifically said they used "natural family planning." The latter is the church's officially prescribed method, and mostly includes rhythm and temperature methods.
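If I have read the breakdown correctly, the numbers reconcile with the 83-to-87-percent range cited above: the two figures differ by exactly whether withdrawal is counted as artificial birth control. A quick arithmetic check (the category labels are my shorthand):

```python
# Current contraceptive use among at-risk Catholic women, per the Guttmacher study (2006-08)
methods = {
    "sterilization": 32,
    "pills": 31,
    "condoms": 15,
    "IUD": 5,
    "other, mostly withdrawal": 4,
}
no_method = 11                 # used nothing but didn't want to get pregnant
natural_family_planning = 2    # the church's prescribed method

# The categories account for the whole at-risk group
assert sum(methods.values()) + no_method + natural_family_planning == 100

# Artificial birth control use: excluding vs. including the withdrawal category
without_withdrawal = sum(v for k, v in methods.items() if "withdrawal" not in k)
with_withdrawal = sum(methods.values())
print(without_withdrawal, with_withdrawal)  # 83 87
```

Either way, as noted, the figure is overwhelming; the four-point swing only shows how classification choices move a headline number.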

The complete CDC questions used by Guttmacher also are available for you to tear apart if you wish.

What does all the above mean? To be fair, I have to admit that I have been going around for days trying to fully understand these many polls, and I still fear I have gotten something wrong. Reporters on deadline don't have that time luxury. But as we go deeper into an election year, polls will only play a bigger role in politics and the coverage. Some polls are manipulative by design, and almost all press releases on polls exaggerate by trying to draw headline-making conclusions. Reporters and editors have to be as careful as possible to define just what is being measured, how it compares to other measures, and what the margin of error is.

This, too, must also be remembered: Polls are indications of public sentiment, not hard facts or predictions. Proceed with caution.

Stephannie Stokes and Andrew Maddocks contributed to this post.

More from the Ombudsman:

The Contraception Mandate: Where Are The Women?

On-Air Warnings: Sex, Violence, Children and Common Sense

Continue the discussion with me on Twitter or at my Facebook page.