Here's a common image of science: Sometimes science gets things wrong, but the scientific process is self-correcting.
The reality is a little messier, of course, but science is still the best system we've got for learning about the natural world. In the long run, the scientific community is likely to converge on scientific theories that are roughly true — or, perhaps, "empirically adequate," or good by some other suitable standard. At least, that's my working assumption, and it might even be true — or at least empirically adequate.
Can the same be said for the blogosphere? In other words, is the online community of bloggers and comment writers likely to converge on views that are true, or reasonable, or useful, or valuable by some other metric? Are errors self-correcting?
This is my 100th post for 13.7, so the role of blogs in public discourse has been on my mind. Unfortunately, my experience so far suggests the answer is "probably not." The blogosphere doesn't seem to be self-correcting. In fact, it often seems to encourage and magnify outliers, with content passed along because we "like" it, not because its careful documentation and rigorous argumentation make us swoon.
It isn't all bad. One study investigating the accuracy of claims about breast cancer on Internet message boards, for example, found that posts were overwhelmingly accurate — fewer than 1 percent of those examined contained false or misleading statements, and of those, 7 out of 10 were corrected by other users in under five hours. Other kinds of "fact checking" have also arguably improved with the rise of the Internet, though the spread of misinformation typically outruns its correction.
Yet, verifying the accuracy of factual claims is one thing and converging on good theories is another — whether the theories are scientific, political, moral or from some other domain of human interest.
One worry is this: It's extreme and strongly expressed views that often get the page views, motivate comments and drive the content, not moderate or nuanced views expressed with qualifications and reserve. The psychologist Paul Bloom signaled this point in prefacing a remark on the wonderfully provocative podcast Very Bad Wizards: "I know for anything that ends up on the Internet, it's really weird to say that I don't know the answer."
He went on to do something radical — confess his uncertainty (in this case, about the relationship between free will and moral responsibility).
In science, there's a parallel concern that research might be driven by citation counts and other forms of recognition. Bold claims typically draw more attention, and it's the new and unexpected that usually makes it into the highest-profile journals. This can incentivize certain kinds of research and publication over others, not always for the good. In fact, there's some evidence that journals with a higher "impact factor" — a measure of how frequently the papers in a journal are cited — are more likely to publish papers that are eventually retracted (though it's worth emphasizing that retractions are rare across the board: such journals have a higher rate of retraction, but still a very low one).
On the one hand, this hints at the dangers of bold and exciting claims: They may be more likely to be fraudulent or wrong. On the other hand, this illustrates scientific self-correction at work: Retraction is a way to correct errors. Science is ultimately answerable not only to human scientists and to the incentive structure of science, but also to the data. Not so for most of what happens in the blogosphere.
A second worry about the blogosphere's ability to self-correct comes from the fact that the propagation of ideas is often driven by agreement, not by assessments of quality. We "like" things on Facebook, favorite them on Twitter, upvote them in comments, share or retweet them in assorted ways, and so on. These public signals are typically taken as endorsements and as reflections of our online identity, not as marks of how compelling we found an argument or how strong we found the evidence.
The difference between "liking" something and communicating something about its quality may seem inconsequential, but formal models of cumulative cultural evolution — which aim to characterize the conditions under which knowledge accrues as it's passed from one generation to the next — suggest it could be key. One way to guarantee that successive generations get closer to the truth is for each generation to pass on its beliefs — its hypotheses with their corresponding probabilities — to the next. Merely getting to observe how the previous generation behaved — what they "liked," for example — isn't enough.
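For readers who like to see the intuition made concrete, here is a minimal toy simulation of that contrast. It is my own illustrative sketch, not any particular published model: a lineage of learners tries to estimate a hidden probability, and each generation either inherits its parent's full tally of evidence (its "beliefs") or sees only the parent's binary thumbs-up (its "like") before collecting a little evidence of its own.

```python
import random

random.seed(0)
TRUE_P = 0.7         # hidden truth each generation tries to estimate
FLIPS_PER_GEN = 5    # private evidence each generation collects
GENERATIONS = 200

def run(pass_beliefs: bool) -> float:
    """Return the final generation's estimate of TRUE_P."""
    heads, tails = 1, 1  # uniform Beta(1, 1) prior
    for _ in range(GENERATIONS):
        data = [random.random() < TRUE_P for _ in range(FLIPS_PER_GEN)]
        if pass_beliefs:
            # Inherit the parent's full evidence counts, then add new data:
            # evidence accumulates across the whole lineage.
            heads += sum(data)
            tails += FLIPS_PER_GEN - sum(data)
        else:
            # Observe only the parent's binary endorsement (a "like"),
            # worth a single pseudo-observation; everything else resets.
            parent_liked = heads / (heads + tails) > 0.5
            heads = 1 + int(parent_liked) + sum(data)
            tails = 1 + int(not parent_liked) + FLIPS_PER_GEN - sum(data)
    return heads / (heads + tails)

print(run(pass_beliefs=True))   # estimate hones in near 0.7
print(run(pass_beliefs=False))  # estimate stays noisy; nothing accumulates
```

In the first condition the lineage pools a thousand observations and its estimate tightens around the truth; in the second, each generation effectively starts over with a handful of observations plus one inherited "like," so the estimate never improves no matter how many generations pass.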
The blogosphere, like science, is an evolving human institution. It may change. And it may be that I've mischaracterized it here. In fact, I hope I'm being much too pessimistic. If that's so, I expect to see the error corrected soon.
Tania Lombrozo is a psychology professor at the University of California Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what Tania is thinking on Twitter: @TaniaLombrozo