Consider two very different views about the human mind.
In the first view, people are like scientists. They go about the world gathering data, constructing theories and using those theories to guide their interactions with the world. As new evidence comes in, they revise their beliefs accordingly.
In the second view, people are more like trial lawyers. They already know what they want to conclude (innocent or guilty, pro or con), and they go about seeking and construing evidence to favor that conclusion. Rather than matching their beliefs to the evidence, they match the evidence to their beliefs.
There's research to support each of these views. For instance, we know that even young children conduct "experiments" and gather "data" to update their beliefs about the way the world works and that, under some conditions, adults are pretty sophisticated probabilistic reasoners.
Yet, we also know that people engage in motivated reasoning: They let the conclusions they hope to reach influence how they gather and evaluate evidence. Among other things, this can contribute to confirmation bias: the tendency to seek and favor evidence that supports what one already believes.
These phenomena and others have led some to question the idea that people are "naturally" scientific. One alternative view claims that human reasoning isn't first and foremost about getting things right, but is instead about persuading others to share our views. Others propose that people are "moralists," driven to preserve their sacred beliefs or affirm their cultural identities — even when the data point in a different direction.
So which view is right: scientist or trial lawyer?
For better and for worse, the answer is probably "both." People are capable of impressive scientific thought (indeed, the best scientists are all people!). And at the same time, people are quite capable of fooling themselves into thinking the evidence does support their beliefs, even when a more objective assessment would suggest otherwise. Pessimists can readily point to ways in which these tendencies besmirch politics, hinder policy and interfere with everyday decision-making.
But optimists can point to a critical respect in which we might just be like scientists. Science is continually reinventing itself: We develop new statistical tools, better practices and better theories. Science isn't only self-correcting in the sense that false claims are eventually weeded out, but also in the sense that scientific methods are themselves under scrutiny and subject to continual revision.
As one example, some scientists have recently adopted a practice called "blind analysis," which guards precisely against the kinds of confirmation biases that could otherwise influence scientific research. With blind analysis, scientists make all decisions about how data will be analyzed prior to seeing the data itself. As a consequence, these decisions can't be guided by their expectations or preferences about the conclusions the data will ultimately support.
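To make the idea concrete, here is a minimal sketch in Python of one common form of blind analysis: hiding the data behind a secret offset while the analysis procedure is developed, then removing the offset only once the procedure is frozen. The function names and numbers are purely illustrative, not drawn from any particular study.

```python
import random

def blind(data, seed):
    """Add a hidden random offset so the analyst can't see the true values."""
    rng = random.Random(seed)
    offset = rng.uniform(-10, 10)
    return [x + offset for x in data], offset

def analysis(data):
    """The analysis procedure, fixed BEFORE unblinding: here, just the mean."""
    return sum(data) / len(data)

true_data = [4.8, 5.1, 5.3, 4.9]          # measurements (illustrative)
blinded, offset = blind(true_data, seed=42)

# The analyst develops and debugs the procedure on blinded data only,
# so expectations about the "right" answer can't steer any choices...
blinded_result = analysis(blinded)

# ...and unblinds exactly once, after every analysis decision is locked in.
result = blinded_result - offset
```

Because the offset shifts every value by the same amount, subtracting it at the end recovers the true mean; what the blinding prevents is tweaking the procedure mid-stream to nudge the answer toward a preferred conclusion.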
If we're like scientists in this sense — in the sense that we can recognize problems with our thinking and develop better practices, at least some of the time — then there's room for a more optimistic view of the human mind. Instead of fooling ourselves into believing what we expect or want to believe, we can develop better ways to get ourselves to make the best decisions — be it through education, artful nudges or better technology.
What would politics, policy and everyday decision-making look like if we succeeded?
Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo