How do you figure out if a social program to prevent homelessness does any good?
If you're New York City, you take a group of people who qualify for the program, and randomly assign them to two groups: One group gets to participate in the program (described here), the other does not.
If significantly fewer people in the participating group wind up homeless, you can reasonably conclude that the program works.
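The mechanics of that comparison can be sketched in a few lines of code. This is a purely hypothetical simulation, not based on any numbers from the NYC study: the base homelessness rate and the program's effect size below are made-up parameters for illustration.

```python
import random

random.seed(0)

# Hypothetical parameters -- NOT from the actual NYC study.
def run_trial(applicants, base_rate=0.30, true_effect=0.10):
    """Randomly split applicants into a program group and a control
    group, then compare the share of each that winds up homeless."""
    random.shuffle(applicants)          # random assignment is the key step
    half = len(applicants) // 2
    program, control = applicants[:half], applicants[half:]

    def homeless_rate(group, rate):
        # Simulate outcomes: each person becomes homeless with probability `rate`.
        return sum(random.random() < rate for _ in group) / len(group)

    control_rate = homeless_rate(control, base_rate)
    program_rate = homeless_rate(program, base_rate - true_effect)
    return control_rate, program_rate

control, program = run_trial(list(range(1000)))
print(f"control group: {control:.1%} homeless, program group: {program:.1%} homeless")
```

Because assignment is random, the two groups should look alike in every respect except the program itself, so a gap in outcomes can be attributed to the program rather than to differences in who signed up.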
That process -- described in a front-page story in this morning's NYT -- is, not surprisingly, controversial.
"They should immediately stop this experiment," the Manhattan borough president told the NYT. "The city shouldn’t be making guinea pigs out of its most vulnerable."
This sort of randomized test is central to the way medicine works. If you have an unproven drug, you divide a bunch of patients into two groups. One gets the drug, one gets a placebo.
Of course, the case for this kind of test seems more compelling for a drug than for a social program: An unproven drug seems more likely to harm patients than an untested social program seems likely to harm people at risk of becoming homeless.
Still, this idea of doing studies to test social programs is gaining ground. The Department of Housing and Urban Development is randomly assigning people in some homeless shelters to different programs, in an effort to figure out which programs work best.
Economists, too, are moving toward doing more controlled experiments.
We did a story earlier this year on an economist who ran an experiment to try to figure out how to reduce corruption among road builders in Indonesia.
Perhaps the most famous experimental economist is Esther Duflo of MIT. Here's an excerpt from a recent New Yorker profile of Duflo:
...Duflo and her colleagues are sometimes referred to as the randomistas. They have borrowed, from medicine, what Duflo calls a "very robust and very simple tool": they subject social-policy ideas to randomized control trials, as one would use in testing a drug. This approach filters out statistical noise; it connects cause and effect. The policy question might be: Does microfinance work? Or: Can you incentivize teachers to turn up to class? Or: When trying to prevent very poor people from contracting malaria, is it more effective to give them protective bed nets, or to sell the nets at a low price, on the presumption that people are more likely to use something that they've paid for?
...Randomization "takes the guesswork, the wizardry, the technical prowess, the intuition, out of finding out whether something makes a difference," she told me. And so: in the Kenya trial, the best price for bed nets was free.