We talk a lot about the public value of social scientific research, but sometimes it seems we’re either preaching to the choir or our sermons are falling on deaf ears. Perhaps what we really need is ongoing dialog and debate between the true believers and the skeptics. For a piece that could help push toward that kind of exchange, check out this recent New York Times “Opinionator” piece from Gary Gutting, a philosophy professor at Notre Dame.
As the title suggests, Gutting’s piece poses the question of how reliable social scientific research is when it comes to informing real-world public policy. His answer: not as reliable as we might think or wish. Part of the problem is that we often fail to distinguish between early, preliminary tests and more definitive studies. Far more problematic is the fact that the knowledge and information in the social sciences are not as reliable as we might hope. Worse, prediction is where the social sciences really struggle. At the root of our inability to guide and predict from our research, according to Gutting, is the fact that the social world is so complex it doesn’t lend itself to the kind of randomized, controlled experimentation that is the hallmark of so much of the best research in the natural and physical sciences.
These ideas are inspired and informed by a new book called Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society by Jim Manzi. While I haven’t read the book yet (and am a bit skeptical about trying to imitate the natural science model), I’m especially interested to see what my editorial partner Chris Uggen thinks. Chris is, after all, constantly pushing the value of controlled and/or randomized experiments in our field.
Anyway, since that exchange is still to come, I’ll give the last word, for the moment, to Gutting, in the hope that it will be the first step toward further reflection and exchange:
My conclusion is not that our policy discussions should simply ignore social scientific research. We should, as Manzi himself proposes, find ways of injecting more experimental data into government decisions. But above all, we need to develop a much better sense of the severely limited reliability of social scientific results. Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.