Social media feeds are like carnival money booths: we snatch away greedily as the links swirl past, but we’re rarely enriched by the experience. In the rush to process so much so quickly, we’ve become lousy filters for one another – recommending “great articles” that ain’t so great by social science standards.
Many rapidly circulating stories offer grand assertions but paltry evidence about the social world. It seems silly to direct much intellectual horsepower at every li’l item whooshing past (why, that Upworthy post needs an interrupted time-series design!). So people just hit the “thumbs up” button if they like the sentiment and send it down the line. Passing along such blurbs can seem like a modern equivalent of the kindly/nosy relative who sent us Dear Abby clippings in the newsprint era. Yet there’s a danger in indiscriminate recommendations: they can subvert our authority as experts. In my case, I’ve developed a set of policy preferences on crime and economic issues, which I adjust in response to new evidence. If I start endorsing weak studies just because they affirm my preferences or prejudices, then I’d rightly be considered a hack.
As conservatives like to remind progressives — from the comfort of their thin-paned glass houses — there’s a big honking gap between the truth about the world and the truth we’d like to believe about the world. Accordingly, there’s a big honking gap between a “great study” and a “great sentiment” that neatly aligns with our views. And, unlike your kindly/nosy relative, good social scientists have a real responsibility to evaluate the quality of the evidence we cite – especially when we claim to be experts on a matter.
Sometimes we forget that social science provides mighty tools and deep training in evaluating evidence. For example, any good sociologist should have a pretty good sense of whether a given sample is likely to be representative; whether a design is best suited for making causal, descriptive, or interpretive claims; whether to gather data from individuals, groups, or nations in making such claims; and how to make sense of complex processes that unfold dynamically across all these levels. But while we might closely and carefully scrutinize research methods in our professional work, we seem to get beer goggles whenever a sexy story flits past on Facebook.
When I suspect I might be playing too fast and loose with such stories, I use a three-step approach to consider the evidence:
- Restate the central empirical claim (e.g., raising the minimum wage reduces crime)
- Identify the theory and evidence cited to support that claim (e.g., a simple plot showing lower crime rates in states with higher minimum wage levels)
- Evaluate the design rather than the finding. Is the design so elegant and convincing that I would have believed the results had they gone the other way? Or would I have simply dismissed it as shoddy work? (e.g., a simple plot showing higher crime rates in states with higher minimum wage levels).
Depending on the direction of the wage-crime relationship, my reaction would have changed from “See! This shows I was right all along” to “Bah! These fools didn’t even control for income and poverty rates!” Of course, few of the stories flitting past can withstand the strict scrutiny of a top peer-reviewed journal article. But while I might still circulate them for descriptive or entertainment value, I’m now making fewer unqualified personal recommendations. I’d rather reserve the term “great study” for designs that are so spine-crushingly beautiful that they might actually change my mind on an issue. Researchers know that winning over skeptics is way more fun — and way more important — than preaching to the converted. At TheSocietyPages, this process animates our board meetings, where we have lively debates about which research evidence merits highlighting in our podcasts, Citings, TROTS, Reading List, and feature sections.
As Clay Shirky says, “It’s not information overload. It’s filter failure.” At TSP, we’ll do our best to screen for solid evidence and big ideas about the social world, in hopes that we can all grab something worthwhile from the information swirl.
Comments (3)
Arturo — February 27, 2014
great post and I like the distinction between "great study" and "great sentiment"...lately, I have been seeing a variety of stories on my feed about how Utah has "solved homelessness" with the "simple idea of giving the homeless homes"...it's a great sentiment and one that I completely endorse...it's also conducive to a sociological perspective that homelessness is underpinned by "structural factors" like access to affordable housing...
...but the evidence that homelessness has been solved in Utah is quite nuanced; the evidence that housing-first programs have had beneficial impacts is positive but not definitive...indeed, the estimated number of homeless people in Utah has actually increased during the last few years
on the other hand, don't you think we have to be similarly careful about dismissing "weak evidence," given that most social science is fixated on type 1 errors but rarely discusses type 2 errors...that is, saying the evidence that a social policy is working is weak is not the same thing as having evidence that it's not working
Doesn't giving a thumbs up only to news stories that adhere to the highest social science standards inadvertently raise the risk that we're dismissing or ignoring issues and policies that are by their nature hard to prove? Or am I missing the point?
Friday Roundup: February 28, 2014 » The Editors' Desk — February 28, 2014
[…] “Screens for Glass Houses,” by Chris Uggen. Social media makes us all magpies, quick to Tweet and “like” shiny new studies that fit with our worldview. But the good science and the sexy story aren’t often the same. […]
Chris Uggen — March 1, 2014
Thanks, Arturo. I think some of the most useful voices today are those who set up a good internal screen or filter before getting behind a piece -- and then recognize how far the findings can be responsibly pushed. We try to do this at TSP, as do the organizations with which we like to work (e.g., the Scholars Strategy Network and Council on Contemporary Families). We all tend to draw directly from good peer-reviewed research (e.g., in TROTS and Reading List) and authoritative experts (e.g., in podcasts and roundtables) in making claims.
I love your point about assessing social programs. It is analogous to concluding "don't exercise" when one small-scale study fails to detect a significant relation between biking and heart disease. We sometimes set a mile-high bar to call a program "successful," then quickly call it a failure when the evaluation fails to clear that bar. Sometimes such labels are politically motivated, but sometimes it is a simple matter of research design (e.g., statistical power, lousy measurement on the dependent variable, inadequate follow-up periods). In teaching, I like to show students what real failure looks like (e.g., Petrosino et al.'s Scared Straight meta-analysis, which shows a delinquency prevention program that significantly *increases* delinquency). I've been writing a few pieces reanalyzing experimental data from the so-called "failed social programs" of the 1970s -- and approaches like subsidized employment often look a lot more successful when multiple outcomes and subgroups are analyzed with the more sensitive methods available today. The same is true, of course, for prison rehabilitation programs -- when it suits their agenda, people are way too quick to conclude that "nothing works."
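To make the power point concrete, here is a rough back-of-the-envelope sketch (the effect size and sample sizes are hypothetical, not drawn from any of the evaluations mentioned above): a modest but real program effect can easily fail to reach statistical significance in a small study.

```python
# Hypothetical power calculation: how often would a two-group evaluation
# detect a modest but real effect (Cohen's d = 0.2) at alpha = .05?
from statsmodels.stats.power import NormalIndPower

analysis = NormalIndPower()
effect_size = 0.2   # assumed standardized effect of the program (hypothetical)
alpha = 0.05        # conventional type 1 error rate

for n_per_group in (50, 100, 400):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0, alternative='two-sided')
    print(f"n = {n_per_group} per group -> power = {power:.2f}")

# Roughly: n=50 per group gives power near 0.17, n=100 near 0.29, n=400 near 0.81.
# A small evaluation would miss this real effect most of the time --
# "no significant effect" is not the same as "no effect."
```

That is the type 2 error Arturo describes: a weak design can fail to detect an effect that is really there, which is very different from showing that the program does not work.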