
In social science, as elsewhere, an elegant design makes all the difference. When I hear a great talk or read a first-rate article, I’m geeked up both by the new discovery and by precisely how the discovery was made. And while I try to stay on top of the latest and greatest methodological techniques, I most appreciate social scientists who can responsibly render the world’s complexity in a simple and comprehensible manner.

Design should never say, ‘Look at me.’ It should always say, ‘Look at this.’ – David Craib

I doubt that designer David Craib attends a lot of social science presentations, but he might have liked a talk I heard at the American Society of Criminology meetings last week. Patrick Sharkey of New York University spoke about how exposure to violence might affect kids at school. The answer is important both for assessing the social costs of crime and for understanding the sources of persistent educational inequalities. And since an ethical researcher would never want to experimentally manipulate a child’s exposure to violence, we need to be especially creative in making good use of the available “observational” data. Professor Sharkey has been pursuing such questions for several years and he’s now assembled a lot of evidence from different cities using different methodologies. Last week’s talk matched test score data from New York City public schools with very precise information about the dates and places in which violent crime was occurring throughout the city. To isolate the effect of violence, he compared kids who experienced violent crime on their block just before the scheduled tests with kids who experienced violent crime on their block just after the test date. I’d never considered such a design, but was immediately attracted to the idea of using time’s arrow in this way. By talk’s end, I was convinced that recent exposure to violent crime reduces performance on reading and language tests.

This design is a lot cleaner than trying to name, measure, and statistically “control for” everything under the sun that might influence both test scores and neighborhood violence (e.g., poverty, gang activity, lead exposure …). Another powerful approach in such situations is to use each student as his or her own control, testing whether test scores drop below student-specific average scores after exposure to violence. Professor Sharkey (along with Nicole Tirado-Strayer, Andrew V. Papachristos, and C. Cybele Raver) employs this technique as well as the pre/post-exposure design in a new American Journal of Public Health article. Each method has its advantages, but they are especially convincing in combination. I’d imagine that the pre/post-exposure comparison would be especially helpful in situations in which the researcher lacks a long series of repeated measurements on the same individuals. Since I often find myself or my advisees in such situations, I’m sure I’ll be borrowing this idea before too long.
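For readers who think in code, here is a minimal sketch of the logic behind the pre/post-exposure comparison, using entirely simulated data. Everything here is hypothetical: the score scale, the assumed five-point penalty, and the group sizes are made up for illustration, and this is not the authors’ actual analysis. The key idea is that whether a violent incident on a student’s block happened just before or just after the test date is plausibly as-if random, so a simple difference in group means estimates the effect of recent exposure.

```python
import random

random.seed(0)

def simulate_student(exposed_before_test):
    """One simulated student: a baseline score, minus a hypothetical
    penalty if a violent incident occurred just before the test."""
    baseline = random.gauss(70, 10)                 # underlying ability
    penalty = -5 if exposed_before_test else 0      # assumed exposure effect
    return baseline + penalty

# Two groups defined only by the timing of the incident relative to the test.
crime_just_before = [simulate_student(True) for _ in range(500)]
crime_just_after = [simulate_student(False) for _ in range(500)]

def mean(scores):
    return sum(scores) / len(scores)

# The pre/post-exposure comparison: difference in mean scores between the
# "crime just before the test" and "crime just after the test" groups.
estimated_effect = mean(crime_just_before) - mean(crime_just_after)
print(f"Estimated effect of recent exposure: {estimated_effect:.1f} points")
```

Because the “just after” group was equally exposed to violence, just not yet at test time, both groups share the same neighborhood conditions (poverty, gang activity, and so on), which is exactly what makes the comparison cleaner than piling on statistical controls.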

Every designer’s dirty little secret is that they copy other designers’ work. They see work they like, and they imitate it. Rather cheekily, they call this inspiration. — Aaron Russell

Maybe these results seem obvious to you (if so, does it also seem obvious that the effects of violence would be much weaker for math tests?), but conclusively nailing down such relationships is extraordinarily difficult. Or maybe a design comparing data collected “ten days before” with that collected “ten days after” just seems too simple mathematically to make a convincing case. I’d disagree, as would many designers.

Math is easy; design is hard. — Jeffrey Veen