
Latest in the arsenal of moral-panic studies of digital technologies is a recent article published in the journal Computers in Human Behavior, written by psychologists and education scholars from UCLA. The piece, entitled “Five Days at Education Camp without Screens Improves Preteen Skills with Nonverbal Emotion Cues,” announces the study’s ultimate thesis: engagement with digital technologies diminishes face-to-face social skills. Unsurprisingly, the article and its findings have been making the rounds on mainstream media outlets over the past week. Here is the abstract:

A field experiment examined whether increasing opportunities for face-to-face interaction while eliminating the use of screen-based media and communication tools improved nonverbal emotion–cue recognition in preteens. Fifty-one preteens spent five days at an overnight nature camp where television, computers and mobile phones were not allowed; this group was compared with school-based matched controls (n = 54) that retained usual media practices. Both groups took pre- and post-tests that required participants to infer emotional states from photographs of facial expressions and videotaped scenes with verbal cues removed. Change scores for the two groups were compared using gender, ethnicity, media use, and age as covariates. After five days interacting face-to-face without the use of any screen-based media, preteens’ recognition of nonverbal emotion cues improved significantly more than that of the control group for both facial expressions and videotaped scenes. Implications are that the short-term effects of increased opportunities for social interaction, combined with time away from screen-based media and digital communication tools, improves a preteen’s understanding of nonverbal emotional cues.

Having read the original article and the mainstream interpretations of it, I’m left frustrated that we haven’t moved beyond “good” versus “bad” arguments, and even more frustrated by the way good v. bad arguments cast critics (in this case, me) into the role of technological utopian.

The heavy-handedness of the authors’ argument, coupled with a problem-ridden research design, does a true disservice to public and scholarly understandings of the nuanced relationship between digital social technologies and experiences of social engagement.

Let’s begin with the design problems, which I’ve summarized in two points:

1) As the authors acknowledge in their conclusion, it is impossible to disentangle the effects of technology from those of the nature camp itself and of the experience with the group. That is, although the authors frame this as a study about the (negative) effects of screen time, they conflate the screen-time variable with other variables (namely, being isolated with a small group at camp and engaging in highly social activities), to say nothing of collapsing all digitally mediated activities into a single variable.


2) Results were based on change scores between a pretest and a posttest, but the two tests were administered under different conditions. The experimental group took the pretest just after arriving at camp and the posttest just before leaving, while the control group took the pretest at school. Differences in pretest scores suggest how this likely affected the data: the experimental group performed worse than the control group on each of the two emotion-cue tasks at pretest. By the end of the study, however, the two groups’ scores were very close (see Table 1 below). As a result, change scores, which the authors used as the dependent variable, were significantly higher for the experimental group than for the control group even though the final scores were similar. The authors interpreted this, I think incorrectly, as an effect of the intervention rather than of inconsistencies in how the pretest was administered. In particular, I would argue that kids in a new environment, distracted by the excitement and nervousness of a week in the woods, would not score as well as those taking the test in a comfortable and familiar setting. This alternative interpretation would account for the initial differences, making change scores an ineffective measure of the effects of screen time on social skills. (The sketch after Table 1 walks through the arithmetic.)

Table 1: Pretest, Posttest, and Change Scores

Task   Condition      Pretest   Posttest   Change
1      Experimental   14.02     9.41       4.61
1      Control        12.24     9.81       2.43
2      Experimental   26%       31%        5%
2      Control        28%       28%        0%

*Each task used a separate metric: Task 1 scores are the number of errors (change = reduction in errors), while Task 2 scores are percent correct (change = gain in percent correct).
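
For the quantitatively inclined, here is a minimal sketch (a few lines of Python of my own, not from the study) that recomputes Task 1’s change scores straight from Table 1. It assumes nothing beyond the table’s figures and its footnote that Task 1 is scored as a count of errors.

# Recompute Table 1's Task 1 change scores (errors; lower is better).
task1 = {
    "experimental": {"pretest_errors": 14.02, "posttest_errors": 9.41},
    "control": {"pretest_errors": 12.24, "posttest_errors": 9.81},
}

for group, s in task1.items():
    change = s["pretest_errors"] - s["posttest_errors"]  # reduction in errors
    print(f'{group}: pretest {s["pretest_errors"]:.2f}, '
          f'posttest {s["posttest_errors"]:.2f}, change {change:.2f}')

# Prints:
# experimental: pretest 14.02, posttest 9.41, change 4.61
# control: pretest 12.24, posttest 9.81, change 2.43

The posttest scores differ by only 0.40 errors, yet the experimental group’s change score is nearly double the control group’s, because its pretest baseline was 1.78 errors worse. If that weaker baseline reflects taking the pretest in an unfamiliar setting rather than weaker skills, the change score inflates the apparent effect of the intervention.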

But let’s put these design problems in our pocket for just a second. The bigger issue, for me, is the research question itself. The question of “good” versus “bad” is unproductive in its own right, owing to its inherent digital dualism and insistence on dichotomization. Answers to research questions like these are oversimplified and obscure more than they reveal, solidifying a view of reality in which a particular kind of technology either helps or hurts. This is a narrow conversation that restricts future inquiry rather than fostering intellectual growth and diversity.

More beneficial research questions address how technological developments alter the social landscape and what this means in practice. They ask how necessary skill sets change, who is best equipped to manage those new skills, and under what conditions communication and interaction are hindered or improved.

These more useful research questions open dialogues rather than pit Pollyannas against Naysayers in a debate neither intended to have. They also, however, make for tedious reading. They produce a kind of TL;DR scholarship that rarely, if ever, finds its way into public discourse.

Follow Jenny Davis on Twitter @Jenny_L_Davis
