The phrase “No Child Left Behind” added a tinge of wartime drama to education, conjuring up images of embattled teachers in the trenches of America’s schools. In the years since the reform, new high-pressure testing strategies have led to accusations of “educational triage,” in which teachers focus only on students close to earning “proficiency” and leave both their high- and low-achieving classmates behind.

To test whether such triage is actually happening, Jennifer Jennings and Heeju Sohn analyzed four years of student testing data from the Houston Independent School District. The data, which ranged from 2001 to 2004, allowed the researchers to look at student performance both before and after the No Child Left Behind reform took effect, and on two different kinds of tests: a “high-stakes” test that determined whether schools made adequate yearly progress under NCLB, and a “low-stakes” test that was not tied to performance evaluations or teachers’ pay.

When Jennings and Sohn compared scores on the high-stakes tests, they found that in math, students who started out higher-performing gained ground over time, while early low performers fell further behind. In reading, the pattern reversed: higher-performing students lost ground, and lower performers gained. According to Jennings and Sohn, these differences can be explained by teachers focusing on students close to the cutoff point to get as many passing as possible. In reading, a test that more students passed, this meant the higher achievers got left out of instruction while teachers pulled more students up to proficiency. In math, which fewer students passed, the low-performing students got left behind while teachers concentrated on the relatively stronger students closer to the cutoff, keeping them ready for exam day. In other words, educational triage. Tellingly, these patterns did not show up at all in the low-stakes test results.

Both the subject matter and the degree of difficulty of a test can change who gets instruction, who gets labeled as struggling or successful, and even how the media and policymakers get their measures of educational inequality. “Policy makers,” Jennings and Sohn conclude, “face a series of difficult normative questions when they decide where to set the cut score for proficiency.” For now, it looks like the tests themselves may be digging the trenches.