In August 2010, the American Council of Trustees and Alumni (ACTA), a self-described “independent, non-profit organization committed to academic freedom, excellence, and accountability at America’s colleges and universities,” assigned letter grades A–F to universities nationwide. Johns Hopkins University received an F; St. John’s College in Annapolis, an A. The reasoning: Hopkins and many of its elite peers “don’t do a good job of providing their students with a coherent core,” ACTA President Anne Neal told The Washington Post. Students from St. John’s, on the other hand, “are perfectly capable of coming up to someone at a cocktail party and talking about their soul,” St. John’s senior faculty member Eva Brann gloated to the Post in response to her institution’s superior grade.
Hopkins fired back at its ACTA grade: “Everything we teach constitutes essential human knowledge, but that’s a huge range of territory, and we encourage students to make some serious choices about what they specialize in.” This discourse represents the college classroom in two oppressively simplified ways: (1) as a space for banking “essential human knowledge”; and (2) as preparation for stimulating conversation at a cocktail party. In doing so, it fails to question its own assumptions about what constitutes this “essential human knowledge”; it wraps itself in what Henry Giroux calls “chauvinism dressed up in the lingo of Great Books” (2005: 19). Perhaps even more significantly, it says little about how students might engage with this knowledge beyond receiving, like empty vessels, the knowledge that will fill them up and make them interesting enough for that cocktail party.
What do students make of the classroom thus conceived? When Harvard University began planning a major overhaul of its general education requirements in 2006, national news media paid attention. The NPR program Morning Edition began a report:
At Harvard, they’re known as the universe through a grain of sand courses, those classes that are extremely narrow and focused, and deep in detail, taught by scholars who’ve spent a career on something like Chinese imaginary space, the rise and fall of the samurai, or gladiatorial combat in ancient Roman games.
One student interviewed in the story—a freshman currently enrolled in the aforementioned ancient Roman games course in order to earn a history credit for his general education requirements—commented: “I mean like, I’m always going to tell my kids I took a class on gladiators, you know? I mean how cool is that? I can’t really tell you why, why this is important.” His classmate, however, responded somewhat less enthusiastically: “[I]t’s maybe actually really fascinating for the professor whose job is to research that particular thing, but it doesn’t necessarily have a relevancy to what the students will be doing in the future.” Or, in words heard many times a day on many a college campus: “When will I ever have to use this?”
For Richard Arum and Josipa Roksa (2011), this question is indicative of the “limited learning” of contemporary undergraduate students. Because college administrators and faculty members allow undergraduate education to slip below research and administrative priorities, academic rigor atrophies, and college students lose out on the support they need to develop skills of problem solving and critical thinking. In their book Academically Adrift: Limited Learning on College Campuses, sociologists Arum and Roksa argue that undergraduate students in fact seem to learn very little in college. Moreover, they claim to show just how little those undergraduates are learning by bringing their own quantitative data set, the Determinants of College Learning (DCL), which surveys over 2,300 full-time students at 24 four-year institutions on questions of family background, high school grades, and college experiences, together with scores from the Collegiate Learning Assessment (CLA), a standardized test that analyzes “core outcomes espoused by all of higher education”: critical thinking, problem solving, and writing. These “objective measures,” Arum and Roksa continue, will hold institutions accountable and will fight the enemy they have named “limited learning,” or more precisely, the “absence of growth in CLA performance” (122). Arum and Roksa call for testing practices that monitor learning in the college classroom akin to those already in place for primary and secondary students (55). But the students who arrive on college campuses have more than likely spent up to twelve years in those very classrooms that are being so thoroughly monitored. Thus college students’ learning experiences, and their ideas about classroom learning, cannot but have been shaped by these monitoring practices.
For sociologist Michael Burawoy, sociology cannot afford not to listen to that question. If we sociologists, as educators, are to believe that we make a legitimate claim on students’ time and attention, we must commit ourselves to “relevance” (Burawoy 2004: 14). Understanding this “relevance,” I would argue, necessitates considering – and reconsidering, and discussing – how the classroom connects to students’ lives, and how students perceive those connections. What would you say?