The saturation of publishing with performance metrics is well known. We get listicles, slideshows, information-gap headlines, outrage, and sensationalism in our feeds to bait our clicks. Being paid by the click creates an infosphere where content becomes incidental to its circulation, and we all know and recognize this trend for what it is.
But what does it mean when the value of an academic career is reduced to a short letter-number combination? The H-index is a popular metric that assigns a numeric value to a scholar’s work: a score of 7 means you have 7 papers with 7 or more citations each, a score of 10 means 10 papers with 10 or more citations, and so on. There is a lot wrong with the H-index from a measurement standpoint: it has trouble accounting for multiple versus single authorship, it only tallies publications and citations from traditional academic venues, and it only collects data from documents written in English. Yet even if we “fixed” the measure to attenuate bias, combined it with additional indicators of influence, and/or expanded the instrument to capture greater complexity, a larger philosophical issue remains: the metrification of scholarly pursuit.
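(For readers who want the mechanics spelled out, here is a minimal sketch of how such a score is computed; the citation counts in the example are purely illustrative, not drawn from any real record.)

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:      # the i-th most-cited paper still has at least i citations
            h = i
        else:
            break
    return h

# Illustrative citation counts for seven papers (hypothetical numbers):
print(h_index([22, 10, 7, 7, 5, 3, 1]))  # -> 5
```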
Last week, alterations to Google Scholar coincided with sharp commentary on metrification. We were especially taken with two pieces on the LSE Impact blog addressing academic metrification in general and its particular manifestation on the ResearchGate repository. All of this comes at a time when scientists are also warning of dangerous levels of secrecy that force practitioners to choose between the moral courage to blow the whistle on an industry and lucrative intellectual property contracts. For all the talk of collaborative and interdisciplinary work, scholars have never played it so close to the vest. The ubiquitous score-keeping in the day-to-day life and career path of the academic researcher is thoroughly felt, but this increasingly gamified scholarship is a topic rarely spoken about openly.
Academic score-keeping is an old practice, but the increasingly seamless relationship between publication, data collection, and subsequent ranking establishes perverse incentives for everyone involved in the production of knowledge. The consequences of metrification extend beyond academic practitioners to include The University as a social institution, and knowledge production and dissemination in general.
Tying an academic career trajectory to one’s capacity to generate impressive quantitative scores profoundly shapes what it means to be a scholar and engage in scholarship. It incentivizes things like obligatory and unnecessary conference presentations that add lines to a CV; multi-authored journal articles in which people boost publication numbers regardless of meaningful contributions to the text; “salami publishing” in which authors write articles about simple ideas using thin data instead of slowly constructing a robust piece of work; and the use of strategic citation practices and adherence to “buzzwords” to maximize visibility.
As the number of applicants for each tenure-track job continues to swell, and as the structure of the academy moves ever closer to a corporate model, we risk trading careful research for a lab rat race. A publishing record that earned a full professorship thirty years ago may barely carry a mid-career tenure file today. Departments continue to admit graduate students not because their assessment of the field demands additional practitioners, but because a dip in graduate enrollments may catch the eye of a dean looking to merge and consolidate departments.
Some of our readers are “academics” (current, aspiring, and former), and some not. But all of our readers are curious and thoughtful and likely invested in practices of knowledge production. We hope to start a conversation about metrification in light of technological and social shifts. We’re asking for your help:
To what degree is academic life run by measures and scores and metrics?
Are the measures accurately describing and promoting good work? Or, as we fear in this post, is work being made to fit and maximize the measures?
Can you think of measures and scores we haven’t listed here? Or specific ways research is changed to make the numbers look good (like choosing a research topic, the terms you use when writing, the people you cite, where you publish, and so on)?
Do measurements need to be improved or removed?
Please participate in the comments! This is a topic we at Cyborgology have cared about for a long time and have not seen many robust discussions elsewhere.
Comments (3)
wishcrys — October 5, 2017
haiku for academia, my forlorn lover:
paper submissions
textual combat back and forth
pretty CV line
grant applications
much effort, little returns
senseless cyclic webs
reference letter pleas
status-signalling endorsements
meritocracy
noah — October 5, 2017
Lately I've been trying to understand populism with respect to academic philosophy. One issue is that metricization is epistemically anti-populist. While publication metrics purport to be meritocratic by rating research impartially, any popular understanding of the value of that academic work is hidden behind, or within, the metrics.
For each metricization, there is an associated theory of why it represents some measure of academic value. However, why or how this actually captures anything of value to humanity disappears into the numerical result. If I tell you my research has an h-value of 4, this tells nothing of what good I have done. Hence the question becomes: How do we bridge the gap between the academic measure of value and a lay, popular understanding of the value of our work?
A way to start figuring this out is to work on our understanding of audience. Who really is the audience that needs to be served by our academic work, and how can we show that it is of value to them? As long as the audience is overworked academic administrators, we will be stuck with the lowest common denominator: the H value or similar. If we can serve a wider population, then we will have varied measures, and hopefully more meaningful ones.
Laura Noren — October 9, 2017
One of the metrics commonly in place in academic contexts is the teaching evaluation. Teaching evaluations have been shown to reveal (presumably subconscious) gender biases and so probably shouldn't be all that widely used. Still, I find it interesting that teaching evaluations are so widely available yet so under-referenced as key evidence in hiring decisions. Again, because they contain known biases, I am not necessarily advocating for teaching evaluations to be used.
My point is that we often think that which is measured is that which becomes incentivized. Teaching evaluations, at least at R1 schools, show that even if an organization or field measures a particular behavior, it need not become a core metric for distributing the goodies available in that context.
Metrics that seem to matter more, in addition to publications in high-impact-factor journals and citations: the amount of grant money, and the amount of attention from prominent sources (getting your research written up in the New York Times is a good move, for instance). This article didn't talk much about alt-metrics like the number of followers one has on Twitter or how many monthly readers one's blog or newsletter has, but those are surprisingly important in setting tacit understandings of prominence. These can be especially productive avenues for young scholars to build their brands.
The "metric" that matters most, however, is likely to be the rank of the department where scholars receive their PhDs. The academic elite tends to replicate itself. Getting a job at a school ranked more highly than the institute from which one graduated is exceedingly rare. The ranking is generally not discussed as a metric, but in a qualitative, highly contextualized kind of way by reference to the strength of said department or the notoriety of the junior applicant's advisor(s).
It is important to talk about how quantitative and qualitative information are used to support the tacit hierarchies of the status quo even as they are described as mechanisms of meritocracy and transparency. Thanks for starting the conversation.