The saturation of publishing with performance metrics is well known. We get listicles, slideshows, information-gap headlines, outrage, and sensationalism in our feeds to bait our clicks. Being paid by the click creates an infosphere in which content becomes incidental to its circulation, and we all recognize this trend for what it is.

But what does it mean when the value of an academic career is reduced to a short letter-number combination? The H-index is a popular metric that assigns a numeric value to a scholar’s work: your h-index is 7 if you have 7 papers with at least 7 citations each, 10 if you have 10 papers with at least 10 citations each, and so on. There is a lot wrong with the H-index from a measurement standpoint: it has trouble accounting for multiple versus single authorship, it only tallies publications and citations from traditional academic venues, and it only collects data from documents written in English. Yet even if we “fixed” the measure to attenuate bias, combined it with additional indicators of influence, or expanded the instrument to capture greater complexity, a larger philosophical issue would remain: the metrification of scholarly pursuit.
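
For concreteness, here is a minimal sketch of how an h-index is computed from a list of per-paper citation counts. The scholar and the citation numbers below are invented purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical scholar with nine papers; citation counts are made up for illustration.
papers = [52, 18, 12, 9, 8, 7, 7, 3, 1]
print(h_index(papers))  # 7 -- seven of the papers have at least 7 citations each
```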

Last week, alterations to Google Scholar coincided with sharp commentary on metrification. We were especially taken with two pieces on the LSE Impact blog addressing academic metrification in general and its particular manifestation on the ResearchGate repository. All of this comes at a time when scientists are also warning of dangerous levels of secrecy that force practitioners to choose between the moral courage to blow the whistle on an industry and lucrative intellectual property contracts. For all the talk of collaborative and interdisciplinary work, scholars have never played it so close to the vest. The ubiquitous score-keeping in the day-to-day life and career path of the academic researcher is thoroughly felt, but this increasingly gamified scholarship is not a topic spoken about too loudly.

Academic score-keeping is an old practice, but the increasingly seamless relationship between publication, data collection, and subsequent ranking establishes perverse incentives for everyone involved in the production of knowledge. The consequences of metrification extend beyond academic practitioners to The University as a social institution, and to knowledge production and dissemination in general.

Tying an academic career trajectory to one’s capacity to generate impressive quantitative scores profoundly shapes what it means to be a scholar and engage in scholarship. It incentivizes obligatory and unnecessary conference presentations that add lines to a CV; multi-authored journal articles in which co-authors boost their publication counts regardless of any meaningful contribution to the text; “salami publishing,” in which authors slice simple ideas and thin data into many articles instead of slowly constructing a robust piece of work; and strategic citation practices and adherence to “buzzwords” to maximize visibility.

As the number of applicants for each tenure-track job continues to swell, and as the structure of the academy moves ever closer to a corporate model, we risk trading careful research for a lab rat race. A publishing record that bestowed a full professorship thirty years ago may be a marginal mid-career tenure file today. Departments continue to admit graduate students not because their assessment of the field demands additional practitioners, but because a dip in graduate enrollments may catch the eye of a dean looking to merge and consolidate departments.

Some of our readers are “academics” (current, aspiring, and former), and some are not. But all of our readers are curious, thoughtful, and likely invested in practices of knowledge production. We hope to start a conversation about metrification in light of these technological and social shifts. We’re asking for your help:

To what degree is academic life run by measures and scores and metrics?

Are the measures accurately describing and promoting good work? Or, as we fear in this post, is work being made to fit and maximize the measures?

Can you think of measures and scores we haven’t listed here? Or specific ways research is changed to make the numbers look good (like choosing a research topic, the terms you use when writing, the people you cite, where you publish, and so on)?

Do measurements need to be improved or removed?

Please participate in the comments! This is a topic we at Cyborgology have cared about for a long time, and one we have not seen discussed robustly elsewhere.
