Scholarly Metrics Under the Microscope: From Citation Analysis to Academic Auditing

Bruce White (Massey University, Palmerston North, New Zealand)

Library Review

ISSN: 0024-2535

Article publication date: 7 September 2015


Keywords: Citation

Citation: Bruce White (2015), "Scholarly Metrics Under the Microscope: From Citation Analysis to Academic Auditing", Library Review, Vol. 64 No. 6/7, pp. 510-512. https://doi.org/10.1108/LR-06-2015-0066

Publisher: Emerald Group Publishing Limited

Copyright © 2015, Emerald Group Publishing Limited


In the introduction to this timely and important collection of writings on the value, meaning and use of numerical measures in the study and the assessment of scholarly research, Cronin and Sugimoto firmly declare their stance:

[…] by assembling a representative cross-section of the literature critiquing evaluative bibliometrics we may be able to raise awareness of the approach’s limitations and also encourage greater procedural caution among relevant constituencies.

Much of the current concern with citation-based metrics relates to their use in research assessment and evaluation-based funding, but, in fact, criticisms of citation counting have been around since its inception. The great strength of this collection is that it presents a multi-layered map of the context underlying current practice in bibliometric research measurement.

The book begins with Eugene Garfield’s 1955 paper on citation indexing, in which he suggests the use of citation linking between papers as a tool not only of information discovery but also of evaluation: “the citation index has a quantitative value, for it may help the historian to measure the influence of the article – that is its impact factor”. Each of the following six parts begins with a helpful introduction by the editors, briefly outlining the main points of the argument and summarising each contribution.

Part 1, Concepts and Theories, begins with a 1981 paper by Cronin, “The Need for a Theory of Citing”, which goes straight to the heart of the matter: although evaluative bibliometrics is almost exclusively based on the counting of citations, there is no shared agreement on what the act of one work citing another actually means. This is a good start: Garfield is a lively writer whose essay sets the scene perfectly, while the theoretical deficiency of citation indexing that Cronin identifies is a major underlying theme of the collection. Each of the following six papers in Part 1 deals with an aspect of the problem of relating numbers to their meanings, ranging from Paul Wouters’s optimistic view of the possibility of creating a formalised representation of scholarship to Priem and Hemminger’s anticipation of Scientometrics 2.0 and the use of the social web as a source of evaluative data.

A roll-call of the titles in Part 2, Validity Issues, includes “Abuses of Citation Indexing”, “The Footnote Fetish”, “No Citations Analyses Please”, “Scientific Communication – A Vanity Fair?”, “Coercive Citation in Academic Publishing”, “Uses and Abuses of Bibliometrics” and “Sick of Impact Factors”. Ranging in date from 1967 to 2012, these papers convey a strong sense of practical scepticism supplementing the concerns about insubstantial theory. Many of the problems outlined in MacRoberts and MacRoberts’ 1987 paper “Problems of Citation Analysis: A Critical Review” – multiple authorship, clerical error and incomplete source coverage – still exist, which underlines the editors’ recommendation of caution in the use of citation analysis.

Part 3, Data Sources, puts bibliometric data under the microscope and finds bugs. It begins with Garfield’s 1985 defence of the selectivity and incomplete coverage of the ISI tools from which citation data were (and are) drawn and includes one of Péter Jacsó’s lively critiques of Google Scholar, as well as López-Cózar, Robinson-García and Torres-Salinas’s 2012 paper on the manipulation of Google Scholar metrics. Of particular interest is Wouters and Costas’s “Novel Forms of Impact Measurement” (2012), which is a critical compendium of alternatives to the traditional ISI and Scopus approach to research and citation metrics, including Google Citations, CiteULike, Mendeley and Zotero. Diane Harley in “Issues of Time, Credit and Peer Review” (2012) gives us the working academic’s view of the demands of the new scholarly environment and is a must-read for those librarians with an interest in open data.

Part of the appeal of bibliometrics is the possibility of a single number that can stand as a proxy for concepts, like “value” and “significance”, that are difficult to define and even harder to agree upon. Part 4, Indicators, takes a hard look at the two most controversial of these, the h-index and the impact factor. Jorge Hirsch’s 2005 paper on the h-index should be read by anyone using it, or assisting others to do so, as should the critiques pointing to its inconsistencies and deficiencies. The impact factor, which has come to be applied to journals rather than individuals, also comes in for some rough treatment, notably in Monastersky’s classic 2005 essay “The Number That’s Devouring Science”. In fact, Cronin and Sugimoto’s introduction to this section, “Angels on a Pinhead”, calls into question the very notion of indicators and asks whether bodies of scholarly work can ever be “reduced to a single number”.
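The h-index itself is simple to state: a scholar has index h if h of his or her papers have each been cited at least h times. As a purely illustrative sketch (not drawn from the book), the calculation can be expressed in a few lines of Python:

# Hirsch's h-index: the largest h such that h papers have at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher whose papers have been cited 10, 8, 5, 4 and 3 times has an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4

Even this trivial computation makes the editors’ point visible: very different citation distributions can collapse into the same single number.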

The last two parts of the book do a lot to explain the pressures that are driving the demand for one-size-fits-all numbers. Part 5, Science Policy, begins with a 1987 paper by the policymaker Jean King, which points out the difficulties involved in allocating scarce resources and the limitations of peer review in this context. After reviewing the existing bibliometric options, she concludes that there is a need for “reliable field-independent indicators […] to be generated on a routine basis”. Cue the h-index. The difficulty, as the following papers make clear, is that there are no simple and reliable indicators: field norms are too diverse, and researchers, individually and collectively, either adapt their behaviours to follow the money or suffer the consequences, as Diana Hicks shows in her 2013 paper on social science publishing. Part 6, Systemic Effects, is an extended howl of protest from members of the research community at the new politico-managerial environment of auditing, output measures and indicators in which they find themselves. As Roger Burrows points out in “Living with the h-index” (2012), “academic value is essentially becoming monetized and as this happens academic values are becoming transformed. This is the source of our discomfort”.

At this point, the editors might seem to be leading an angry academic mob against all research evaluation, but they are at pains to point out in their epilogue, “The Bibliometrics Baby and the Bathwater”, that this is not their intention. The difficulty is not that bibliometric measures are without meaning, but that the meaning lies in large aggregations of data rather than in particular instances, which can vary wildly. Their penultimate sentence carries the message – “the assessment of a scholar’s work should be based on direct engagement with that work rather than on quantitative indicators of the work’s impact”.

A review can only offer a taste of the 55 contributions contained in this volume. As the editors point out, it could have been three times the length, but they appear to have carried out their task of selection conscientiously, and it is hard to think of significant viewpoints that have not been included. If, at times, the voices seem strident and some of the claims rather self-serving, this is only a reasonable reflection of the emotions aroused by what may once have seemed like a rather abstruse field of study. This is an essential book for all university libraries but, more importantly, it is one that librarians should read if they are to understand the strange and emerging linkages between the bibliographic tools and norms of our profession and the wider community in which we exist.
