Innovating peer review, reconfiguring scholarly communication: an analytical overview of ongoing peer review innovation activities

Wolfgang Kaltenbrunner (Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands)
Stephen Pinfield (The University of Sheffield Information School, Sheffield, UK)
Ludo Waltman (Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands)
Helen Buckley Woods (The University of Sheffield Information School, Sheffield, UK)
Johanna Brumberg (VolkswagenStiftung, Hannover, Germany)

Journal of Documentation

ISSN: 0022-0418

Article publication date: 3 August 2022

Issue publication date: 19 December 2022

Abstract

Purpose

The study aims to provide an analytical overview of current innovations in peer review and their potential impacts on scholarly communication.

Design/methodology/approach

The authors created a survey that was disseminated among publishers, academic journal editors and other organizations in the scholarly communication ecosystem, resulting in a data set of 95 self-defined innovations. The authors ordered the material using a taxonomy that compares innovation projects according to five dimensions. For example, what is the object of review? How are reviewers recruited, and does the innovation entail specific review foci?

Findings

Peer review innovations partly pull in mutually opposed directions. Several initiatives aim to make peer review more efficient and less costly, while other initiatives aim to promote its rigor, which is likely to increase costs; innovations based on a singular notion of “good scientific practice” are at odds with more pluralistic understandings of scientific quality; and the idea of transparency in peer review is the antithesis to the notion that objectivity requires anonymization. These fault lines suggest a need for better coordination.

Originality/value

This paper presents original data that were analyzed using a novel, inductively developed, taxonomy. Contrary to earlier research, the authors do not attempt to gauge the extent to which peer review innovations increase the “reliability” or “quality” of reviews (as defined according to often implicit normative criteria), nor are they trying to measure the uptake of innovations in the routines of academic journals. Instead, they focus on peer review innovation activities as a distinct object of analysis.

Citation

Kaltenbrunner, W., Pinfield, S., Waltman, L., Woods, H.B. and Brumberg, J. (2022), "Innovating peer review, reconfiguring scholarly communication: an analytical overview of ongoing peer review innovation activities", Journal of Documentation, Vol. 78 No. 7, pp. 429-449. https://doi.org/10.1108/JD-01-2022-0022

Publisher

Emerald Publishing Limited

Copyright © 2022, Wolfgang Kaltenbrunner, Stephen Pinfield, Ludo Waltman, Helen Buckley Woods and Johanna Brumberg

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In the last two decades, we have witnessed significant efforts to innovate peer review practices in the publishing domain. While the terminology to describe innovations is often inconsistent (Ross-Hellauer, 2017; Tennant et al., 2017), we can readily identify initiatives experimenting with different forms of transparency in the review process (“open peer review”), efforts to make research outputs accessible prior to peer review in the form of so-called preprints, new forms of recruiting reviewers (“crowd review”), as well as a host of digital tools to support editors and reviewers in detecting plagiarism and other forms of misconduct. In this paper, we present the results of a survey designed to create an overview of ongoing projects to innovate peer review practices. The survey was an initiative of the Research on Research Institute (RoRI), in which our research team is developing a collaborative research agenda with publishers and other organizations in the scholarly communication system. The survey was disseminated among diverse stakeholders, including journal editors, smaller and larger commercial publishers, not-for-profit or non-traditional commercial actors, as well as various public organizations.

Contrary to earlier research, we are not primarily interested in studying the extent to which peer review innovations increase the “reliability” or “quality” of reviews (Bruce et al., 2016; Tennant et al., 2017), nor are we trying to measure the uptake of innovations in the routines of academic journals (Wolfram et al., 2020; Horbach and Halffman, 2020). Instead, our data set focuses our analytical attention on peer review innovation activities as a distinct object of analysis. Our analysis will unpack in significant detail how the innovation projects reported to us as part of the survey purport to configure current review practices, both in terms of the stated intentions of their developers and in terms of their broader implications for disseminating scientific research. For this purpose, we rely on an inductively developed taxonomy that describes and compares innovation projects in terms of five main dimensions. For example, what is the object of review in a given project? How are reviewers recruited, and does the innovation entail specific review foci?

Our work is restricted to peer review in the context of scholarly publishing and scholarly communication more generally. Peer review in other contexts, such as peer review of grant proposals and peer review in research assessment settings, falls outside the scope of our work.

The paper is structured as follows. We firstly situate our analysis in a literature review of research on peer review innovations, showing exactly how our approach complements previous attempts to create overviews of innovation activity. We then introduce the details of the survey we used to collect empirical material, as well as the inductive taxonomical framework we relied on to order it. In the main empirical section of the paper, we provide a narrative summary of our material, in which we compare how the various innovation projects configure review processes. In a concluding section, we attempt to observe cross-cutting trends across diverse types of innovations, which will allow us to identify the main thrusts of innovation and some of the tensions between them, and which will provide the background for recommendations on how to better coordinate ongoing initiatives.

2. Peer review innovations as an object of study

While often criticized for its shortcomings, peer review continues to be widely seen as one among a range of distinct features that help set scientific knowledge apart from other forms of knowledge, thus warranting its special status in modern societies (Hackett and Chubin, 1990; Nicholas et al., 2015; Eve et al., 2021). Innovations in peer review processes – spurred especially by the pervasive digitization of many aspects of scholarly communication from the 1990s onward – have consequently drawn significant analytical attention from various disciplines. The vast majority of these studies focus on particular types of innovation, for example the impact of replacing double-blind review procedures with a fully transparent form of review where referees and authors are mutually known to each other (Van Rooyen et al., 1999; Pontille and Torny, 2014; Spezi et al., 2017).

A significantly smaller body of literature has attempted to analyze peer review innovations on a more aggregate scale. We can distinguish two main foci pursued by these overview studies. Some focus on the ability of innovations to improve the quality and efficiency of peer review, and others on the adoption of innovations across journals.

The former category includes studies written from an “activist” perspective (Tennant et al., 2017; Barroga, 2020; Bruce et al., 2016). Underlying such analyses is often the notion that peer review is in crisis – for example in the shape of bias or escalating misconduct that it fails to detect. Innovations are then studied in terms of their ability to tackle these problems, and in comparison to established review practices. Other analytical studies in this category report the findings of experiments or examine outcomes of peer review processes on a wide range of issues. For example, recent studies have analyzed the effect of publishing peer review reports on reviewer behavior (Bravo et al., 2019), or biases in outcomes of peer review (Severin and Chataway, 2021; Squazzoni et al., 2021). A common challenge for this type of overview study is that it requires clearly defined criteria of “robustness” and “quality” of peer review, which runs up against often very heterogeneous interpretations of such concepts even within particular fields.

The second category of overview study focuses primarily on the question of the uptake of a defined set of peer review innovations by journals (Wolfram et al., 2020; Horbach and Halffman, 2020). An example is the article by Horbach and Halffman (2020), which focuses on the extent to which editorial practices of journals have in fact incorporated peer review innovations such as open peer review, registered reports and software to detect plagiarism and other forms of misconduct. Drawing on theories of the social construction of technology, Horbach and Halffman criticize the notion of diffusion of peer review innovations for suggesting that innovation is a quasi-natural process that unfolds seemingly by itself (like a chemical reaction). They draw attention to the fact that innovations need to be sought out by users, that users tend to change their practices only when they run into a problem (e.g. because established practices fail to detect misconduct) and that users adapt the technology as they incorporate it into their routines. The data set on which Horbach and Halffman base their analysis consists of survey responses from 361 journal editors. They find that editorial practices have remained remarkably stable, and thus largely unaffected by most innovations; the only innovation that is widely used is plagiarism detection software. However, as the authors indicate, their results may be constrained by the type of data collected. As is common for survey-based studies, the response rate is somewhat low at 6.1%.

Both types of study are valuable, not least given the overall dearth of this kind of encompassing research on peer review innovations. Both, however, also limit their scope of analysis in specific ways by virtue of their design and assumptions. Activist studies, by focusing on a normative definition of what is wrong with peer review, often neglect a potentially wide range of innovations that do not fit the assumed criteria of what is necessary to “fix” peer review, not to mention the analysis of longer-term effects of innovations that would appear to contradict the original aims of activist agendas (Rodríguez-Bravo et al., 2017; see also Dahler-Larsen, 2019). Studies focusing on the incorporation of innovation in editorial routines, on the other hand, implicitly picture editors as “obligatory passage points” for anything related to peer review, thus excluding many arrangements that are not coordinated by journal editors in the first place.

In this paper, we do not make a priori assumptions about the legitimate functions of peer review or about what organizational level of the scholarly communication process an innovation must impact to “really” be effective. Many innovations can in fact be seen to actively subvert assumptions and arrangements in scholarly communication that are usually taken for granted by acculturated users (cf. Star and Ruhleder, 1996; Bowker and Star, 1999), which makes it problematic to use traditional journal-centric models of peer review as a frame of reference. For example, the notion of manuscripts as input for “pre-publication peer review” implies a normative view of what constitutes “publication” and when, in the process of knowledge production, it occurs. Specifically, it suggests that only an article published by a journal constitutes a publication, a notion that is clearly challenged by publicly accessible preprints of papers not (yet) accepted by a journal. Many innovations fundamentally reconfigure objects of review and the role of particular actors in the review process, thereby giving rise to emergent evaluative practices that do not necessarily have a one-to-one equivalent in current conventions. To analytically capture these emergent as well as relatively more conventional forms, we propose to unpack innovation initiatives according to the detailed taxonomical framework outlined in the next section.

3. Conceptualizing peer review innovations

One way in which researchers and practitioners have attempted to understand and design innovations in peer review is by deconstructing review processes and identifying their constituent parts. This has often involved developing taxonomies, which systematize and standardize descriptions of elements of peer review. Two recent examples have been widely discussed: the ASAPbio preprint review taxonomy and the taxonomy produced by the International Association of Scientific, Technical and Medical Publishers (STM). The former focuses on a particular innovative domain: reviewing preprints (Yan, 2021). This taxonomy is designed as a way of encouraging services which provide various kinds of review and informal feedback on preprints to make the process as transparent as possible, for example for readers, and covers elements which are of particular importance in preprint reviewing. It covers a number of areas including how the review or feedback was requested, what the reviews cover, the identity and level of anonymity of the reviewer, declaration of competing interests, public commenting, opportunity for author responses and how a recommendation is made. The STM taxonomy covers journals and articles, and is based on four main elements: “(1) identity transparency, (2) who the reviewer interacts with, (3) what information about the review process is published, and (4) whether post-publication commenting takes place” (STM, 2020). It steers away from some areas of innovation, for instance, scope of review, and whether review includes consideration of novelty, significance or rigor, as these are regarded as “not sufficiently defined and demarcated.”

In the empirical analysis below, we will similarly describe and compare peer review innovations according to a faceted taxonomical framework (Figure 1), albeit one that includes potentially all types of innovation projects and that breaks down peer review into five elements (see the illustrative sketch following the list below). These elements were identified as part of an inductive analysis of our data, and also took into account the taxonomical frameworks proposed by ASAPbio and STM, bearing in mind their different purposes and scopes compared with our own work.

  1. Object of peer review (What is being peer reviewed?)

  2. Aim of peer review (Why is peer review performed?)

  3. Role of peer review actors (Who performs peer review?)

  4. Nature of peer review (How is peer review performed?)

  5. Openness/Transparency of peer review (What information is available to whom during and after peer review?)
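To make the framework concrete, the five dimensions can be thought of as fields in a simple coding schema. The following minimal sketch is purely illustrative: the field names and the example record are our own shorthand, not part of the survey instrument or of any initiative in our sample.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PeerReviewInnovation:
    """Illustrative coding schema for one innovation, following the five dimensions."""
    name: str                  # initiative as reported in the survey
    object_of_review: str      # 1. what is reviewed (manuscript, preprint, data, code, ...)
    aims: List[str]            # 2. why review is performed (improve the work, support decisions)
    review_actors: str         # 3. who reviews, and how they are recruited and rewarded
    nature: str                # 4. how review is performed (criteria, reporting format, duration)
    openness: str              # 5. what information is visible, to whom, and when

# A hypothetical record, not taken from the data set:
example = PeerReviewInnovation(
    name="Hypothetical preprint commenting service",
    object_of_review="preprint",
    aims=["provide comments to improve the work"],
    review_actors="self-selected registered users",
    nature="free-form comments, no fixed criteria or deadline",
    openness="comments public; signing optional",
)
```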

We discuss the different categories here in relation to published literature. The first analytical dimension is the object of peer review. Traditionally, the object of peer review in the second half of the 20th century was scientific manuscripts, that is, otherwise not publicly accessible drafts of prospective scientific papers. This was not always the case, however. In historic forms of evaluating contributions in scholarly communication, articles published in journals were originally a subordinate form of publication that merely reported on experiments conducted in the context of scholarly societies (Csiszar, 2018). Recent innovation projects partly aim to again change or expand the focus of what is being reviewed. For example, they subject data sets and computer code used in scientific experiments to peer review, or they make arguments publicly accessible in the form of preprints and thus give rise to review formats that do not involve journal editors. Other objects outside the scope of this study, such as funding proposals and academic CVs (shown in gray in Figure 1), might also be accommodated in the taxonomy in future analyses.

Second is the aim of peer review. We here distinguish between two aims, namely, to provide comments on a scientific work with the aim of improving it, or to support decision-making in science, for example to decide whether or not a manuscript should be published, or whether or not a funding proposal should be granted. Some forms of peer review perform both functions, for example, reviewers for journals are usually asked to make comments which will improve a submission as well as make a recommendation on acceptance or rejection (often subject to its improvement). Reviews of preprints, however, are normally contributed to improve the paper and are separate from any decision-making process by a journal. Reviewers of grant proposals, for the sake of comparison, would usually only contribute to a decision-making function, albeit with justifications.

A third complex of questions focuses on the role of review actors. The concept of peer reviewer – itself a neologism coined in the 1960s (Horbach and Halffman, 2020), but making reference to the aristocratic connotation of the original notion of “peer” – suggests as key criteria special expertise and membership of a scientific community. However, the massive growth and differentiation of science combined with new demands regarding the societal relevance of publicly funded science make the definition of these features anything but straightforward (Nowotny et al., 2001; Ziman, 2001). We therefore interrogate innovations in terms of whether they imply particular criteria someone should meet to act as reviewer, for example regarding expertise or level of seniority.

Of similar analytical interest is how exactly reviewers are selected. In traditional journal-centric review processes, editors identify epistemically and socially suitable reviewers for manuscripts on a case-by-case basis (Crane, 1967; Vermeir, 2020). Yet the digitization of the scholarly publishing system creates new ways of recruiting reviewers, and big publishing companies offer infrastructural services that entail delegating elements of the review process to staff or AI, including automated algorithmically driven selection and recruitment of reviewers, with unclear implications for the style and scientific focus of review work (Horbach, 2020). Connected to this is the question whether there are any minimum thresholds regarding the number of reviewers as well as approaches to ensure diversity and demographic representativity. Having established this information, we also interrogate whether different experts are assigned distinct tasks and responsibilities. For example, in journal peer review, reviewers are usually asked to write comprehensive reports about the submission, whereas more specific forms of review focused on data sets or software code can be based on a division of labor.

A final comparative dimension under the same heading is whether innovations entail rewarding reviewers. The peer review system is sometimes conceptualized as a gift economy driven by perpetually renewed feelings of mutual indebtedness among members of academic communities (Kaltenbrunner et al., 2021). Yet, the invisibility of peer review labor also provides a powerful incentive against doing too much of it, and arguably ever more so in the context of career incentive systems that strongly emphasize publications. A potential area of innovation is therefore constituted by attempts to more formally acknowledge the work of reviewers. Yet, new incentives may not simply constitute a “solution” to a problem, but potentially reconfigure publishing practices in broader and unanticipated ways.

A fourth main analytical dimension is the nature of peer review. A perpetual subject of debate among scientists, criteria of “good research” are constantly updated and shaped through the process of scholarly communication itself (Guetzkow et al., 2004; Aksnes et al., 2019), for example, in the sense that collective standards of robustness change over time or insofar as certain journals end up cultivating a certain research style. Peer review innovations provide a potential new opportunity to intervene in this process in formal terms, namely, by specifying the focus and criteria of peer review (e.g. soundness-only, novelty, relevance). A difficulty here is that review practices are not simply learned in a single context, but as part of a broader socialization process in science and through a constant switching of roles (Kaltenbrunner and de Rijcke, 2019). Relevant questions to ask are therefore how innovations instruct reviewers about the aims of peer review and how exactly peer review evaluations are reported and integrated, for example, by means of a prestructured form that requires reviewers to address specific points, or rather in the shape of a free-form, essay-like report. Finally, we ask whether innovation projects impose any requirements regarding the maximum duration of a peer review cycle.

The final main category focuses on questions of openness and transparency, that is, are review reports made available, and is the identity of reviewers and authors made public or made visible to each other during the review process? Double-blind peer review is itself a historically rather recent phenomenon, and has never been uncontested in any field (Horbach, 2020; Taylor & Francis, 2022). Nevertheless, many innovations take double-blind peer review as a convention in need of rethinking, with various approaches to de-anonymizing peer review and publishing review reports being among the most widely discussed novelties in recent decades (Pontille and Torny, 2014; Ross-Hellauer, 2017; Wolfram et al., 2020). Perhaps because of that, terminology is inconsistent both among innovation actors and in published research analyzing innovation (Horbach, 2020). “Transparent peer review” usually refers to the practice of publishing the reviews for manuscripts that have been accepted for publication in a journal. In some cases, it also refers to the practice of publishing the names of the reviewers. The term “open peer review” is also often used.

4. Data collection

The data for this paper were collected through a survey organized by RoRI. Ethical approval was granted by the Ethics Review Committee of the Social Sciences of the Faculty of Social and Behavioural Sciences of Leiden University. The survey was sent to a broad range of actors, including commercial publishing companies, the publishing branches of various learned societies, journal editors, as well as not-for-profit organizations. The survey was in English, but we accepted responses also in the following languages: Chinese (Mandarin), German, Portuguese, Spanish and Russian. Potential respondents were identified through a combination of systematic sampling and snowballing. We relied on the ReimagineReview inventory of peer review innovation projects created by ASAPbio (n.d.) to identify respondents. We also advertised the survey through a blog post on the website of the Open Access Scholarly Publishing Association (OASPA), a non-profit trade association of open access journal and book publishers (Waltman et al., 2021a). In total, we received 95 pertinent submissions, that is, self-defined innovations, from 54 respondents, each of whom provided informed consent. This means that some respondents submitted more than one initiative. In some cases, the information provided in the survey responses was insufficient to describe the innovation projects in the level of detail required by our taxonomy. We therefore occasionally supplemented the survey data with desk research in the shape of online sources such as journal and publisher websites. In addition, we asked survey respondents to comment on an earlier version of this paper, which in a number of cases led them to clarify and expand on their original submissions. The full data set can be accessed via Figshare (Kaltenbrunner et al., 2022).

Our analysis of the data we collected is necessarily qualitative. We cannot, for example, generalize from our sample to the whole population of academic publishers. In any case, because every initiative is unique, each will have different effects on organizations of different sizes, serving different communities and applying to different published outputs. A second caveat is that our data collection was not geared to provide the basis for a historiography of peer review. Where possible, we indicate the original inventor of particular innovations, but we do not aim to offer an exhaustive, authoritative account of priority claims. Nevertheless, we believe that the data we have analyzed represent a rich resource, providing insight into current thinking and activity in this fast-moving area.

5. Results

5.1 Object of peer review

While many projects in our sample continue to focus on peer review of research articles, an overall rather common area of innovation is the object of peer review. One type of innovation project is registered reports, which are represented by two cases in our sample. Introduced independently but simultaneously by the journals Cortex and Perspectives on Psychological Science in 2012 (Chambers and Tzavella, 2021), registered reports entail submitting a research design, where scientists spell out a hypothesis and research methods for review before submitting the actual results, thereby encouraging them to stick to their original research design and to be transparent about deviations from it. The aim is to increase the quality of research by preventing misuse of the many degrees of freedom researchers typically have in their methodological choices, for example, selective use of data and what is considered an opportunistic reframing of research questions. Arguably because of their character as a disciplining instrument, however, the use of registered reports appears to be focused on fields where there is an established discourse about questionable research practices and “research waste,” as is the case in psychology (Chambers and Tzavella, 2021).

Another, more common, type of innovation in our data aims to expand the focus of peer review from research articles to data sets, source code and other digital artefacts, so as to improve the quality and robustness of research. Five initiatives in our sample encourage or require the deposition of data to facilitate independent assessment of research claims advanced by authors. Reviewers in such initiatives are encouraged to take into account the submitted data sets, but the review process remains otherwise unchanged. By contrast, the four initiatives offering tools and services to review source code or digital artefacts – the Dagstuhl Artefacts initiative, ReScience, a pilot project by PLOS Computational Biology focusing on reproducibility of biological models used in submissions, and a code review service for a selection of 19 journals in the Nature portfolio (a portfolio that is part of the Springer Nature group) – partly entail a parallel review process that is taken on by specialist reviewers.

The most common type of innovation regarding the object of review is the review of preprints, which in principle enables a wide range of researchers to comment on a scientific work. At the same time, preprint reviewing is not a single homogeneous practice. Submitted innovation projects in our sample include, firstly, dedicated platforms like arXiv (established in its current form by Paul Ginsparg in 1991) as well as bioRxiv and medRxiv (established in 2013 and 2019, respectively). Organizationally separate from journal peer review processes and operated in a nonprofit context, these allow scientists to post preprints in parallel to a journal submission.

Moreover, some initiatives in our sample build on these pioneering platforms and provide additional functionalities to them, thus emphasizing the thorough embedding of preprints in the publication culture of many scientific fields and the increasingly infrastructural character of preprint servers. SciRate and PREreview are websites for recommending and commenting on preprints hosted elsewhere, as well as for inviting comments on one's own preprints. PREreview is platform-agnostic and merely requires that preprints have a DOI or an arXiv ID, while SciRate focuses on preprints on arXiv. Both initiatives require users to register to access the commenting and recommending functionalities. Both PREreview and SciRate run on open source infrastructure.

Another group of preprint initiatives in our sample is organizationally connected to big publishers and enables authors to optionally post a preprint of a submitted article. The scope of these initiatives is sometimes very significant. In Review, a free preprint service on the platform Research Square, which is used by Springer Nature and its subsidiary BMC, covers 486 journals. It allows authors to publish preprints in parallel to a journal submission and receive comments on them via the platform, while a regular peer review process is organized by the publisher, with progress updates shown to the submitting author on the preprint server Research Square. This seems to be an instance of a successful innovation that was initially driven by academic practice in particular fields but has become so widespread that it is now adopted by commercial actors on a large scale (Chiarelli et al., 2019; Delfanti, 2016). Another instance is Review Commons, a not-for-profit initiative used by EMBO Press, The Company of Biologists, and the journal eLife (as well as other organizations that did not submit replies to our survey), which all together serve 17 journals. Some publishers, moreover, explicitly encourage preprint deposition, for example across the portfolio of BMJ-branded journals.

All of the preprint initiatives mentioned so far tie in with publication and review practices built around preprints, but do not in themselves mandate any particular review process. This distinguishes them from projects where preprint posting is integrated in the review process, such as F1000Research, recently acquired by Taylor & Francis, and the publishing platform Access Microbiology, currently being set up by the Microbiology Society. Rather than connecting preprint publishing to traditional journal publishing, these initiatives offer an alternative workflow that combines elements of preprinting and journal publishing. After some basic checks, articles of various types are immediately posted on the platform, reviews are then commissioned and are also published on the platform and authors are invited to make revisions to their article, which are again published on the platform. In the case of Access Microbiology, submissions are handled by academic editors whose identity and decisions are made public alongside the submission. The platform eLife has recently moved in a similar direction. It now peer reviews preprints and makes public reviews written for readers (as well as providing private recommendations for the authors), although eLife relies on third-party preprint servers rather than integrating preprints in its own platform.

Another special case is Plaudit, which allows users with an ORCID account to endorse any digital object with a DOI, which can be considered a form of peer review. There are otherwise no criteria for what type of document it must be or in what form it has been published. Plaudit can be used via an extension for the Google Chrome browser.

Finally, our sample includes two major bibliographic indexes, namely, the European Reference Index for the Humanities (ERIH PLUS) and the Scientific Electronic Library Online (SciELO). The former was originally established in 2005 by the European Science Foundation and is now operated by the Norwegian Centre for Research Data. The latter was established in Brazil in 1997 and today comprises OA journals from 14 countries in Latin America and South Africa, as well as in Portugal and Spain. The two initiatives can be seen as a form of meta-peer review, insofar as they aim to identify publishing outlets that meet certain requirements regarding the quality and rigor of peer review. Both initiatives have established formal procedures to determine whether particular outlets should be included in the index. These are carried out by designated advisory boards and academic experts, drawing on assessment criteria that we will discuss in somewhat more detail below.

5.2 Aim of peer review

Most of the innovations covered in our survey were designed to provide comments on a scientific work with the aim of improving it, or to support decision-making in science, in particular to accept or reject a scientific work for publication. Peer review carried out as part of a journal submission process involves both of these strands. However, some of the innovations reported to us were “journal agnostic,” and so were only directly associated with providing comments on a scientific work rather than directly informing a decision-making process, for example, to accept or reject an output for publication. Review of preprints – as afforded by PREreview and SciRate – is obviously journal agnostic, given that such platforms are not connected to any particular journal, even though journals may encourage preprint deposition.

5.3 Role of peer review actors

A first question we are interested in under this heading is whether initiatives identify explicit criteria someone should meet to act as reviewer, for example, in terms of expertise or level of seniority. Our sample contains two examples that involve patients as reviewers for journals. Both obviously focus on biomedical research with a particular emphasis on its practical or clinical relevance: one is the BMJ, and the other Research Involvement and Engagement (published by BMC). In the former, patient reviewers are invited on a selective basis depending on the submission, while the latter journal foresees the regular use of two patient reviewers and two academic reviewers for all manuscripts. Of course, this raises the question of what constitutes a “peer” in peer review, and whether patients are considered to have that status – some would characterize patient review as “lay” or “non-expert” review in contradistinction to peer review. In the case of these two journals, the aim is to broaden the category of “peers” in the hope that this will make peer review more conducive to the production of practically relevant knowledge in which patients might be judged to have a particular kind of expertise.

Aside from these cases, there are a significant number of initiatives that involve forms of evaluation of submissions carried out by professional staff in publishing companies or organizations operating preprint servers. This includes a substantial amount of screening of preprints for plagiarism, scope and adherence to ethical and technical guidelines (although not all of these checks are made transparent to users). Platforms such as arXiv, bioRxiv and medRxiv already have relatively rigorous screening in place, and there are indications that large publishers are making efforts to more thoroughly integrate preprint publishing in the journal publishing process and create trust in it through additional checks and balances (Nature portfolio, n.d.; Russell et al., 2021).

Professional staff working for publishing companies are also routinely involved in screening manuscripts sent to journals. This form of evaluation usually focuses on adherence to some basic formal guidelines, but also on thematic fit, plagiarism checks, image manipulation checks and language use. On the last of these, publisher copy editors may offer support before a manuscript is passed on to journal editors, for example in the case of Cambridge University Press journals. In contrast to the patient review initiatives, the involvement of publisher staff in peer review is often framed more as an additional service to journals or authors to improve or secure the quality of submissions. Such screening in both preprint and journal publishing is implicitly based on a distinction between substantive scientific aspects of a submission, which are reserved for domain specialists, and procedural or practical aspects, which can be delegated to other kinds of professionals, such as publisher staff. Naturally, the boundary will often be fuzzy.

As mentioned above, our sample also contains two journal indexing initiatives, ERIH PLUS and SciELO. Being international undertakings, they rely on review procedures involving advisory boards and individual academic experts from diverse countries to process applications for inclusion in the index. This is arguably based on the notion that a degree of familiarity with national conventions of scholarly publishing is necessary to properly judge the rigor of journal peer review processes.

Another differentiating feature of innovation projects relating to the role of reviewers is what they mean for the way reviewers are selected. For example, are they picked by editors or can they sign up individually? Firstly, screening of preprints and manuscripts before actual review, as well as practices like code review, often means that review actors are assigned in a way that is disembedded from disciplinary community structures, and often according to a platform logic – for example, publisher staff are responsible for broadly defined research fields that are part of an organizational subdivision.

There are, moreover, a considerable number of initiatives that involve an access-controlled forum where users can register to recommend and comment on submissions. The respective initiatives in our sample are F1000, Copernicus Publishing and SciPost (in all three cases in addition to regular review), ReCode (open to registered users who volunteer), Thieme Publishing (so-called crowd-sourced peer review), bioRxiv, medRxiv, SciRate, PREreview, as well as four Royal Society journals. Instead of an editor judging the topical expertise of reviewers for manuscripts on a case-by-case basis, reviewers self-select for submissions after undergoing some form of verification of identity and competence upon signing up to the forum. This massively expands opportunities for users to comment on submissions, and it in principle expands the community of potential reviewers from the often disciplinary readership of a particular journal to the much more extensive but scientifically less tightly knit group of users of a digital platform. It is not always clear who is responsible for such vetting of forum users, however, and how stringent the criteria are – for PREreview, for example, it is sufficient to have an ORCID account.

In addition, there are a number of initiatives that allow or encourage reviewers to invite co-reviewers, typically with the aim of mentoring early career researchers (ECRs). Such a system is offered for the complete portfolio of the Geological Society of London, for two journals published by Oxford University Press (OUP), a handful of Wiley journals (exact number not specified), as well as for a few BMC journals (where researchers who decline an invitation to review can agree to at least co-mentor a younger colleague). All Nature Portfolio journals, moreover, allow principal investigators to bring on a junior colleague as a reviewer, and 14 Nature Reviews journals offer an ECR mentoring program that also includes special training resources. Approaches of this sort are distinct from the case of access-controlled forums insofar as selection of reviewers relies on acquaintances and assessment of suitability by editor-invited reviewers, rather than on a registration supervised by forum moderators. Review work is thereby arguably more strongly embedded within disciplinary communities. A few initiatives, moreover, offer the possibility for authors to propose and/or invite reviewers, for example, the Academy Submission Route initiative of OUP (in addition to independent reviewers), most journals in the Wiley portfolio, F1000 and four Royal Society journals. Suggestions are typically non-binding, however, and may be rejected by editors.

Yet another innovation with implications for reviewer recruitment is the use of reviewer databases. Many publishers offer such databases to journal editors as an optional service, but there are also cases where their use is built into the editorial process as a default by the publisher. Frontiers has a highly automated review process where a large number of potential reviewers are invited with automated messages, until a sufficient number of them accept. Reviewers are selected algorithmically by matching their publication histories with the topic of the manuscript to be reviewed. Optionally, editors may manually invite additional reviewers. The reviewer pool resulting from the automated process far exceeds the social networks of any particular editor or even discipline, which usually constitute an “outer boundary” for reviewer recruitment in disciplinary gift economies (Kaltenbrunner et al., 2021). The effects of this strategy on the content of the review reports are unclear.
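To illustrate the general logic of such algorithmic matching – and emphatically not Frontiers' actual system, whose internal details were not reported to us – a minimal keyword-overlap matcher might look as follows; the profile data and function names are hypothetical.

```python
from typing import Dict, List, Set

def rank_candidate_reviewers(manuscript_topics: Set[str],
                             reviewer_profiles: Dict[str, Set[str]]) -> List[str]:
    """Rank candidate reviewers by the overlap between their publication-history
    topics and the topics of the manuscript; invitations would then be sent
    automatically down this list until enough reviewers accept."""
    scores = {name: len(manuscript_topics & topics)
              for name, topics in reviewer_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage:
profiles = {
    "Reviewer A": {"peer review", "bibliometrics", "open science"},
    "Reviewer B": {"metagenomics", "sequencing"},
}
invitation_order = rank_candidate_reviewers({"peer review", "open science"}, profiles)
```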

Few initiatives in our data mention minimum thresholds regarding the number of reviewers. UCL Open: Environment indicates a minimum of two reviewers, while the BMJ and BMC patient review initiatives require two scientific and two patient reviewers. However, there are several co-reviewing and ECR mentoring initiatives underway, which imply that at least two reviewers work together in writing a review report. This goes for 14 Springer Nature journals, the portfolio of the Geological Society of London, four OUP journals, the journal of the ENT Society of Portugal, an unspecified number of journals published by the American Society for Microbiology and the journals published by The Company of Biologists. Naturally, all innovations involving access-controlled forums imply a potentially larger number of reviewers, in the sense that they invite open comments by registered users on top of commissioned reviews.

Only relatively few initiatives in our sample aim to diversify reviewers. The publishing branch of the Institute of Physics (IOP) mentions an effort to diversify the reviewer pool by bringing more women onto editorial boards and by encouraging journals to invite more women and scientists from non-Western countries as reviewers, and a similar initiative is pursued by the Geological Society of London. The Journal of Evidence-based Health Care indicates the aim of increasing diversity as part of its open peer review initiative, but this seems to be limited to de-anonymization of reviewers, and thus to creating transparency in gauging diversity of reviewers, rather than actively diversifying recruitment processes. The apparent overall lack of such initiatives amongst contributors to our data (which included a significant number of large publishers) is noteworthy, since it is apparent that publishers have equity, diversity and inclusion initiatives in other areas, focusing on issues such as gender, as well as attempts to widen geographical diversity, often by encouraging contributions from outside the Global North. It might be that further initiatives in this area will be developed in the coming years, for instance, in the context of the “Joint commitment for action on inclusion and diversity in publishing,” an initiative led by the Royal Society of Chemistry and supported by a large number of publishers (Royal Society of Chemistry, n.d.).

The question of role specialization amongst reviewers is an interesting one: if peer review is performed by multiple review actors, do different reviewers have different tasks and responsibilities (e.g. distinction between reviewers, associate editors and editors-in-chief)? The initiatives submitted to us usually do not contain specific information on this. It is safe to assume, however, that mentoring initiatives for younger scientists will tend to affect distribution of reviewer tasks, since such initiatives imply a situation where senior reviewers direct junior colleagues in how to go about the review. Moreover, and as we have pointed out above, preprint review usually implies a special screening role for professional staff in organizations hosting preprint servers. Regarding journal publishing, many publishers, moreover, arrange for an initial screening of manuscripts for plagiarism, language and rough topical fit. This is done by publisher staff and may also now involve the use of AI tools (e.g. in the case of the publishing platform Access Microbiology). After that, editors take over to screen manuscripts and invite reviewers for more substantive and domain-specific assessment.

For self-selecting review processes happening in forums, whilst there may not be a formally arranged specialization, there will potentially be an emergent self-coordinating dynamic at play, where individual comments build on each other, or where comments focus on a small number of submissions. This may lead to imbalances in the review process, with no central authority in the shape of an editor to steer the process to ensure that all elements of a submission are considered in equal measure. But note that comments on arXiv and bioRxiv are moderated to ensure scientific relevance, and that not all forum environments operate with comments in the first place. PREreview, for example, is a platform for posting either “rapid” or “full” reviews that require users to fill in a more or less fine-grained list of questions. Moreover, with the exception of the so-called crowd review initiative by Thieme Publishing, forum-based review is carried out in addition to commissioned reviews, which may provide more balance.

In the case of patient review initiatives, there are, of course, more clearly defined review foci. In the BMC journal Research Involvement and Engagement, manuscripts are handled jointly by an academic editor and a patient editor. Patient reviewers focus on practical relevance, and academic editors focus on robustness. In the BMJ patient review process, submissions include lay summaries on which patient reviewers are invited to comment, following the reviewer guidelines provided by the journal.

Code review initiatives obviously single out code as a distinct review task. Code review is offered for 19 Nature journals, and the need for it is decided by the editor. ReScience code review is hosted on GitHub and offers interactive commenting by members who have an account. The PLOS Computational Biology pilot initiative offers peer review of biological models by fellow academics, supported by an external academic center specializing in the reproducibility of code.

Finally, do innovations entail a specific way of rewarding reviewers? This is again a very common area of innovation that aims to solve a long-standing problem, namely, that of the invisibility of peer review labor, and therefore the lack of credit attached to it. Also, it is a type of innovation where individual initiatives can readily build on each other. We can distinguish two thrusts of innovation. One simply consists in allowing reviewers to make the fact of their work visible through links to established platforms like ORCID or Publons (established in 2012 and acquired by Clarivate Analytics in 2017). Among respondents to our survey, this is done for the complete portfolio of the Geological Society of London, four EMBO Press journals, the complete journal portfolio of the American Society for Microbiology, all IOP journals, nearly all journals published by Wiley, one MIT Press journal, two OUP journals in the life sciences, the journal Fennia, the journals of the Royal Society, BMJ-branded journals and all Springer Nature journals. There are probably many more journals that offer services of this type, but they did not report this in our survey as an innovation. Moreover, all forms of open peer review that offer reviewers the possibility to publish their names can be seen to also serve the same function.

Another type of innovation does not merely aim to make invisible labor visible, but to incentivize review work by connecting it in more immediate ways to activities that are explicitly rewarded, in particular to publishing. The Journal of Evidence-Based Healthcare makes reviews themselves citable and thus turns them into publications in their own right, while SciPost publishes signed reports to encourage reviewers to post high-quality comments/reports. The OA publisher PeerJ has a well-established approach of not only rewarding reviewers with contribution points, but also compensating them with an APC discount for future publications of their own. Other publishers offer incentives such as access to their content (cf. Emerald Publishing, 2021).

5.4 Nature of peer review

A further key differentiating element of peer review initiatives is what they entail for the focus and criteria of the peer review process. For example, are they meant to evaluate specific features of a submission, such as novelty, significance, relevance to certain audiences or its inherent methodological and conceptual soundness? A few initiatives in our sample that entail screening-type review reserved for publisher staff spell out the foci of initial screening and/or digital tools for screening in more detail, usually to emphasize the added value they bring for journals. Examples include plagiarism checks as offered by all bigger publishers, language checks prior to regular review as inter alia performed by Cambridge University Press and AI-supported image screening in Wiley and Frontiers journals. Usually, the criteria for these evaluative services are such that they can be universally applied to submissions across fields.

There were three publishing outlets in our sample that explicitly mandate a soundness-only review to reviewers and editors. This is part of an agenda to combat review practices that effectively assess the perceived impact or importance of a submission, which is seen as a problem for scientific progress. The cases are the Health Psychology Bulletin, Access Microbiology, and four journals published by the Royal Society. This, of course, is a practice now widely adopted by so-called mega-journals, having first been used by PLOS ONE. Soundness-only review thus continues to become more widespread, perhaps spurred by the growing importance of OA publishing.

Some innovations place special emphasis on particular review criteria, namely, reproducibility, and/or the inclusion of source data. EMBO Press requires inclusion of source data to avoid questionable practices such as cherry-picking results and p-hacking. The journal Evidence-based Healthcare explicitly encourages reviewers to check how robust the evidence base of manuscripts is, and the two patient review projects (BMC Research Involvement and Engagement and British Medical Journal) obviously entail a focus on practical relevance to patients. Springer Nature life sciences journals use a reporting checklist to increase transparency of reporting standards, and six Springer Nature journals that are part of a transferable review pilot project put a special focus on reproducibility as part of their review process. The three registered report projects in our data by Wiley, the Royal Society and Springer Nature (for multiple titles in the Nature Portfolio and BMC) focus on robustness and methodology in line with the constrained nature of the output itself. Obviously, code review initiatives similarly require a special focus on code reproducibility. The Dagstuhl Artifacts initiative, for example, ensures that the digital artifact is well documented, easy to reuse, consistent and complete, and the PLOS Computational Biology pilot requires separate review of biological models used in a paper.

There is, moreover, a range of interesting but more circumscribed experiments that deserve mentioning. OUP offers a “no revisions” review for two of its journals, meaning that the outcome of peer review is either publish as is (possibly with minor revisions) or reject. Under the label of “open abstracts,” the journal Internet Policy Review publishes drafts of papers that are subject to rapid open reviews no longer than a paragraph, meant to give authors feedback on ongoing work.

Finally, ERIH PLUS and SciELO constitute meta-review initiatives focusing on creating indexes of reputable journals. Indexation in ERIH PLUS inter alia involves desk research to check whether journals have explicit procedures for external peer review in place; whether there is an academic editorial board whose members are affiliated with universities and other independent research organizations; and ensuring that no more than two-thirds of the authors published in the journal are affiliated with the same institution. All information required for such assessment must be publicly available. The indexation process used for SciELO is similar, although the specific criteria for inclusion are defined in more detail. Journals must, for example, have an editorial team with academic affiliations; rely on a clearly structured review process using plagiarism checks and external referees with suitable expertise; document their submission rates and review turnaround time; as well as document their citation patterns in comparison to journals with a similar profile. From 2021 onward, there is the additional requirement that journals demand that submitted manuscripts cite and reference all data, software code and other materials used in or generated by the research.
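To illustrate how such meta-review criteria could in principle be operationalized, the sketch below encodes three of the ERIH PLUS-style checks described above as a simple rule. The field names are our own invention, and the sketch does not reflect the actual assessment workflow of either index.

```python
from typing import Dict

def meets_basic_inclusion_criteria(journal: Dict) -> bool:
    """Rough eligibility check loosely modelled on the criteria described above:
    documented external peer review, an academically affiliated editorial board,
    and no more than two-thirds of authors from a single institution."""
    has_external_review = journal["external_peer_review_documented"]
    has_academic_board = journal["editorial_board_academically_affiliated"]
    counts = journal["authors_per_institution"]   # e.g. {"Univ A": 40, "Univ B": 25}
    total = sum(counts.values())
    max_share = max(counts.values()) / total if total else 1.0
    return has_external_review and has_academic_board and max_share <= 2 / 3

# Hypothetical example:
candidate = {
    "external_peer_review_documented": True,
    "editorial_board_academically_affiliated": True,
    "authors_per_institution": {"Univ A": 40, "Univ B": 25, "Univ C": 15},
}
eligible = meets_basic_inclusion_criteria(candidate)  # True in this example
```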

A crucial related issue is how the nature of peer review is communicated to reviewers. Our sample contains few explicit answers, but it is common for journals and review platforms to have some form of “guidelines for reviewers.” There are also cases where instructions for reviewers are themselves a key part of an innovation effort. The two patient reviewer initiatives explicitly highlight special guidelines to instruct patient reviewers about what to focus on and how to frame their reviews (inter alia by means of sample reviews). The preprint platform PREreview has a code of conduct for reviewers, including the expectation that reviewers should be “constructive” and “humble,” among other things. We are also aware of other initiatives outside our data set attempting to change the tone of peer review reports, including calls for “academic kindness in peer review” (Willis, 2020). In some cases, the setup of particular initiatives itself gives an indication of how review actors are instructed. All six ECR-focused initiatives obviously entail a special mentoring-based procedure where senior researchers teach junior colleagues what to focus on. This is then part of a community-embedded review process. We also surmise that open commenting has community-based learning effects, in the sense that users instruct each other about the use of the commenting function. Dedicated reviewer training in the form of digital workshops and/or digital resources (including bias awareness raising), by contrast, is mentioned by the IOP journals, by the journal Annals of KEMU and as part of the ECR mentoring initiative offered for Nature Reviews journals. Worth mentioning in this regard is also the cross-publisher COVID-19 Rapid Review project, launched by a group of publishers and related organizations in April 2020, with the aim of maximizing the efficiency and speed of peer review of COVID-19 research. Besides participating actors committing themselves to preprinting COVID-19 research, making research outputs openly accessible and speeding up publication times of COVID-19 articles (Waltman et al., 2021b), the initiative involved developing guidelines for preprint review as well as training of vendor staff to carry out checks on articles.

How do scientific experts who perform peer review report their evaluation of a scientific work and how are these evaluations integrated? This appears to be a less visible area of innovation, although structured reporting is common and has been the subject of experiments historically. EMBO Press, Unisa Press, Frontiers as well as eLife all explicitly mention structured review reports as part of an initiative to facilitate the process of drawing together individual review reports. In many cases, the manner of reporting changes as a result of other features. There are, for example, numerous access-controlled forums and cross-reviewer commenting initiatives in our sample (e.g. EMBO Press, Science (AAAS), The Company of Biologists), the latter being pioneered by Atmospheric Chemistry and Physics and the BMJ in the mid-2000s. These imply special reporting dynamics that arise from how the review platform is set up. There is usually no specific reporting format for voluntary users, simply a commenting function that allows for open-ended and self-assigned commenting, with reviewers arguably often reacting to each other. The remaining initiatives imply that wherever individual reviewer reports are commissioned, they are drawn together by an editor or a group of editors, thus corresponding to the traditional model of peer review, where editors arbitrate between different reviewers.
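By way of illustration, a prestructured report reduces each review to a fixed set of fields, which makes it straightforward for an editor to compare and draw together several reports. The field names below are generic examples of our own, not the actual templates used by EMBO Press, Unisa Press, Frontiers or eLife.

```python
from typing import Dict, List

# Generic, hypothetical fields of a structured review form.
REPORT_FIELDS = ["summary", "major_concerns", "minor_concerns", "recommendation"]

def collate_reports(reports: List[Dict[str, str]]) -> Dict[str, List[str]]:
    """Group the answers of several reviewers field by field, so that an editor
    can compare them side by side when integrating the reviews."""
    return {f: [r.get(f, "") for r in reports] for f in REPORT_FIELDS}

# Hypothetical usage with two structured reports:
collated = collate_reports([
    {"summary": "Solid methods", "recommendation": "minor revisions"},
    {"summary": "Claims overstated", "major_concerns": "small sample", "recommendation": "major revisions"},
])
```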

We found that few initiatives aim to specify a maximum review time. The American Society for Microbiology indicates, for a few of its journals, a timeframe of at most four weeks (common to many journals from a range of publishers), the journal Internet Policy Review offers rapid reviews for drafts (“Open Abstracts”) and the journal Life Science Alliance specifies a maximum of one round of review, since all submitted articles are forwarded from other journals and thus have already been reviewed. The COVID-19 Rapid Review initiative obviously aims to speed up the review process, but does not set a threshold. The well-recognized problem of long peer review times, which was a key driver of historic innovations such as PLOS ONE, seemingly remains a challenging one.

5.5 Open/transparent peer review

The final main category focuses on questions of openness and transparency: are review reports made available, and is the identity of reviewers made public or made visible to reviewers and authors? Open/transparent peer review is a loose label for various approaches to increased openness and transparency in peer review. Ross-Hellauer (2017) suggests that the term “open peer review” appeared in the scholarly literature as early as the 1990s, but became widely used only around the mid-2000s. The term “transparent peer review” is often taken to signify that review reports are published alongside articles (sometimes signed), with The EMBO Journal (EMBO Communications, 2019) coining the term in 2009 while drawing on earlier models by journals such as BMJ, Atmospheric Chemistry and Physics, as well as the BMC journals. Our sample suggests that open/transparent peer review remains a major area of experimentation, and as such is still far from a homogeneous practice (see also Wolfram et al., 2020).

Ten publishers and learned societies in our sample offer to publish the reviews of accepted manuscripts, which often goes together with the option of reviewers signing their reports by name. Examples in our sample include two journals from The Company of Biologists, the OA journals of IOP Publishing, one MIT Press journal, two OUP journals, four Royal Society journals, two journals of the Royal Society of Chemistry, a few journals in the portfolio of SAGE (exact number not specified), one Ubiquity Press journal, more than seventy Wiley journals, the BMJ-branded journals (published by the British Medical Association), one journal of the Geological Society of London (with three more offering the possibility of publishing reviewer names but not the review reports), as well as nine Nature journals and many BMC journals (exact number again not specified). Most commonly, publication of reviews and signing of review reports are optional and depend on the consent of authors and/or reviewers. Only a few journals in our sample publish the names of reviewers on a mandatory basis, namely the BMJ-branded journals and the journals of the Geological Society of London. There are also a few independent journals in our sample that publish the reviews of accepted manuscripts. The journal Fennia optionally publishes signed or unsigned review reports, and eLife makes public the reviews written for readers in addition to providing private recommendations to the authors. The journal Internet Policy Review discloses reviewer identities to authors of manuscripts (and vice versa), while for its “open abstracts” review functionality, review reports can be signed and made public to readers as well. The publishing platform Access Microbiology publishes signed or unsigned review reports of all submissions, irrespective of the review outcomes (i.e. revision decisions and “No Longer Under Review”).

At the same time, a whole range of journals have begun to offer an optional double-blind approach to peer review, partly in parallel with the option of open/transparent peer review. All Nature-branded journals and Communications journals have offered double-blind peer review from 2015 onward, and IOP Publishing simultaneously offers transparent peer review for its fully open access journals and double-blind peer review as a standard for all other journals in its portfolio. The reasons are not always fully explained, but in the case of IOP Publishing, reducing bias in peer review is the motivation for offering double-blind peer review: “Double-anonymous peer review – where the reviewer and author identities are concealed – has the potential to reduce bias with respect to gender, race, country of origin or affiliation which should lead to a more equitable system” (Harper, 2020).

In the traditional journal peer review system, each reviewer performs peer review independently, without interacting with other reviewers. While this holds true for the majority of review practices submitted as part of our sample, a number of innovation projects focus on more collaborative ways of doing peer review, under labels such as co-reviewing, cross-reviewer commenting, collaborative review and crowd review. As mentioned above, the Geological Society of London, OUP, a few Wiley journals (exact number not specified) and BMC offer the option of co-reviewing, in which an early career researcher and a senior researcher jointly perform peer review and the senior researcher serves as a mentor for the early career researcher. Another approach used by a number of organizations is cross-reviewer commenting, which offers reviewers the option to comment on each other’s reviews before the reviews are sent to the authors. EMBO, The Company of Biologists, Review Commons, Science as well as eLife use cross-reviewer commenting. A similar approach is taken by Frontiers, which refers to it as “collaborative review”: authors can exchange comments with reviewers and editors in a discussion forum related to the manuscript. A more radical version of these ideas is the crowd review approach taken by Thieme, in which a submission to a journal is shared with a crowd of 50–100 individuals who are invited to comment on it. The reviewers can also respond to each other’s comments.

The final type of innovation in our sample consists in making review reports transferable, that is, a system in which rejected submissions are cascaded from one journal to another alongside the review reports. Typically organized at the level of a single publisher's portfolio, transferable reviews are related to creating a form of transparency, but also constitute a way of managing peer review as an economy, in the sense of trying to prevent redundant review work. Our sample contains six examples of transferable review initiatives, including a handful of OUP journals in ornithology, four EMBO Press journals, Wiley's OA journals and chemistry titles, the journal mSphere, as well as BMJ submissions with reviewed preprints. The Company of Biologists reported that they participate in a cell biology transfer network that includes journals from multiple publishers. Furthermore, OUP and Hindawi reported that they participate in the COVID-19 Rapid Review initiative. Transfer of reviews between journals from different publishers is one of the key elements of this initiative, although it appears that, at the time of writing, no actual transfers have yet taken place (Waltman et al., 2021b). Other publishers participating in the COVID-19 Rapid Review initiative did not report this to us as an innovation.

A variation of transferable reviews is a system in which manuscripts are submitted to a pool of journals, whose editors collectively decide which journal the manuscripts should be forwarded to. Here, the selection of a suitable outlet is done by editors themselves rather than by authors. There are three examples of this in our sample: EMBO Press journals, a small group of Nature Portfolio journals in the context of a pilot project called “Guided OA” and the journal Life Science Alliance, the latter operating even across publishers. Described as a “trickle-down” OA journal jointly owned by Rockefeller University, EMBO and Cold Spring Harbor Laboratory, Life Science Alliance provides a publication channel for papers that did not survive the very selective editorial processes at nine journals owned by the three partners. It operates with a rapid consultation process between editorial teams about whether or not to offer manuscript transfer before the rejection decision is communicated to the authors. Review reports – where they exist – can be transferred as well.

A special approach geared toward economizing peer review labor is also used by the Association for Computational Linguistics (ACL), a learned society in a field where papers presented at conferences are the main form of scholarly communication. Faced with a proliferation of conference submissions and the difficulty of mobilizing enough reviewers, the Association now experiments with a so-called Rolling Reviews approach. Authors submit a paper not to a single conference but to a submission platform, and if it is positively reviewed, the paper can be assigned to one of a number of thematically suitable conferences affiliated with the ACL. Positively reviewed papers for which there is no more space at any conference can, moreover, be directly published in a dedicated OA journal.

Yet another variant is the Academy Submission route offered by OUP to members of the European Academy of Microbiology who target the journal microLife. Prospective authors invite fellow Academy members as peers to review their manuscripts and then submit the manuscripts to the journal together with the reviews. The editor-in-chief then makes a publication decision based on these reviews.

6. Discussion and conclusions

As we have argued at the beginning of this paper, previous literature has largely either focused on specific types of innovations (Van Rooyen et al., 1999; Pontille and Torny, 2014; Fitzpatrick, 2009) or created overviews from particular activist or otherwise normatively informed perspectives on how peer review ought to be organized (Tennant et al., 2017; Barroga, 2020; Bruce et al., 2016; Horbach and Halffman, 2020). Our analysis has instead focused on innovations as a distinct object of study in their own right, unpacking ongoing initiatives according to a five-part taxonomical framework.

As we have explained above, the taxonomy is mainly based on inductively identified categories that capture relevant comparative dimensions for our empirical materials. An important design choice here was to avoid imposing conceptual definitions on the data, for example by ordering the innovations in terms of their divergence from some assumed standard form of peer review. Instead, we ordered the material according to open questions, such as: what is the object of review, or how are reviewers selected? This also means that our taxonomy could easily be expanded to describe and compare review practices on a broader and simultaneously more fine-grained scale. For example, it could be made to capture review practices in a funding context if expanded with an additional set of questions about the setup of review panels and the inclusion of additional review objects such as CVs and research proposals. Some of the existing categories could, moreover, be expanded to capture more specific details about review practices, for example to compare the specifics of different incentive systems for reviewers.

The comparison of the empirical materials, moreover, allows us to observe a range of cross-cutting trends, which we will now discuss. We will specifically focus on tensions between the diverse thrusts of innovation, which highlight the need for coordination between otherwise independent innovation activities.

Our data firstly suggests that many innovations in the categories “objects” and “nature of review” amount to promoting more rigorous quality control, namely, by multiplying the objects of, and occasions for, review. This is one of the main aims of all initiatives involving explicit peer review of source code and data sets, peer review for reproducibility, as well as registered reports. Of course, such increased rigor will also tend to further increase the amount and cost of review work, which is already putting a heavy burden on the research system (Aczel et al., 2021).

Simultaneously, numerous innovations in the categories “role of reviewers” and “transparency of review” aim to increase the efficiency of peer review, which can, to some extent, be seen as a remedy for the growing amount and cost of review work. There are several initiatives based either on making review reports transferable across journals or on having authors submit to a group of journals in the first place, whose editors then collectively decide which outlet is best suited to handle a given manuscript. Yet another approach to increasing review efficiency is to introduce or reaffirm distinctions between procedural and substantive review, whereby tasks labeled as procedural can be delegated to publisher staff or AI. All of these approaches thus rely on removing disciplinary or social boundaries that hamper the ability of particular actors to review certain types of objects. Peer review forums built around preprint platforms, moreover, allow registered users to self-assign review tasks. Further research would be needed to assess to what extent efficiency-oriented innovations affect the outcomes of peer review, for instance by reducing the depth or thoroughness of reviews.

A further line can be drawn between initiatives that explicitly encourage a diversity of opinions in peer review and those that operate with a more universal notion of the type of quality control that should be achieved. Innovations that remove social and disciplinary boundaries to reviewing – such as open peer review of preprints as afforded by initiatives like PREreview – generally seem to fall into the former category. The same can be said for initiatives that de-anonymize the review process. Such initiatives encourage a deliberative approach in which potentially opposed speakers can explicitly address each other, even though there is also a risk that reviewers may not feel comfortable giving their frank opinion in a de-anonymized setting. On the other hand, there are initiatives that assume a more singular treatment of review objects. This includes any automated or partly automated quality checks (plagiarism, language, etc.) as well as registered reports. Registered reports assume a specific understanding of how the research process should be organized, based on an epistemological ideal of particular forms of experimental science.

Although our empirical material constitutes a snapshot rather than a systematic overview, it makes clear that innovations in the area of “transparency of review” do not constitute a linear development toward an agreed-upon idea of transparency. Instead, we observe diverse and often field- and journal-specific trends: making review reports and reviewer identities transparent is now a widely offered possibility, but there are also some signs of a trend toward abandoning mandatory disclosure of reviewer identities (BMC) and toward double-blind peer review (IOP Publishing). This suggests that different philosophies are involved: one assuming that disclosing the identities of authors and reviewers is useful for accountability in peer review, and another presupposing that the objectivity of peer review requires the anonymity of authors and reviewers (see also Kiermer and Mudditt, 2021).

In sum, it appears that innovation activities pull not just in diverse but partly also in mutually opposed directions. Several initiatives aim to make peer review more efficient and less costly, while other initiatives aim to promote the rigor of peer review, which is likely to increase costs; innovations based on a singular notion of “good scientific practice” seem to be at odds with more pluralistic understandings of quality in scientific work; and the idea of transparency in peer review is the antithesis to the notion that objectivity requires anonymization. Given how vast the field of peer review innovation has become, some friction is to be expected, and trends in opposing directions arguably result in part from the adaptation of innovations to local contexts. Nevertheless, the fault lines charted above suggest a need for coordination, to prevent the very success of individual innovations from directly undermining or coming at the expense of others. In future work within RoRI, we hope to address this together with respondents to our survey as well as other scholarly communication organizations.

Figures

Figure 1. Taxonomy of peer review developed during analysis

Data availability: The survey responses are available on Figshare (Kaltenbrunner et al., 2022).

References

Aczel, B., Szaszi, B. and Holcombe, A.O. (2021), “A billion-dollar donation: estimating the cost of researchers' time spent on peer review”, Research Integrity and Peer Review, Vol. 6, p. 14, doi: 10.1186/s41073-021-00118-2.

Aksnes, D.W., Langfeldt, L. and Wouters, P. (2019), “Citations, citation indicators, and research quality: an overview of basic concepts and theories”, Sage Open, Vol. 9 No. 1, doi: 10.1177/2158244019829575.

ASAPbio (n.d.), “Reimagine review”, available at: https://reimaginereview.asapbio.org/.

Barroga, E. (2020), “Innovative strategies for peer review”, Journal of Korean Medical Science, Vol. 35 No. 20, e138, doi: 10.3346/jkms.2020.35.e138.

Bowker, G. and Star, S.L. (1999), Sorting Things Out: Classification and its Consequences, The MIT Press, Cambridge.

Bravo, G., Grimaldo, F., López-Iñesta, E., Mehmani, B. and Squazzoni, F. (2019), “The effect of publishing peer review reports on referee behavior in five scholarly journals”, Nature Communications, Vol. 10, p. 322, doi: 10.1038/s41467-018-08250-2.

Bruce, R., Chauvin, A., Trinquart, L., Ravaud, P. and Boutron, I. (2016), “Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis”, BMC Medicine, Vol. 14, p. 85, doi: 10.1186/s12916-016-0631-5.

Chambers, C.D. and Tzavella, L. (2021), “The past, present and future of registered reports”, Nature Human Behaviour, Vol. 6, doi: 10.1038/s41562-021-01193-7.

Chiarelli, A., Johnson, R., Pinfield, S. and Richens, E. (2019), “Preprints and scholarly communication: an exploratory qualitative study of adoption, practices, drivers and barriers”, F1000Research, Vol. 8, p. 971, doi: 10.12688/f1000research.19619.2.

Crane, D. (1967), “The gatekeepers of science: some factors affecting the selection of articles for scientific journals”, The American Sociologist, Vol. 2 No. 4, pp. 195-201.

Csiszar, A. (2018), The Scientific Journal: Authorship and the Politics of Knowledge in the Nineteenth Century, Chicago University Press, Chicago.

Dahler-Larsen, P. (2019), Quality. From Plato to Performance, Palgrave, London.

Delfanti, A. (2016), “Beams of particles and papers: how digital preprint archives shape authorship and credit”, Social Studies of Science, Vol. 46 No. 4, pp. 629-645, doi: 10.1177/0306312716659373.

EMBO Communications (2019), “A decade of transparent peer review”, available at: https://www.embo.org/features/a-decade-of-transparent-peer-review/.

Emerald Publishing (2021), “Author and reviewer access”, available at: https://emeraldpublishinggroup.freshdesk.com/support/solutions/articles/36000210806-author-and-reviewer-access.

Eve, M., Neylon, C., O’Donnell, D., Moore, S., Gadie, R., Odeniyi, V. and Shahina, P. (2021), Reading Peer Review. PLOS ONE and Institutional Change in Academia, Cambridge University Press, Cambridge.

Fitzpatrick, K. (2009), Planned Obsolescence, Publishing, Technology, and the Future of the Academy, NYU Press, New York.

Guetzkow, J., Lamont, M. and Mallard, G. (2004), “What is originality in the humanities and the social sciences?”, American Sociological Review, Vol. 69 No. 2, pp. 190-212, doi: 10.1177/000312240406900203.

Hackett, E.J. and Chubin, D.E. (1990), Peerless Science. Peer Review and US Science Policy, SUNY Press, Albany.

Harper, M. (2020), “IOP Publishing commits to adopting double-anonymous peer review for all journals”, available at: https://ioppublishing.org/news/iop-publishing-commits-to-adopting-double-blind-peer-review-for-all-journals/.

Horbach, S.P.J.M. and Halffman, W. (2020), “Journal peer review and editorial evaluation: cautious innovator or sleepy giant?”, Minerva, Vol. 58, pp. 139-161, doi: 10.1007/s11024-019-09388-z.

Horbach, S.P.J.M. (2020), “To spill, filter and clean: on problematic research articles, the peer review system, and organisational integrity procedures”, Doctoral Dissertation, Nijmegen.

Kaltenbrunner, W. and de Rijcke, S. (2019), “Filling in the gaps: the interpretation of curricula vitae in peer review”, Social Studies of Science, Vol. 49 No. 6, pp. 863-883, doi: 10.1177/0306312719864164.

Kaltenbrunner, W., Birch, K. and Amuchastegui, M. (2021), “Editorial work and the peer review economy of STS journals”, Science, Technology, and Human Values, Vol. 47 No. 4, pp. 670-697, doi: 10.1177/01622439211068798.

Kaltenbrunner, W., Pinfield, S., Waltman, L. and Woods, H. (2022), “PeerReviewInventory_Dataset.xlsx”, figshare, doi: 10.6084/m9.figshare.17161835.v1.

Kiermer, V. and Mudditt, A. (2021), “Open reviewer identities: full steam ahead or proceed with caution?”, available at: https://scholarlykitchen.sspnet.org/2021/09/21/open-reviewer-identities-full-steam-ahead-or-proceed-with-caution/.

Nature portfolio (n.d.), “In review at nature journals”, available at: https://www.nature.com/nature-portfolio/for-authors/in-review#q4.

Nicholas, D., Watkinson, A., Jamali, H.R., Herman, E., Tenopir, C., Volentine, R., Allard, S. and Levine, K. (2015), "Peer review: still king in the digital age", Learned Publishing, Vol. 28 No. 1, pp. 15-21, doi: 10.1087/20150104.

Nowotny, H., Scott, P.B. and Gibbons, M.T. (2001), Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty, Wiley, Hoboken.

Pontille, D. and Torny, D. (2014), “The blind shall see! The question of anonymity in journal peer review”, Ada: A Journal of Gender, New Media, and Technology, Vol. 4, doi: 10.7264/N3542KVW.

Rodríguez-Bravo, B., Nicholas, D., Herman, E., Boukacem-Zeghmouri, C., Watkinson, A., Xu, J., Abrizah, A. and Świgon, M. (2017), “Peer review: the experience and views of early career researchers”, Learned Publishing, Vol. 30 No. 4, pp. 269-277, doi: 10.1002/leap.1111.

Ross-Hellauer, T. (2017), “What is open peer review? A systematic review”, F1000Research, Vol. 6, p. 588, doi: 10.12688/f1000research.11369.2.

Royal Society of Chemistry (n.d.), “Joint commitment for action on inclusion and diversity in publishing”, available at: https://www.rsc.org/new-perspectives/talent/joint-commitment-for-action-inclusion-and-diversity-in-publishing/.

Russell, B., Sack, J., McGonagle-O’Connell, A. and Alves, T. (2021), “Publishers integrate preprints into their workflows”, available at: https://scholarlykitchen.sspnet.org/2021/09/13/guest-post-publishers-integrate-preprints-into-their-workflows/.

Severin, A. and Chataway, J. (2021), “Overburdening of peer reviewers: a multi-stakeholder perspective on causes and effects”, Learned Publishing, Vol. 34 No. 4, pp. 537-546, doi: 10.1002/leap.1392.

Spezi, V., Wakeling, S., Pinfield, S., Creaser, C., Fry, J. and Willett, P. (2017), “Open-access mega-journals: the future of scholarly communication or academic dumping ground? A review”, Journal of Documentation, Vol. 73 No. 2, pp. 263-283, doi: 10.1108/JD-06-2016-0082.

Squazzoni, F., Bravo, G., Grimaldo, F., García-Costa, D., Farjam, M. and Mehmani, B. (2021), “Gender gap in journal submissions and peer review during the first wave of the COVID-19 pandemic. A study on 2329 Elsevier journals”, PLoS One, Vol. 16 No. 10, e0257919, doi: 10.1371/journal.pone.0257919.

Star, S.L. and Ruhleder, K. (1996), “Steps toward an ecology of infrastructure: design and access for large information spaces”, Information Systems Research, Vol. 7 No. 1, pp. 111-134, doi: 10.1287/isre.7.1.111.

STM (2020), “A standard taxonomy for peer review, version 2.0”, available at: https://osf.io/68rnz/.

Taylor & Francis (2022), “Understanding journal metrics”, available at: https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-metrics.

Tennant, J.P., Dugan, J.M., Graziotin, D., Jacques, D., Waldner, F., Mietchen, D., Elkhatib, Y., Collister, L., Pikas, C., Crick, T., Masuzzo, P., Caravaggi, A., Berg, D., Niemeyer, K., Ross-Hellauer, T., Mannheimer, S., Rigling, L., Katz, D., Tzovaras, B., Pacheco-Mendoza, J., Fatima, N., Poblet, M., Isaakidis, M., Irawan, D., Renaut, S., Madan, C., Matthias, L., Kjær, J., O’Donnell, D., Neylon, C., Kearns, S., Selvaraju, M. and Colomb, J. (2017), “A multi-disciplinary perspective on emergent and future innovations in peer review”, F1000Research, Vol. 6, p. 1151, doi: 10.12688/f1000research.12037.3.

Van Rooyen, S., Godlee, F., Evans, S., Black, N. and Smith, R. (1999), “Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial”, BMJ, Vol. 318 No. 7175, pp. 23-27, doi: 10.1136/bmj.318.7175.23.

Vermeir, K. (2020), “What about editors?”, Centaurus, Vol. 62 No. 1, pp. 1-4, doi: 10.1111/1600-0498.12313.

Waltman, L., Pinfield, S., Kaltenbrunner, W. and Woods, H.B. (2021a), “Guest post: peer review in transition?”, available at: https://oaspa.org/guest-post-peer-review-in-transition/.

Waltman, L., Pinfield, S., Rzayeva, N., Oliveira Henriques, S., Fang, Z., Brumberg, J., Greaves, S., Hurst, P., Collings, A., Heinrichs, A., Lindsay, N., MacCallum, C.J., Morgan, D., Sansone, S.-A. and Swaminathan, S. (2021b), "Scholarly communication in times of crisis: the response of the scholarly communication system to the COVID-19 pandemic [Report]", Research on Research Institute, doi: 10.6084/m9.figshare.17125394.v1.

Willis, M. (2020), "‘Do to others as you would have them do to you’: how can editors foster academic kindness in peer review?", available at: https://www.wiley.com/network/archive/do-to-others-as-you-would-have-them-do-to-you-how-can-editors-foster-academic-kindness-in-peer-review.

Wolfram, D., Wang, P., Hembree, A. and Park, H. (2020), “Open peer review: promoting transparency in open science”, Scientometrics, Vol. 125, pp. 1033-1051, doi: 10.1007/s11192-020-03488-4.

Yan, V. (2021), “Developing a taxonomy to describe preprint review processes”, available at: https://asapbio.org/developing-a-taxonomy-to-describe-preprint-review-processes.

Ziman, J. (2001), Real Science. What it Is and what it Means, Cambridge University Press, Cambridge.

Corresponding author

Wolfgang Kaltenbrunner can be contacted at: w.kaltenbrunner@cwts.leidenuniv.nl
