Reimagining the future of peer review beyond operations
by Adya Misra
Beyond the doom and gloom of AI taking over peer review and the excitement about how this technology could transform scholarly publishing, this Peer Review Week I take stock of how peer review could work in future, focusing on the smaller ripples in the industry that could (and perhaps should) become more commonplace and, in time, the gold standard. I also consider the improvements we could make to how we do peer review and, indeed, why we do it.
Peer review as we know it has been in a state of flux and yet hasn’t changed much in the last few decades. Journal editors will invite one, two or three suitably qualified scholars to provide opinions on an unpublished manuscript to help them decide whether it offers a suitable contribution to knowledge and thus merits publication. We have various modes of peer review, including single anonymized, where the reviewer knows the identity of the author(s) but the authors may not know the identity of the reviewer, and double anonymized, where authors and reviewers are unaware of each other’s identities. We’ve also started adopting transparent and open peer review models, whereby review reports are made available alongside the published article, often with reviewer identities attached, to help readers understand what was discussed or changed during peer review. The focus of this article is not to discuss the merits and demerits of each mode of peer review but instead to look beyond the operational possibilities of peer review and consider how these changes can benefit research communities.
Participatory research and review
The Cape Town Statement from the World Conference on Research Integrity in 2022 was clear in its aim to include under-represented research communities in research from its conception to its implementation and dissemination. The authors of the statement wish to see a fairer, more equitable research landscape where researchers in LMICs and perhaps the Global South (I hate this phrasing, by the way) are equal participants in the research that involves their communities, receiving appropriate credit for their work in terms of authorship as well as opportunities to direct the research, not simply executing the vision of others. While this may require funding bodies to recalibrate the way they provide and evaluate research grants (the Wellcome Trust is leading in this space), publishers have been quick to offer guidance to editors to encourage this shift in thinking. Sage, among many others, has policies that outline the expectations of the publisher when it comes to parachute or helicopter science; we don’t want to publish any of it.
Participatory research should naturally lead to participatory review, too. The keen medical editors reading this will recognize that many medical titles now invite patient/participant communities to act as authors and, indeed, reviewers for relevant journals. While the participation of patients in medical research has long been debated, we may expect their participation in peer review to be hotly debated too. Are they considered peers if they are not technical subject experts but experts in their own right? How would these patients/participants be selected, and how would their suitability to review be determined? Would we need to check for competing interests? I don’t have the answers to these complex questions, but I do know that the inclusion of patients/participants in peer review brings a new element to academic research that could make it more holistic and perhaps more relatable to the wider public reading the research. While I focus on medical research here, there is much to say about including participants in the review of other subjects such as ecology, conservation, anthropology, or studies of indigenous or invasive plants. There may be much to sort out in terms of logistics, but the ideal future of ethical peer review should include review from the participant or patient population, so that we can ensure the research we publish continues to meet and respect the needs of the community.
Preprints and double anonymized peer review
The number of preprints continues to grow year on year, challenging our perceptions of peer review as well as publishing. At Sage we encourage researchers to deposit their unpublished manuscripts in a preprint repository to increase their visibility, but unlike some other publishers we do not have any mandatory requirements for preprints, allowing authors to make the choice appropriate for their situation and research community. We have been grappling with issues around confidentiality and anonymity within peer review when a preprint is out there for anyone to read prior to publication. The advice to reviewers so far has been not to look for a preprint if the journal operates a double anonymized peer review model. This is a tricky area for journal editors and publishers, who take responsibility for the confidentiality of the peer review process. Should we, then, abandon the double anonymized peer review model? It has been a great tool in reducing the usual forms of racial, gender or geographical bias we see in peer review, but will it be as relevant if every author starts depositing preprints? Critics of the double anonymized model have often claimed that authors and reviewers may be able to guess each other’s identities when reading each other’s contributions. I believe that is the key here: guessing is very different from knowing, but this may be the reason preprint adoption is not as widespread in certain research areas.
Decoupling peer review from publication
In early 2023, eLife announced its new peer review model, which removed the accept/reject decision and retained only peer review. There was, and continues to be, extensive discussion on whether this model achieves what it set out to do: remove the aspect of journal prestige from publishing and focus simply on the research. When authors submit to eLife, they receive reviewer reports, which are published alongside a general editorial evaluation outlining the importance or relevance of the research. The reviewed preprint is published in eLife if the authors take on board the feedback provided, and it appears no differently from any other online-only research publication. There is no editorial accept or reject.
The criticism around this model has been fascinating to observe, but the transparency of process, along with the publishing of review reports and author responses, appears to be exemplary of where we may like to take scholarly publishing. Replacing the arbitrary accept-versus-reject decision with a more general editorial evaluation puts more emphasis on the process than on its outcome, something we have been trying to achieve via various initiatives around replications, publishing negative findings, and registered reports. If an article is accepted in a journal, readers and researchers make assumptions about its quality and impact based on the journal name. This model would force us to look deeper, read the review reports, and interpret the article’s contributions for ourselves. Whether the research community is ready for this, and what impact this decoupling will have on science communication in general, remain open for discussion.
Large language models and AI in peer review
The elephant in the room: large language models and artificial intelligence remain the most discussed topics in peer review and academic publishing. New technologies can be harnessed to improve operational efficiencies, but we haven’t yet seen any evidence to suggest that large language models can or should be used in peer review. Large language models cannot think like a reviewer and would struggle to provide the depth of constructive criticism we expect from reviewers. While we are still grappling with this new technology, we now know that it struggles to compile complex information from various sources, is prone to hallucinations such that it may fabricate certain facts or statements, and might amplify biases in the data used to train these models. We and other publishers were quick to recognize that guidance would be helpful for editors, authors and peer reviewers. You may find this guidance in a related blog post: How do AI tools and Large Language models fit into the future of Peer Review?
Sage is a proud partner and contributor to Peer Review Week. Browse more content here.
About the Author