Originally published in the Scholarly Kitchen on March 26, 2015

Image Credit: Nick at http://www.lab-initio.com/

When I searched Google Scholar for “improving peer review” I got 16,800 results (and I only included items published since 2014). Searching The Scholarly Kitchen for “peer review” I found that there were 40 results dated 2009 or later. Adding my own informal discussions with publishers as well as scientific and medical authors and researchers, it seems pretty clear that article review and submission processes are on the minds of all participants and that experiments are underway.

So this month we asked the Chefs: How can we improve the article review and submission process?

Joe Esposito: I hate to be a contrarian, but the right question is not how to improve the submission and review process but for whom is the process being improved? The answer here is obvious: the only player in this system that drives decisions is the one that invests capital, and that means the publisher. Improving the system has to have a benefit for the publisher or it won’t happen. In this formulation improving the process for authors and reviewers is best understood if it provides a return to the publisher. Will a more efficient system persuade more authors to submit papers to a particular publisher? That’s a reason to invest. Will it reduce costs? That is a reason to invest. But it should be clear that all such improvements are an arms race: when one publisher does this, all the others must follow.

So rather than thinking about the process or the system, let’s think about this question in the context of the marketplace reality we must work with.

David Smith: Sixteen years ago now, I ran a peer review process where our average time to first response was less than eight weeks. Total elapsed time, not working weeks. I once hassled an Australian reviewer in a pub in Melbourne for his opinion (virtually; via the phone – they wouldn’t fly me out there, alas). So if time is really a problem I venture to suggest that “it ain’t rocket science, get yer process sorted out!”

But the mechanics of the submission process for the author could be very much improved. Why do we insist on formatted citations? DOI links in as many places as possible would be better all around, would they not? Give them a Dropbox folder (per article) to enable easy sharing to us of the material for publication. Clear out the stylesheet anachronisms; make the submitted article format something that is easy for time-short reviewers to get to grips with (there’s that Dropbox folder again…); ORCIDs to save time on boring administrata; login using Facebook or Google (or, cough, ORCID, cough). Regard the author as a purchaser and our process(es) as a shopping cart funnel and apply the same type of focus on maximizing the throughput to the ‘checkout’.

Charlie Rapple: From public acknowledgement for reviewers (for example, Publons) to never-ending review (such as PubPeer), it’s clear that experiments around the review process are gaining traction – such start-ups are both growing (Publons is about to open a second office) and making a meaningful contribution (PubPeer has helped to expose flaws in several papers that have then been retracted).

Is there as much disruption happening around submission? One challenge I’m considering is how we can better capture and surface information that is currently lost in the submission process. For example, many journals ask for highlights, key findings, implications, publicity/outreach summaries, statements of novelty and so on as part of the submission process, to assist editorial triage and review. Often, this information is never published alongside the article. Why not? Outdated or inflexible publication formats, systems or workflows? The unpolished nature of the material? A lack of clarity about who it is aimed at? I’m curious as to what might prevent this information from surfacing – and also, curious to learn of examples where this kind of information has successfully been made public, and to what effect.

Alice Meadows: While peer review may have its detractors, survey after survey shows that most researchers continue to trust it and see it as central to the scientific process. That’s not to say the current system is perfect, of course – which system is? But hopefully, through experimentation with different forms of peer review – pre- and post-publication – it will continue to improve.

In the meantime, the one thing I believe would most improve the submission and review process is better education and training. At present, this is virtually non-existent, at least in any consistent or comprehensive way. Individual PIs and professors may teach their students, many publishers and societies offer in-person or online training, and organizations like Sense About Science also provide support – but there are still way too many reviewers and authors who have never received any formal training at all. And it’s starting to show – especially with the emergence of new players such as China, which is set to overtake the US shortly in terms of article authorship. As the authors of this Chronicle article on peer review point out, “The emergence of world-class universities creates the potential for China to become a vastly influential part of the higher-education landscape. We should all care whether the academic work being done there meets a standard that scholars in the United States—and around the world—can trust and build upon.” Surely this is something that the global scholarly communications community, collectively, can and should help with?

Michael Clarke: The main problem with the review process is that it often has to be redone. Papers not accepted at Journal A have to be re-reviewed by Journal B and sometimes Journals C and D. This is incredibly inefficient and results in higher than necessary system costs (if we think of the scientific publishing process as a system). It is true that authors often revise papers between submissions and that many times the authors don’t necessarily want Journal B to know that the paper was previously submitted to Journal A.

There are also many cases where the paper is very good but Journal A just couldn’t publish it due to its scope or limitations on how much they can accept. Publishers are increasingly building internal peer review cascades for papers in this category, but that only works if Journal A and Journal B are published by the same house. Might there be a better approach? A way to cascade between publishers? And a “common app” approach to paper formatting so that papers don’t have to be re-formatted between submissions? Publishers have worked hard over the last decade to streamline the submission process and reduce the time from submission to publication, but this does not address the issue that causes the largest delay, which is having to reformat and resubmit papers to multiple journals.

Phill Jones: Journal submission systems have a terrible reputation among researchers. As a former researcher, I could rant all day about them, but I’ll restrain myself and pick one aspect. People complain about slow upload speeds and poorly designed workflows that mean they have to babysit a submission for several hours.

For example, if a submission includes 10 files, including a cover letter and high-res figures, each file takes 5 minutes to upload, and you require input from the user afterwards to complete the submission, you’re going to end up with some pretty frustrated authors. We used to have a situation where authors just put up with that sort of thing, but in these days of author charges, researchers are beginning to expect service for their money. My advice would be for publishers to try out their submission systems themselves (under realistic conditions, with large files and multiple authors) and see how much of a pain they are to use. If you do this, you’ll probably see some easy wins.

There are a number of people working on Google Docs-like, cloud-based collaborative writing solutions. Systems like this could work with publisher templates, making life easy for authors and reducing submission check costs. The goal is to plug directly into review systems for seamless authoring, submission and review.

Peer review is the worst form of academic quality control, apart from all the others. I’ll leave that to one of my fellow Chefs to worry about.

Angela Cochran: Ah, manuscript submission…the necessary task of jumping through unnecessary hoops. Publishers can make the submission process easier by reviewing their laundry list of requirements at least once a year. I bet there is something you could live without. For example, most of our accepted papers go through at least two versions. This affords us the luxury of not needing final figure files and other pieces at the time of submission. On that note, check in with the production folks every now and then. The submission instructions may include formatting or file requirements that are no longer necessary in the production process.

Submission system vendors could be a little more proactive about making their systems user friendly. It feels like most improvements make the editorial office users happy (makes sense seeing as they pay for the service), but we also need the author users to be happy. Let’s not ignore the look and feel of the interface.

The single greatest problem for editors with the review process is finding good reviewers. More and more, I hear complaints about inadequate reviews or invited reviewers ignoring deadlines. We have not found a cost-effective and efficient way to solve this problem. The reviewers and editors also report that a good chunk of authors are not adequately responding to reviewer comments. We try to make it clear to authors that they don’t have to agree with all the requested changes but they should address in their response to reviewers why they aren’t going to make the changes requested.

I asked our editorial coordinators, those on the front lines of author and reviewer queries, what they thought. “To the researchers that work through us, our process is often a trudge across a bog of unwieldy software and cursory guidelines…we become a speed bump between research and dissemination. We must strive to be unobtrusive: progress updates need to be transparent, password retrieval a cinch, requirements visible from miles around,” said Nick Violette.

Jennifer Chapman offered the following: “These processes can be improved when we walk in the footsteps of either the reviewer or the author. By knowing how each step works without error or complication, we can then begin to make the process user friendly with a limited number of steps. Ease of process is key.”

Lastly, we must always remember that being an author is a very small part of what our authors do each day. We should try not to make this difficult for them.

Judy Luther: Peer review is often in the news, with either ideas about accelerating the existing workflow with an incentive for reviewers or an innovative approach to post-publication peer review in a more open environment. Since PLOS ONE began reviewing for quality but not for significance, different configurations have evolved, leading to a more open review process.

As a result, researchers have access to an increasing array of options for public comment, discourse or a review that enables communication online that may not be occurring offline. At least two initiatives are seeking to provide an incentive for researchers to participate by recognizing their peer review activities. ORCID is collaborating with CASRAI, the Consortia Advancing Standards in Research Administration Information, to acknowledge peer review activities. Publons is launching a new service to showcase peer review activity.

Researchers are clearly identified on ResearchGate, which introduced Open Review, a structured mini review with sections for methodology, analyses, references, findings and conclusions. One of the first posts described failed attempts to replicate a study, casting doubt on the validity of the original paper. Since it is difficult to publish negative results, an open review environment can serve to provide a more complete view.

Somewhat controversial, PubPeer is a site allowing anonymous posts that serve the whistleblower role. Its blog lists articles that were withdrawn or retracted as a result of issues raised on its site. It is currently facing a lawsuit that may affect its future. 

Core to journal publishing is the creation of the scholarly record to document results for future reference. However, peer review is not necessarily effective in catching author misconduct and cannot be expected to validate research results. Retractions due to author misconduct are on the rise, and the most notorious cases have rocked their disciplines because authors who fabricated data went undetected for years without their results being questioned. Although it comes with challenges, one of the advantages of the open peer review environment is that it serves as a vehicle that could improve the scholarly record.

Phil Davis: Submission fees. The submission process is a matching market in which the submitting author makes a calculation based on the likelihood that his/her manuscript will be accepted. Before online submission systems, authors had to pay several costs in the submission process: the cost of printing several manuscript copies; the cost of shipping a bulky package of these manuscripts to the editor; but mostly, authors had to pay with their time. Online submission systems severely reduced (or eliminated) many of these costs to the author. The result – a growing flood of poorly matched manuscript submissions – should not be a surprise. In addition, internal manuscript transfers (also known as peer review cascades) may further encourage authors to select inappropriate journals for their first choice, knowing that, if rejected, the manuscript may be transferred to the next journal in line. If journals are not willing to make authors wait (most journals wish to reduce their time-to-first-decision statistic), they must consider other ways to encourage good selection choices. A submission fee – even a small one – may incentivize better submission decisions.


Now it’s your turn.

How do you feel we can improve article submission and review processes? What has your organization done in this area?

We look forward to hearing from you!