The many ways that academic journals become slow (and the one to fix them)

Peer reviewing and publishing are lengthy processes, but frequently they take longer than they should; here, we capture a few common causes of journal slowdowns and the best way to fix them or avoid them altogether


Every research stakeholder benefits from fast journals

Journal speed matters to everyone involved in research. Researchers want to publish quickly in high-impact journals in order to claim discovery early and gain as much visibility as possible. Research administrators care about the return on investment and want to accelerate the pace of research via faster publication and discovery. Publishers want happy authors (and staff) in order to attract more submissions. In addition, when journals are fast, publishers realise revenue sooner, especially in the Open Access world, where APC revenue is realised at the time of acceptance (or publication).


Journal speed is anything but guaranteed

Peer reviewing and publishing are lengthy processes that frequently last longer than four months (see Chart 1 below and here for a deep dive). Submitted manuscripts typically go through a light form of editorial checks, then enter review and revision, followed by acceptance, typesetting, proofing, and publication, unless they are rejected or withdrawn somewhere along the way.

The process is inevitably long, but it frequently lasts longer than it should. For example, as shown in Chart 1 below, two leading Open Access megajournals that together account for ~40k articles per annum (equivalent to 1.5% of all journal articles) take longer than half a year from submission to publication. That is the average, and it is slow (the median is 20-30 days faster, which is still slow).

[Chart 1: time from submission to publication for two leading Open Access megajournals]


When do things go wrong?

So why do journals break down and become infuriatingly slow? There are several causes that can work independently of each other or, more commonly, in combination. Here are some of them as seen from the publisher’s point of view.



Submissions increase overnight (just don’t celebrate too early)

A sudden increase in submissions is both a boon and a bane: while it leads to more subscription revenue in the long run or Open Access APC revenue in the medium run, it puts pressure on operations in the short run. Submissions can more than double overnight, but the editorial and publishing teams take much longer to grow. Until the team catches up, the manuscript pipeline will slow down and may become unmanageable.

Most commonly, submissions grow rapidly overnight when a journal receives a JIF (better known as an Impact Factor). Other possible triggers include the signing of publisher deals with sizeable consortia or journals flipping to the Open Access model (given the scarcity of high-impact Open Access journals, such journals can be very attractive to researchers).


Manuscripts get worse… or better

The workload per manuscript increases when the ‘quality’ of submissions deteriorates. Poor-quality submissions require more editing and more iterations with the authors, inevitably delaying the peer review and publishing process. In theory, higher-quality submissions can also increase the workload, because more of them become publishable, and publishable manuscripts require more work than those rejected at an early stage.


Moving platforms (hint: it takes a while)

Staff and editors have complex jobs even when working with just one workflow on one MTS (manuscript tracking system). The complexity increases sharply when journals migrate from one MTS to another. Given that manuscripts spend several months on a platform, journals typically run on both platforms in parallel for a year or longer during the migration (unsurprisingly, publishers tend to stick with one system). The added complexity results in manuscript queues and slowdowns.

The effect of trials (e.g. testing a new service for authors) is similar but less taxing, given that trials introduce limited complexity for a limited time.


Competition for reviewers heating up

Manuscript handling becomes more challenging when competition for limited resources (such as reviewers) increases. Assume, for example, that the number of reviewable manuscripts grows faster than the number of researchers available to review them, i.e. the demand for reviewing outpaces the supply of reviewers. Inevitably, finding reviewers becomes slower. This is especially true for journals that do not use specialised reviewer-finding services, whether based on machine learning or on the manual work of specialised teams.


Hiring, training, and slowing down

A drop in productivity can be triggered by aggressive hiring and training, which in turn can be triggered by an increase in submissions or in staff turnover. For example, when a journal gets a new JIF, submissions increase and new staff are hired. The new staff are inevitably less productive than their seasoned colleagues, who, additionally, have to give up manuscript handling in order to train the new team. This is a recipe for queues and slowdowns.


Shifting teams around (ever tried to change a bike chain on the move?)

A common practice in publishing (typically Open Access) is to 'rush' acceptances and publications towards year-end in order to boost financial performance (nothing wrong here, as long as editorial standards are respected). This requires shifting staff along the pipeline temporarily, which, if done carelessly, may accelerate manuscripts that are about to be accepted at the expense of manuscripts that are in the early stages of editing and reviewing. Once again, the unintended consequence is slowdowns and queues.

The effect of permanent changes to tasks and teams is similarly disruptive, though in theory shorter-lived, given that such interventions are better planned and aim at speed improvements.


Rushing submission checks (and mismatching tasks)

Submitted manuscripts have to be checked for completeness and soundness before they are handed over to senior editorial staff and reviewers. Typically, authors and junior staff iterate until manuscripts are brought into shape. Reducing the checks at submission may accelerate that part of the process, but it may lead to slower assessment by senior editors and reviewers, and to more iterations or revisions with the authors.

More broadly, tasks should be properly matched to the seniority of each editorial and publishing role. Assigning junior tasks to senior teams is a recipe for demotivation and delays. Assigning senior tasks to junior teams might accelerate the process, but it will also increase the risk of editorial errors.


Want to publish faster? Then measure, measure, and measure

So how can publishers make sure that their workflow is as lean and fast as possible? The answer is quite simple: measure. Then measure again. And then measure some more. And while you are at it, make sure you are measuring the right things.

Keeping track of the overall time from submission to acceptance or publication is not good enough. If a slowdown develops at the early stages of the pipeline, it may never be captured, or it may be captured inaccurately and late (4-5 months too late). The later and the less accurately an issue is identified, the longer it takes to fix.

Instead, you need to break the workflow down into smaller segments, e.g. starting with quality assessment, then reviewer finding, then reviewing, and so on. Such segments can be tracked in near real-time, so any fault will be captured quickly and accurately, allowing for further troubleshooting and quick action.
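As an illustration, here is a minimal sketch of such segment-level tracking, written in Python with pandas. It assumes an MTS export as a CSV file with one timestamp column per workflow milestone; the file name and the column names are hypothetical and will differ from one MTS to another.

# Minimal sketch: per-stage turnaround times from a (hypothetical) MTS export.
# Assumes manuscripts.csv has one timestamp column per milestone below;
# manuscripts still mid-pipeline simply have empty later columns.
import pandas as pd

MILESTONES = [
    "submitted",           # manuscript received
    "checks_completed",    # initial quality/completeness checks done
    "reviewers_assigned",  # reviewer finding finished
    "reviews_returned",    # peer review finished
    "decision_made",       # accept/reject/revise decision issued
    "published",
]

df = pd.read_csv("manuscripts.csv", parse_dates=MILESTONES)

# Each stage's duration is the gap between consecutive milestones.
stage_cols = []
for start, end in zip(MILESTONES, MILESTONES[1:]):
    col = f"{start} -> {end}"
    df[col] = (df[end] - df[start]).dt.days
    stage_cols.append(col)

# Mean and median days per stage; a queue forming in one stage shows up
# here within days, long before it moves the end-to-end average.
print(df[stage_cols].agg(["mean", "median"]).round(1))

Tracked at this granularity, a slowdown in, say, reviewer finding surfaces almost immediately, rather than months later as a drift in the overall submission-to-publication figure.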

Of course, there is always the alternative of rejecting editorially sound manuscripts in order to dedicate resources more effectively to a smaller volume of manuscripts. But, for Open Access publishers, that amounts to giving up around $1,500 every time a manuscript is rejected; reject an extra thousand sound manuscripts in a year, and roughly $1.5m of APC revenue is forgone. Decisions, decisions…

Depending on the subject area of a journal, a processing time of 100-120 days is feasible. Getting there starts with understanding the current performance, identifying areas for improvement, acting on them, monitoring the results, and iterating.

To find out more, contact us at contact@scholarlyintelligence.com or visit www.scholarlyintelligence.com.
