Editor’s Note: Today’s post is by Ashutosh Ghildiyal. Ashutosh is a strategic leader in scholarly publishing with two decades of experience driving sustainable growth and global market expansion. He currently serves as Vice President of Growth and Strategy at Integra.

Let me begin with a rant. Why is so much of the marketing around AI tools focused on making human beings obsolete? Why are new advances so often compared to what a human expert can do — as if the sole purpose of AI is to replace us? Is AI for us, or are we for AI?

When I see endless comparisons and performance tests pitting AI against humans, I can’t help but ask: what’s the point? What exactly are we trying to prove? We have a powerful new technology, and instead of asking how it can complement or elevate human effort, we rush to see where it can outperform people.

Yes, there’s no doubt that AI can perform memory-based or pattern-based thinking more efficiently, and its speed, accuracy, and ability to handle complex data are increasing at a rapid pace. But that’s only one aspect of intelligence — verbal and analytical intelligence — one that, unfortunately, has been given far more value than it deserves in our education systems and professional environments. There are other forms of intelligence, such as emotional intelligence, creativity, insight, and the ability to perceive beauty, which are non-measurable in nature, and therefore likely out of bounds for AI. Nevertheless, they form an indispensable part of the human experience and our collective consciousness.

Instead of obsessing over replacement, we must explore how AI can amplify human thought — how it can free up our mental bandwidth so we can focus more on the meaningful, creative, and qualitatively rich aspects of our work.

One important debate that’s missing from the current discourse is this: where ought AI to be used, and where ought it not — even if it can be?

[Image: a robot painting at an easel]

Rediscovering the Purpose of Scholarly Publishing: Gatekeeping as a Public Trust

Human beings have an innate need to explore and discover. We’ve done so for thousands of years. It’s part of our nature, part of living, and society needs it for progress and development.

Science, one might say, is the ongoing pursuit of understanding. In this quest, scientists and researchers often seek ways to share their discoveries — not just with peers, but with the broader world. Perhaps that is why scholarly journals came into being in the first place: as a means to record, reflect upon, and communicate emerging knowledge.

Now, with the tremendous scale of global research activity, the function of scholarly publishers has expanded. Society in general and readers of research specifically are now essential stakeholders, as are the researchers. They need to be able to find and trust this knowledge. Journals can — and must — play the crucial role of helping society find knowledge and trust it. This is what I believe is the true value of scholarly publishing today.

In my recent article in The Scholarly Kitchen, I elaborated on this, arguing that gatekeeping is the most important value and core function of publishing. At its core, gatekeeping in scholarly publishing is about ensuring the credibility and relevance of research, a task undertaken through the collaborative efforts of editors and peer reviewers. It serves the dual purpose of curation and validation — selecting meaningful, relevant work within a field and validating it through rigorous review by subject-matter experts.

We must consider peer review in this light, as a critical part of the trusted content curation process.

Peer Review Is in Crisis — On Multiple Fronts

The peer review crisis refers to the growing set of challenges confronting the traditional academic publishing system, which has long served as the primary mechanism for quality control in scholarly research. At its core, the crisis stems from an unsustainable imbalance between the rapidly increasing volume of research submissions and the limited pool of qualified reviewers willing to assess them — a phenomenon often described as “reviewer fatigue.” This crisis is unfolding across multiple dimensions.

It is an economic crisis, driven by the lack of adequate compensation, limited support for career advancement, and the absence of meaningful recognition for peer reviewers. While existing initiatives to recognize and reward reviewers are valuable and represent important steps forward, the persistent challenge of reviewer fatigue underscores a deeper, systemic issue.

It is a crisis of capacity and diversity — with a vast pool of untapped and untrained reviewers who remain underutilized.

And it is increasingly a crisis of attention.

Human attention spans are shrinking. Digital media, instant consumption, an endless stream of choices, and the rise of short-form content — reels, stories, snippets — are all shaping how we engage with information. Our attention is no longer freely directed; it is captured and manipulated by algorithms and interfaces.

Meanwhile, the ease offered by tools like ChatGPT is creating a new kind of cognitive complacency. These tools are powerful, but when we rely on them entirely, without any personal cognitive engagement, we begin to lose the ability and willingness to think deeply for ourselves.

Attention requires space and time — it cannot be rushed or optimized. And peer review, at its core, demands attention. So, the question becomes: how can we help peer reviewers manage this challenge of attention in the age of distraction and AI?

I believe AI can play a critical supporting role by reducing the volume of routine or peripheral tasks that editors and reviewers are burdened with. This frees them to focus on the one thing that truly matters: the manuscript — its quality, relevance, and contribution to human understanding. This work demands sustained human attention — and with it, a certain artistic freedom, space for intuition, and the opportunity to explore and interpret meaning. Because really, what is a human being meant for, if not to pay attention and find meaning?

This core function cannot — and should not — be handed over to AI. Not because AI lacks capability — it can already generate reviews, and its reasoning abilities are advancing rapidly. But when we delegate the act of meaning-making to machines, we risk reducing scholarly publishing to a mechanized, utilitarian process devoid of depth and discernment.

Outsourcing the work of human editors and reviewers to AI may appear attractive in the short term, promising faster publication cycles or lower costs. However, such choices can erode the long-term value, integrity, and credibility of scholarly publications. Over time, this undermines not only individual journals but also the entire scholarly publishing ecosystem, making it vulnerable to disruption by non-traditional and potentially lower-quality platforms.

The heart of scholarly publishing — editorial oversight and peer review — must remain human, thoughtful, and intellectually engaged. At the same time, AI is here to stay. It is already woven into publishing workflows, and its role will only continue to expand.

But if we strip the human element from this gatekeeping function, we jeopardize the very foundation of trust and quality on which scholarly communication is built. Preserving and advancing the industry demands that we strengthen — not weaken — editorial judgment. This means investing in human expertise and using AI purposefully — to augment, not replace, the editorial process.

The challenge before us is clear: to integrate AI consciously and ethically, in a way that supports our core mission. The future of scholarly publishing depends on a renewed human commitment to quality, integrity, and meaning — enhanced, but never eclipsed, by technology.

Current and Potential Use of AI in the Editorial and Peer Review Process

Currently, AI tools are being used in various stages of the publishing workflow, including in peer review, where several tools are emerging. These tools help save both manual and cognitive effort through intelligent automation in areas such as:

  1. Manuscript Screening & Assessment
  2. Language Editing & Formatting
  3. Reviewer Selection
  4. Reviewer Training
  5. Reviewer Assistance

Do human editors and reviewers need to assess every submission? Not necessarily — not the ones that clearly fail to meet basic criteria. The goal is to filter out unqualified manuscripts early, allowing for timely rejections, redirections to more appropriate journals, or author revisions.

What remains must still be thoughtfully read and evaluated by editors and peer reviewers. This step is — and must remain — non-negotiable. AI tools can assist by flagging potential issues, surfacing insights, or providing cues that streamline the process and reduce review time. But this support must never be mistaken for substitution.

Editorial and reviewer judgment must always take precedence over AI-generated suggestions. These tools are meant to reduce mechanical and repetitive effort, not to make decisions.

It’s akin to a doctor using lab reports to aid diagnosis. The doctor doesn’t conduct every test themselves, nor do they follow the report blindly. They interpret the data within the broader context of the patient’s condition. Similarly, human editors and reviewers remain central to the process, with AI serving to enhance their efficiency and accuracy, not replace their expertise.

Screening tools are available to help qualify manuscripts before they are passed on to peer review, and in some cases, for post-acceptance checks as well. These tools perform various language, technical, and research integrity checks. Some rely on built-in technologies to assess elements within the manuscript and categorize manuscripts based on publishers’ needs for additional checks, while others compare the content against external databases to generate specific signals. The scope and types of these checks are continually expanding to serve the needs of editorial teams and peer reviewers, so they no longer have to perform such tasks manually for each manuscript.
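To make this concrete, here is a minimal, purely illustrative sketch of what such triage logic can look like. Every name, check, and threshold below is a hypothetical assumption made for illustration, not a description of any particular vendor's tool; production systems would also call external services for similarity, image, and integrity signals, and every outcome would go to a human editor for confirmation.

```python
# A hypothetical pre-review triage sketch (illustrative only; not any real tool).
from dataclasses import dataclass


@dataclass
class Manuscript:
    title: str
    word_count: int
    reference_count: int
    declared_scope: str   # subject area claimed by the authors
    journal_scope: str    # subject area covered by the journal


def screening_signals(ms: Manuscript) -> dict:
    """Collect simple, explainable signals from manuscript metadata.
    Real tools would add language, statistical, and research-integrity checks,
    some of which compare the content against external databases."""
    return {
        "too_short": ms.word_count < 2000,
        "few_references": ms.reference_count < 10,
        "out_of_scope": ms.declared_scope.lower() != ms.journal_scope.lower(),
    }


def triage(ms: Manuscript) -> str:
    """Turn signals into a routing recommendation for a human editor.
    Nothing here is an automatic decision; it only orders the queue."""
    flags = [name for name, raised in screening_signals(ms).items() if raised]
    if "out_of_scope" in flags:
        return "recommend transfer to another journal or desk rejection"
    if flags:
        return "return to authors for revision (flags: " + ", ".join(flags) + ")"
    return "forward to the editor for peer review assignment"


if __name__ == "__main__":
    ms = Manuscript("A study of reviewer fatigue", 5400, 42,
                    "scholarly publishing", "scholarly publishing")
    print(triage(ms))  # -> forward to the editor for peer review assignment
```

The point of the sketch is not the specific thresholds but the division of labor: the software surfaces signals, and the editor makes the call.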

Advancements in research integrity checks are accelerating, with continuous progress being made in AI-powered tools that support reproducibility and open science.

Other uses of AI in peer review include identifying and matching reviewers with manuscripts based on their availability, workloads, and areas of expertise. This also saves time and reduces manual effort.
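As a rough illustration of the kind of logic involved, the sketch below scores hypothetical reviewer candidates by keyword overlap with a manuscript, discounted by current workload and availability. The data model and scoring weights are assumptions invented for this example; real matching systems typically use semantic similarity over publication records rather than simple keyword overlap.

```python
# A hypothetical reviewer-matching sketch (illustrative only; not any real system).
from dataclasses import dataclass


@dataclass
class Reviewer:
    name: str
    keywords: set[str]      # declared areas of expertise
    open_assignments: int   # reviews currently in progress
    available: bool


def match_score(manuscript_keywords: set[str], r: Reviewer,
                max_workload: int = 3) -> float:
    """Higher is better: expertise overlap (Jaccard), discounted by workload."""
    if not r.available or r.open_assignments >= max_workload:
        return 0.0
    union = manuscript_keywords | r.keywords
    expertise = len(manuscript_keywords & r.keywords) / len(union) if union else 0.0
    workload_penalty = r.open_assignments / max_workload
    return expertise * (1.0 - 0.5 * workload_penalty)


def rank_reviewers(manuscript_keywords: set[str], pool: list[Reviewer],
                   top_n: int = 3) -> list[tuple[str, float]]:
    """Return a shortlist for the handling editor to invite from."""
    scored = [(r.name, match_score(manuscript_keywords, r)) for r in pool]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_n]


if __name__ == "__main__":
    pool = [
        Reviewer("Reviewer A", {"peer review", "research integrity"}, 1, True),
        Reviewer("Reviewer B", {"bibliometrics", "peer review"}, 3, True),
        Reviewer("Reviewer C", {"research integrity", "open science"}, 0, True),
    ]
    print(rank_reviewers({"peer review", "research integrity"}, pool))
```

Again, the output is a shortlist, not an invitation: the editor still decides whom to ask, which keeps the time savings without surrendering judgment.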

Another promising area is reviewer training. AI tools can help gamify or simulate training exercises for reviewers. It’s something I’ve touched upon in an article in Science Editor, though I’m not sure how widely this idea is being explored.

A range of new tools is emerging to support reviewers during the peer review process. These tools may help with statistical checks, reference and citation checks, translation or editing of review reports, and more. Some tools offer voice note features that can be transcribed into structured text. Others may summarize or extract key ideas from the manuscript, highlight certain aspects, or even generate an AI-based peer review report — or review the review written by the human reviewer.

Publishers can choose to offer such tools within a secure, controlled environment, without enabling copy-paste functionality, for example. It should be an immersive, distraction-free space for reviewers — like taking an online exam — where they receive support but also face built-in constraints. These constraints ensure that AI is not used for tasks where we still want reviewers to apply their own attention and insight. This approach helps protect originality.

Personally, I wouldn’t want peer reviewers to receive too much hand-holding or support. They should be enabled to independently explore the manuscript, in distraction-free environments, so their reviews emerge from independent thinking rather than guided attention.

Independent human judgment brings irreplaceable value — especially in an era marked by misinformation, synthetic content, and diminishing public trust in science. Human reviewers draw on experience, contextual understanding, and subject expertise in ways that AI cannot replicate. Their attention is not merely a filter for errors but a lens through which nuance, originality, and meaning are discerned.

After all, what is originality in the age of AI? It is something free from influence and comparison — an insight or observation that arises from an individual’s own perception. It may resemble something that already exists, but its source is independent insight, expressed in the individual’s own words.

Observation must remain independent; articulation, however, can be supported by AI. Even then, human review is essential to ensure that the articulation truthfully reflects and conveys the original insight.

Responsible Use of AI

What is responsible use of AI? At its core, it means using AI in ways that do not infringe upon the rights of others. Ironic, perhaps — given that much of AI’s knowledge base and training data has been built on copyrighted material, often without explicit consent or compensation.

It also means using AI with purpose — to address a necessity — and not using it where it isn’t needed or where we ought to engage our own minds. It is our cognitive effort that gives authenticity and meaning to any endeavor.

I have no issue with using AI for screening, analysis, editing, or preparing a manuscript, but the output must then be carefully reviewed by a human. Reviewing is hard work; it is cognitively demanding, but it is also non-negotiable. As long as these conditions are met, I believe that qualifies as responsible use of AI. Furthermore, any decision to use AI must also take into account the human and environmental costs.

There is cognitive effort, and there is manual effort. Manual effort is about the time spent on tasks. Cognitive effort involves attention, observation, knowledge, human empathy, and conscious engagement.

We should aim to reduce both kinds of effort, but only when it does not compromise authenticity and meaning. Preserving attention and directing it where it matters most — in pursuit of enhancing human fulfillment — is how we should be utilizing AI.

Delegating to AI: What Should — and Shouldn’t — Be Outsourced?

We should approach using AI the way a leader delegates to an intern — by providing clear context, guidance, and feedback to help it deliver the best possible results, while retaining responsibility for judgment and direction.

With AI, a single editor or reviewer — equipped with clarity of judgment and purpose — can achieve more than ever before. AI can assist in digesting large volumes of content, checking for technical compliance, and highlighting patterns that might otherwise go unnoticed. But it cannot decide what truly matters. That requires attention, insight, and often a kind of intellectual empathy — the ability to understand why something is important within a field and for its audience.

Whether AI can replace an editor or reviewer entirely is a tempting but ultimately misleading question. Editors and reviewers bring much more than domain knowledge. They bring a sense of what is relevant, what is timely, and what is worthy. They bring judgment, nuance, and a deep understanding of the human and social context surrounding the research they evaluate.

Yet, in a world where knowledge work is increasingly commoditized, a danger emerges: if editorial judgment becomes mechanized — driven only by metrics, red flags, or AI-generated scores — then perhaps it no longer matters whether intuition or context is involved. That would be a loss not just for science, but for society.

So, how should we approach delegation in this context?

Editors and reviewers must remain in the leadership role — setting the context, defining the standards, and using AI tools as assistants, not substitutes. They should guide these tools, review outputs critically, and ensure that every decision reflects thoughtful human oversight. Similarly, publishing teams and service providers should use AI consciously to enable better decisions, not to bypass them.

In this sense, delegation to AI must not be seen as outsourcing responsibility, but as orchestrating support, where the human retains purpose, integrity, and control.

The ones who will thrive in this new world are not those who offload their responsibilities to machines, but those who harness AI thoughtfully to elevate quality, integrity, and impact.

Centering the Human in a Tech-Augmented Future

The purpose of science is to expand human understanding, and the purpose of scholarly publishing is to curate, verify, and communicate that understanding responsibly. As we embrace AI in this process, we must not lose sight of that mission.

AI is a powerful technology, capable of reducing friction, freeing up human effort, and enhancing decision-making — but it is not the observer or the decision-maker. The observer — human consciousness — is what gives meaning to content. Our role is not to compete with AI, nor to hand over our thinking to it, but to use it wisely and intentionally.

In the editorial and peer review context, this means preserving the human essence — judgment, curiosity, empathy, and attention — while using AI to streamline and support. The future of scholarly communication will be shaped not by the tools we adopt, but by how consciously we choose to use them. Let us make those choices with clarity and purpose, rooted in a human-centered approach.

Ashutosh Ghildiyal

Ashutosh Ghildiyal is a strategic leader in scholarly publishing with two decades of experience across author services, business development, and strategic leadership. He has worked extensively with authors, institutions, and scholarly publishers worldwide, driving sustainable growth and international market expansion. He currently serves as Vice President of Growth & Strategy at Integra.

Discussion

1 Thought on "Guest Post: Gatekeepers of Meaning — Peer Review, AI, and the Fight for Human Attention"

“This core function cannot — and should not — be handed over to AI. Not because AI lacks capability — it can already generate reviews, and its reasoning abilities are advancing rapidly.”

Any evidence for these assertions? In reality, hallucinations are increasing, not declining.

“Reviewing is hard work; it is cognitively demanding, but it is also non-negotiable.”

I agree and that’s one reason for not using “AI” to begin with; it takes too much time to verify everything and fix the garbage “AIs” mostly output in the science context. In addition — whether you are an author, an editor, a reviewer, or a publisher — you have your reputation at stake when playing with the garbage generators.

Then:

“1. Manuscript Screening & Assessment
2. Language Editing & Formatting
3. Reviewer Selection
4. Reviewer Training
5. Reviewer Assistance”

Of these, (1) and (5) might work. For the latter, it might be useful to have screening of references, and I am not talking about “AI”-generated summaries. But paywalls probably hinder progress on this front.

Regarding (2), already before “AI”, typesetting too has been in a crisis [1]. Today, some big commercial publishers do “typesetting” via scripts that alter people’s names, change capital letters (e.g., Europe becomes europe), and so forth and so on. With respect to (3), all major publishers already have recommender systems for inviting reviewers (you can call them “AI” if you want to, sure). Regarding (4), are you really sure scientists (or anyone else for that matter) want to engage with chatbots for “training”? I doubt it.

Moreover, are you sure it is really “AI” that is needed? What about old-fashioned policies? If even the three largest publishers, say, agreed to enforce a policy that for every paper you publish, you must review three manuscripts, the crisis would be solved. (Though, I leave peer review quality out of this discussion.)

Finally:

“Human attention spans are shrinking.”

Coupled with the garbage generators and declining basic skills, including literacy, this scenario is indeed terrifying. I wonder who will even do science in about fifty years?

[1] https://doi.org/10.1163/18784712-20240032
