Editor’s note: Today’s guest post is from Steve Smith, founder of STEM Knowledge Partners and an independent consultant with over 25 years of experience in scholarly publishing, including leadership roles at Blackwell Publishers, John Wiley & Sons, Frontiers Media, and AIP Publishing.

At SSP’s New Directions Seminar, a panel examined how artificial intelligence is already active in the three core activities of scholarly communication: reading, writing, and reviewing. The question was not whether AI should enter these spaces, but which roles actually help, what guardrails are necessary, and where human responsibility must remain visible.

Jessica Miles of The Informed Frontier discussed the reader and discovery context. Josh Dahl from ScholarOne considered authorship and integrity. Chirag “Jay” Patel of Cactus Communications addressed peer review and workflow burden. A shared thread ran throughout: AI is simultaneously the largest challenge and the largest opportunity. The practical task is to reduce that paradox to governance and workflow choices that preserve trust.

When AI Reads First

AI assistants are increasingly the first interface that processes scholarly content, summarizing and extracting answers before a person reaches a publisher’s page. Several publishers now report traffic declines from search and referrals.

The panel framed this as a measurement and product problem. Miles called for new engagement metrics beyond pageviews: “We’re deeply in need of new ways of measuring engagement because at the moment we’re not able to consistently distinguish between human readers and bots.” She advocated for personalization and machine-readable formats alongside traditional text. “It’s not an either/or, not humans or AIs, it’s a both/and.”
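
To make the measurement gap concrete, here is a minimal sketch, assuming a made-up log format, bot-token list, and rate threshold rather than any publisher’s actual analytics. It splits raw pageviews into declared bots, likely humans, and an unclassified remainder, which is precisely the bucket current metrics cannot resolve.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical access-log record; real analytics pipelines have far richer signals.
@dataclass
class Hit:
    user_agent: str
    requests_per_minute: float

DECLARED_BOT_TOKENS = ("bot", "crawler", "spider", "gptbot")  # illustrative list only


def classify(hit: Hit) -> str:
    """Crude three-way split: declared bot, likely human, or unknown."""
    ua = hit.user_agent.lower()
    if any(token in ua for token in DECLARED_BOT_TOKENS):
        return "declared_bot"
    if hit.requests_per_minute > 30:  # arbitrary threshold, for illustration only
        return "unknown"              # too fast to trust, but not self-declared
    return "likely_human"


def engagement_report(hits: list[Hit]) -> Counter:
    """Break raw pageviews into buckets instead of reporting a single total."""
    return Counter(classify(h) for h in hits)


if __name__ == "__main__":
    sample = [
        Hit("Mozilla/5.0 (Windows NT 10.0)", 0.5),
        Hit("GPTBot/1.1", 120.0),
        Hit("Mozilla/5.0 (headless)", 45.0),
    ]
    print(engagement_report(sample))
    # Counter({'likely_human': 1, 'declared_bot': 1, 'unknown': 1})
```

The “unknown” bucket is where the both/and argument lives: some of that traffic is human, some is machine, and pageview counts alone cannot say which.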

Patel argued that publishers have “fallen behind the times when it comes to web technology, putting the user experience and interface on the back burner.” The traffic crisis reflects a deeper loss: “Publishers severed their relationship with readers by ceding discovery to Google, Twitter, and other platforms years ago.” His advice: focus on readers still arriving, understand why they come, and improve their experience. “The 40 or 50% still showing up are probably the people you want showing up anyway.”

Dahl noted that some publishers are licensing content to AI platforms under agreements that restore usage data and ensure provenance, revealing how discovery happens off-platform. The implications: discovery will increasingly begin inside assistants, requiring publishers to adapt through licensing and attribution strategies; and rather than recovering every lost visit, publishers should invest in a better user experience for those who still arrive.

The Author Integrity Problem Is Two Problems

Authorship combines two distinct issues. Tools that improve clarity, translation, and formatting can make scholarship more equitable. Automated text generation can produce weak or synthetic manuscripts at volume. The first is assistance; the second threatens originality.

Dahl’s framing was pragmatic: publishers are investing in both author assistance and research-integrity screening because both needs are real and must be separated in policy. Detection alone is not a strategy. Studies suggest AI-generated text is already common, but disclosure remains rare because researchers fear stigma.

Recent STM guidelines categorizing nine types of AI use in manuscript preparation make responsible disclosure more likely and editorial decisions faster. They also address a double standard Miles highlighted: “If I give my paper to a copyeditor to review prior to submission, a journal may not require that I disclose that. It’s hard to see why AI should be treated differently.” The working principle is symmetry: if a human contribution would require disclosure or attribution, treat the tool equivalently.

Human accountability was a consistent theme. “The writing needs to be yours because that’s how you actually learn about the research itself,” Dahl stressed. AI can help with language refinement and literature synthesis, but the original thinking must come from the human. This addresses what some call cognitive debt: productivity gains that disguise skill erosion when authors outsource too much of their thinking. The panel favored design over bans: use assistants where they remove friction without removing scholarly value; avoid them where they replace inquiry or voice.

The Reviewer as Learner

The reviewer burden has grown faster than the pool of willing reviewers, a problem predating large language models. Tools that help produce higher-quality reports in less time may be required to keep the system functioning.

Patel’s position: “AI should serve as an assistant helping reviewers and editors make better decisions quicker and instruct reviewers on how to conduct better reviews.” Tools proposing complete reviews invite sameness. Tools that surface gaps and suggest checks help reviewers focus judgment where needed. “These tools need to be instructional rather than just a one-click solution,” he said, emphasizing that reviewers must own the final report.

The data: up to 17% of review reports show signs of AI authorship, and many reviewers find them helpful. Patel attributed this to AI’s persuasive prose but warned: “If you’re not detecting and enforcing it, your policies are really worthless.” Policies aren’t standardized; enforcement requires resources. Meanwhile, busy reviewers see AI as useful assistance.

The panel recommended education combined with working guardrails: prohibit public uploads of confidential manuscripts, encourage enterprise accounts, require audit logs, and ask reviewers to share prompts when requested. “Reviewers need to provide their own expertise and insights, not just a report from ChatGPT,” Patel said. “The reviewer needs to own the report.”
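
What “audit logs” and “share prompts when requested” could look like in practice is sketched below. The record fields, identifiers, and JSON export are assumptions for illustration, not features of any existing submission system.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIUsageRecord:
    """One logged AI-assisted step in a review, retained for disclosure on request."""
    manuscript_id: str
    reviewer_id: str
    tool: str       # e.g., an enterprise-hosted assistant, not a public chatbot
    purpose: str    # language polishing, gap-spotting, reference checks, etc.
    prompt: str     # kept so it can be shared with the editor if asked
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def export_log(records: list[AIUsageRecord]) -> str:
    """Serialize the audit trail for an editor or research-integrity team."""
    return json.dumps([asdict(r) for r in records], indent=2)


if __name__ == "__main__":
    log = [
        AIUsageRecord(
            manuscript_id="MS-2025-0417",   # hypothetical identifiers
            reviewer_id="rev-081",
            tool="enterprise-llm",
            purpose="surface unaddressed statistical checks",
            prompt="List analyses a reader might expect but not find in the methods.",
        )
    ]
    print(export_log(log))
```

The design choice worth noting is that the prompt itself is part of the record: ownership of the final report is easier to demonstrate when the reviewer can show exactly what was asked of the tool.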

Editorial screening tools reduce manual checks but can increase triage work through false flags. Still, Patel was cautiously optimistic: “If a publisher is not using AI, they are missing issues with manuscripts. Nothing is perfect, but AI can help get us closer.”

Cross-Cutting Tensions

Opacity and bias. Powerful models are often closed, making it difficult to assess whether AI amplifies existing inequities. Miles emphasized RAG (retrieval-augmented generation) systems that cite sources in real time: “If I’m going to license my data to develop AI systems, RAG or other inference systems are a must-have.” Dahl noted that as AI systems mature, transparency about scholarly trust markers could influence what gets surfaced, analogous to how PageRank once elevated authoritative sources.
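
For readers who have not met the acronym, here is a minimal sketch of the retrieval-augmented pattern Miles described: the answer is assembled only from retrieved passages and cites each one, rather than being generated from model weights alone. The toy corpus, keyword scoring, and templated response are assumptions for illustration; production systems use embedding search with a generative model on top of the retrieved context.

```python
# Minimal RAG sketch: naive keyword retrieval over a toy corpus, then an answer
# composed solely from the retrieved, cited passages. Corpus and DOIs are invented.

TOY_CORPUS = {
    "doi:10.0000/a1": "Retrieval-augmented systems ground answers in retrieved documents.",
    "doi:10.0000/b2": "Licensing agreements can require provenance and usage reporting.",
    "doi:10.0000/c3": "Peer review workloads have grown faster than the reviewer pool.",
}


def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by keyword overlap with the query and keep the top k."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), doi, text)
        for doi, text in TOY_CORPUS.items()
    ]
    scored.sort(reverse=True)
    return [(doi, text) for score, doi, text in scored[:k] if score > 0]


def answer_with_citations(query: str) -> str:
    """Compose a response that cites its sources inline, RAG-style."""
    hits = retrieve(query)
    if not hits:
        return "No supporting sources found."
    body = " ".join(f"{text} [{doi}]" for doi, text in hits)
    return f"Q: {query}\nA (grounded in retrieved sources): {body}"


if __name__ == "__main__":
    print(answer_with_citations("How do retrieval systems ground answers in documents?"))
```

The property that matters for licensing is visible even at this scale: if nothing relevant is retrieved, the system declines to answer, and every claim it does make points back to an identifiable source.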

Attribution and intellectual contribution. Miles argued that current AI systems lack agency; they’re tools directed by humans. “AIs only ‘read’ because a human directed this tool to do so.” The human using the tool skillfully makes the difference between slop and quality. Patel emphasized accountability: “The person submitting the paper or doing the review should take full responsibility and be able to answer questions about it.”

Business models and governance. If assistants become the primary discovery layer, analytics must evolve from raw visits to verified engagement, aligning incentives toward quality and attribution. A minimum 2025 policy: no uploads of confidential content to public tools; enterprise accounts with audit logs; clear appeal paths; human sign-off on decisions that affect careers or enter the scholarly record; and disclosure rules that mirror human equivalents. Patel added: “We need a federal policy because publisher policies are not working.”

Environmental cost. Compute has a real footprint: electricity, water, and minerals. Patel, who leads Cactus’s SDG initiative, acknowledged the tension but expects efficiency improvements: “We’re seeing the emergence of small models that perform just as well as large models, locally hosted LLMs, and more efficient training methods.”

What Remains Human?

Discussion returned to a recent open letter warning that universities are “sleepwalking into AI dependency.” Miles drew an important distinction between scholarship activities and scholarly communications work, but affirmed: “Having a spectrum of informed views on the appropriate role of AI is critical. The enthusiasts should rhapsodize, and the critics should take a stand based on their values.”

If AI reads first, writes first, and flags first, what remains human? Original thinking, judgment about relevance, and the ability to notice what’s missing. Human accountability where truth claims enter the record, human mentorship between draft and decision, and human voice where readers learn what findings mean: all irreplaceable. Assistance is welcome; substitution is not.

A simple test navigates the edge cases: if an activity counts as intellectual contribution when a person does it, treat it as a contribution when a tool materially shapes it. If it counts as housekeeping, treat tool use as housekeeping. Symmetry reduces confusion.

Why the Mood Was Not Despair

Are there reasons for optimism? Patel suggested that broader AI exposure reveals fragility, increasing demand for quality. Dahl pointed to likely infrastructure consolidation as collaboration accelerates. Miles noted that ChatGPT’s explosive growth demonstrates a sustained appetite for knowledge. As AI takes on routine work, judgment about why questions matter and whether methods are sound remains human. The task: let tools do what they’re good at, set boundaries where they’re not, and keep human judgment visible at consequential decision points.

Organizations must avoid short-sighted thinking that replaces talented people with fragile systems. Patel warned: “We need to make sure we don’t think early-career professionals are replaceable with AI, because they’re not.” Organizations treating AI as a substitute for human development may produce more of the “slop” they hope to eliminate.

The underlying message: the community can navigate this disruption by investing in people, demanding transparency where it matters, building tools that instruct rather than replace, and maintaining symmetry between human and machine assistance. Trust infrastructure will depend less on perfect detection than on clear accountability and keeping human judgment central wherever consequences are real.

Steve Smith

Steve Smith is the founder of STEM Knowledge Partners and an independent consultant with over 25 years of experience in scholarly publishing, including leadership roles at Blackwell Publishers, John Wiley & Sons, Frontiers Media, and AIP Publishing.
