Editor’s Note: Today’s post is by Henning Schoenenberger, Vice President Content Innovation, Springer Nature; Kiera McNeice, Research Transparency Manager, Cambridge University Press & Assessment; and Joris van Rossum, Program Director, STM Solutions.

[Table showing STM recommendations]

Where did this recommendation come from?

It is almost a cliché to say that AI has changed the academic publishing industry – for authors, reviewers, editors, readers, and publishers themselves. In 2023, STM published guidelines outlining ethical and practical considerations regarding the use of AI tools in the publication process. In the two years since, technology has progressed significantly, creating even more possibilities to use AI to assist in the dissemination of research outputs. In particular, recent developments in generative AI (GenAI) have led to a rapid expansion in the capabilities of machine tools to assist with writing, editing, formatting, and even enhancing research manuscripts with images and diagrams.

As a result, there is a need for transparency around the use of AI in research dissemination. Publishers often provide guidelines for authors to transparently declare any human assistance in manuscript preparation, for example if they have used professional language editing services. But publisher guidelines have not kept pace with the rapid developments in AI, leading to uncertainty in the research ecosystem: Authors are unclear about their obligations to declare their use of AI tools; peer reviewers are not sure about acceptable AI use and declarations in manuscripts; and readers do not know to what extent AI tools may have been used in the creation of a manuscript. This uncertainty and lack of transparency around AI contributions to publications poses a risk to the integrity of academic publishing. Clear definitions and terminology are needed to facilitate the development of guidance and policies regarding the declaration and use of various kinds of AI assistance in manuscript preparation.

For this reason, in 2024 STM assembled a Task and Finish Group (TFG) with the aim of creating a classification of the various possible uses of AI, including GenAI, in the preparation of manuscripts. The TFG published a draft version at the end of May 2025, accompanied by a webinar that kicked off a consultation phase in which we solicited feedback from stakeholders across the academic publishing industry. We received many valuable comments, which shaped the final recommendation that we are sharing here.

Who is this recommendation for?

Our primary aim is for publishers of academic research to consider each of the AI-assisted activities defined in this classification, in conjunction with the 2023 STM guidelines, and to:

  • Determine whether each activity is permissible for authors to use when preparing manuscripts, and ensure clear policies are available to authors.
  • Determine which permitted AI activities must be transparently declared during the submission process (e.g. to editors and peer reviewers).
  • Determine which permitted AI activities must be declared in the content of manuscripts themselves, to be included in the final publication and visible to all readers.
  • Provide clear guidance to authors about policies regarding the use and declaration of AI tools in preparing manuscripts.

We hope this resource will help research communities to clarify and codify expectations around the use of AI in disseminating research.

Are there limitations to the classification?

Yes, and these are clearly described in our recommendation document. There are many ways in which machine tools can be used in research processes (for example, to gather or analyze raw data), which are outside the scope of this recommendation. This classification only addresses the use of AI assistance for the preparation of manuscripts intended for publication in the scholarly communication ecosystem.

We’d also like to underscore that this classification does not attempt to recommend or harmonize publishers’ policies on the use or declaration of AI; we recognize that different research communities will have different expectations here. Rather, this work expands on the 2023 STM guidelines, which outlined ethical and practical considerations regarding the use of AI in the publication process, by clarifying and classifying the different ways that authors might use AI tools. Actual policy decisions about what is permitted or must be declared – to what level of detail, how, and where – remain the responsibility of publishers, in collaboration with the research communities they serve.

What happens next?

Given the rate of AI development, we have no doubt that this classification will need to be a living document and undergo further revisions. We welcome further feedback on our recommendation.

We also acknowledge that other initiatives have addressed this issue, such as the GAIDeT taxonomy, the CANGARU project, and an initiative led by CNKI (China National Knowledge Infrastructure). The STM Association is interested in engaging with the various communities and organizations in this space to align our understanding and classification of the use of AI tools in research dissemination.

Where can we see the Classification, and where can we share comments or ask questions?

The classification can be downloaded here. For more information or suggestions, Joris can be contacted at [email protected].

Henning Schoenenberger

Henning Schoenenberger, Vice President Content Innovation at Springer Nature, is an accomplished leader in innovative research content solutions. With a strong background in product management and digital innovation, Henning has a track record of early adoption of artificial intelligence in scholarly publishing. He pioneered the first machine-generated research book published by Springer Nature.

Kiera McNeice

Kiera McNeice is Research Transparency Manager at Cambridge University Press & Assessment (CUPA). She is responsible for strategy and policy regarding the transparency and reproducibility of research published by CUPA, and collaborates closely with colleagues in its Editorial teams and its Publishing Ethics and Research Integrity team, promoting best practices in open research and helping ensure that the research CUPA publishes is robust and reliable.

Joris van Rossum

Joris van Rossum is Program Director of STM Solutions. Joris leads various programs and projects within STM Solutions, with a special focus on the STM Integrity Hub and AI. Before joining STM, Joris worked for several leading companies and organizations across the STM publishing space, including Digital Science and Elsevier, where he initiated and led a variety of important innovations and cross-publisher initiatives within the research and publishing ecosystem. Joris holds a master’s degree in biology and a PhD in philosophy.

Discussion

2 Thoughts on "Guest Post: Classifying AI Use in Manuscript Preparation – A Recommendation"

There is another important initiative to be aware of which dovetails nicely with the proposed classification system — RAISE, “Responsible AI in Evidence Synthesis,” for using AI in research and evidence evaluation. The RAISE guidance, presented in three documents, outlines recommendations about AI use for the key roles in evidence synthesis (RAISE 1), guidance on building and evaluating AI tools (RAISE 2), and advice on selecting and using these tools with attention to ethical, legal, and regulatory considerations (RAISE 3). The information is available on OSF at https://osf.io/fwaud/

This is a very timely and valuable recommendation. The rapid uptake of GenAI tools has left many authors, reviewers, and editors unsure about what should or should not be disclosed. I especially welcome the fact that you frame this as a “living document” and openly call for collaboration. Our GAIDeT-team also sees the importance of building common ground across different initiatives, and we warmly support the idea of uniting efforts. We are particularly interested in exploring how such classifications can be made machine-readable and tested in real publishing workflows, and we would be glad to see journals willing to experiment with this. Thank you for opening the door to this wider conversation.
