Editor’s Note: In today’s post, Frances Pinter, Director, Academic Relations, Central European University Press, speaks with Professor Yana Suchikova, Vice-Rector for Research at Berdyansk State Pedagogical University.
In a recent Scholarly Kitchen post summarizing the COPE Forum on “emerging AI dilemmas,” editors urged a shift from detection to transparent disclosure: “AI transparency is not about shame; it’s about intellectual honesty.” We spoke with Professor Yana Suchikova about GAIDeT – the Generative AI Delegation Taxonomy, which enables researchers to disclose the use of generative AI in an honest and transparent way.

Editors say the AI conversation is moving from detection to disclosure. What exactly is broken in today’s AI disclosures, and why does it matter right now for research integrity and trust?
Today’s AI disclosures fail in three ways: they are vague (“used ChatGPT for editing”), non-actionable (editors cannot tell what was done or why it matters), and they stigmatize honest authors. Instead of clarity, we get euphemisms; instead of routine practice, we get fear of judgment. Shifting from “detection” to clear, comparable declarations of generative AI contributions is a chance to restore the focus on scientific validity and accountability.
For a while, debates about “full prompt logs,” AI detectors, and watermarking dominated. Perhaps these seemed useful at the start. But today, when AI is used across the entire research lifecycle, such approaches look naïve and ineffective.
Honest authors often avoid disclosure. Is this due to stigma, inconsistent policies, fear of sanctions — or all of the above? What real harm does this cause for authors and editors?
People stay silent not out of laziness, but out of fear and confusion: every journal seems to have different rules, and in some communities, acknowledging AI use is treated as diminished authorship or even as the mark of a “dirty” text. The result is hypocrisy: everyone uses AI, but almost no one dares to say so openly. Honest authors worry; dishonest ones hide it with ease, leaving editors to waste time on suspicion and guesswork.
We have called this “The Purity Myth”: a situation in which academic culture privileges formal markers of purity and compliance over substantive intellectual contributions and rigor. If that continues, we risk losing sight of what really matters — ideas, methods, and scientific rigor. AI can help make research clearer, not diminish it. By continuing to stigmatize a tool that has already become part of daily work, we corner ourselves.
That is why we propose a simple but honest alternative: GAIDeT, a taxonomy of delegated tasks. It makes transparency normal, clear, and safe — just as CRediT once made contributor roles a standard part of authorship disclosure.
I see an interdisciplinary team behind GAIDeT. What brought you together? Three of the authors are from Ukraine — did that context shape your decision to design a taxonomy for AI disclosure? How did a task-based taxonomy become the obvious solution?
It’s actually a fun story. There are four of us: myself, Natalia Tsybuliak, and Serhii Nazarovets from Ukraine, and Jaime A. Teixeira da Silva, an independent researcher in Japan. Natalia and I work together at Berdyansk State Pedagogical University. When we read Jaime and Serhii’s article about cases where ChatGPT had been listed as an author, we realized another “for or against” opinion piece would change nothing. What was missing was a shared language of process: if AI isn’t an author, then how do you honestly explain what it did and who is accountable?
So, our first public step was a short comment in the same journal: “ChatGPT isn’t an author, but a contribution taxonomy is needed.” When that was published, we wrote to Jaime and Serhii, suggesting we collaborate. That’s how a small interdisciplinary team was formed, made up of a botanist/meta-scientist, a librarian/scientometrician, a psychologist, and a materials scientist. I think this diversity helped us design something broadly applicable across disciplines.
The real “aha moment” came when we abandoned lists of tools or prompt logs in favor of delegated tasks across the research lifecycle. A task is the smallest meaningful unit — and one you can actually compare.
The Ukrainian context mattered too. Right now, Ukraine is drafting a new Law on Academic Integrity. The first reading included a troubling clause: “A person cannot be considered the author of an academic work (or part of it) if it was generated by a computer program in automatic mode at the person’s request.” This is fundamentally flawed, as authorship is about responsibility. Even if AI helps, a human must remain accountable for the content. That principle is at the core of GAIDeT. That’s why we call it a taxonomy of delegation — to underline the central role of the human researcher.
Briefly, what is GAIDeT — and how does it differ from debates on “AI authorship,” binary checkboxes, or journals demanding prompts?
GAIDeT was created to help researchers clearly describe any assistance they received from AI during research or publication. It maps the research and publishing process into macro-level stages (e.g., literature review, data processing, writing) and micro-level tasks (e.g., coding, translation, proofreading, editing, drafting conclusions). Authors simply indicate at which stage AI was used and for what task. This way, the declaration makes it clear that AI functioned as a research instrument, not as an autonomous author.
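To make the structure concrete, here is a minimal sketch in Python of what such a stage-to-task mapping could look like; the stage and task names are taken from the examples above and are illustrative, not the full GAIDeT vocabulary.

```python
# Illustrative sketch only: a GAIDeT-style mapping of macro-level stages to
# micro-level tasks. The names mirror examples from this interview and are
# not the complete official taxonomy.
GAIDET_EXAMPLE = {
    "Literature review": ["Searching and systematizing the literature"],
    "Data processing": ["Code generation", "Data analysis"],
    "Writing": ["Translation", "Proofreading", "Editing", "Drafting conclusions"],
}

def delegated_tasks(selection: dict[str, list[str]]) -> list[str]:
    """Flatten an author's stage-by-stage selections into one task list."""
    return [task for tasks in selection.values() for task in tasks]
```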
Why is disclosure anchored in delegated tasks across the research lifecycle rather than in named tools or raw prompts? How does this level of granularity help editors make consistent decisions?
Because tools change, but tasks remain. “Searching and systematizing the literature,” “generating text” — these are intelligible to any editor, and they can be compared across manuscripts and journals. That level of granularity lets editors consistently decide what is acceptable and what isn’t. By contrast, requiring disclosure of raw AI “prompts,” as some suggest, is a dangerous path.
But why shouldn’t journals require authors to submit their prompts? What are the methodological, ethical, and fairness drawbacks — and what proportionate alternatives exist if editors want assurance?
Working with LLMs is iterative and context-dependent. Dozens or even hundreds of prompts, shifting contexts, external tool calls, model updates, and stochastic parameters can lie behind a single output. Even a full dump of prompts cannot reproduce a specific output without replicating the exact system state (model version, temperature, system prompts, toolchains). In other words, “prompting” is a weak indicator of reproducibility and cannot replace classical standards of evidence.
More importantly, prompts often contain unpublished data, commercially sensitive details, or material under ethical or contractual restrictions. Forcing authors to publish or store such logs creates leakage risks and undermines responsible data governance. Even anonymization does not always prevent reconstruction of context. That’s why the proportionate standard of transparency is a GAIDeT task declaration under human oversight — without requiring authors to submit their prompts.
What is the main advantage of your taxonomy? Could you show us a convincing GAIDeT statement that an author might include, and where in the manuscript it should appear?
The main advantage is simplicity, completeness, and intuitive clarity. To make it easy, we created a free online GAIDeT Declaration Generator. Here, the researcher selects delegated tasks from a checklist and instantly receives a ready-made statement to paste into their manuscript, ideally as a short subsection before the references.
In practice, using the generator takes just a few minutes:
- Enter who is completing the declaration.
- Indicate which GAI tools were used (e.g., ChatGPT-o3, Claude 3).
- Tick the research tasks delegated to AI from the GAIDeT list.
The system then produces a standardized statement, for example:
“The authors declare the use of generative AI during the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to AI tools under full human oversight: Code generation; Data analysis; Translation. AI tool used: ChatGPT [version x]. Responsibility for the final manuscript rests entirely with the authors. AI tools are not listed as authors and do not bear responsibility for the results. Declaration submitted: [x].”
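For readers who want to see the mechanics, here is a minimal sketch of how such a statement could be assembled from those three inputs; the function and field names are hypothetical, and this is not the code behind the official GAIDeT Declaration Generator.

```python
# Hypothetical sketch of a GAIDeT-style declaration builder; not the official
# generator's implementation.
from datetime import date

def build_gaidet_declaration(declarant: str, tools: list[str], tasks: list[str]) -> str:
    """Assemble a GAIDeT-style disclosure statement from the three inputs."""
    return (
        "The authors declare the use of generative AI during the research and "
        "writing process. According to the GAIDeT taxonomy (2025), the following "
        f"tasks were delegated to AI tools under full human oversight: {'; '.join(tasks)}. "
        f"AI tools used: {', '.join(tools)}. "
        "Responsibility for the final manuscript rests entirely with the authors. "
        "AI tools are not listed as authors and do not bear responsibility for the "
        f"results. Declaration submitted by {declarant} on {date.today().isoformat()}."
    )

print(build_gaidet_declaration(
    declarant="Corresponding author",
    tools=["ChatGPT (version x)"],
    tasks=["Code generation", "Data analysis", "Translation"],
))
```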
How can journals capture GAIDeT in submission systems so statements become searchable and comparable signals (for triage, analytics, policy monitoring) without burdening authors?
Journals only need to add a few structured fields to their submission systems and store them as metadata (for example, in JATS/XML using a controlled vocabulary). That way, GAIDeT statements become searchable signals — for triage, portfolio analysis, or policy monitoring — without adding extra work for authors. In practice, GAIDeT is a controlled vocabulary designed for metadata integration, ensuring not only transparency for readers but also interoperability across indexing services and databases.
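As a rough illustration of what that could look like downstream, the sketch below serializes a GAIDeT declaration into JATS-style custom-meta elements; the meta-name keys and values shown are assumptions made for this example, not an agreed JATS extension or official GAIDeT vocabulary.

```python
# Illustrative only: one possible way to express a GAIDeT declaration as
# JATS-style <custom-meta> entries. The meta-name keys and values here are
# assumptions for illustration, not an agreed JATS or GAIDeT standard.
import xml.etree.ElementTree as ET

def gaidet_custom_meta(tools: list[str], tasks: list[str]) -> str:
    group = ET.Element("custom-meta-group")
    for name, value in [
        ("gaidet-tools", "; ".join(tools)),
        ("gaidet-delegated-tasks", "; ".join(tasks)),
        ("gaidet-human-oversight", "yes"),
    ]:
        meta = ET.SubElement(group, "custom-meta")
        ET.SubElement(meta, "meta-name").text = name
        ET.SubElement(meta, "meta-value").text = value
    return ET.tostring(group, encoding="unicode")

print(gaidet_custom_meta(["ChatGPT (version x)"], ["Code generation", "Translation"]))
```

Stored this way, the declaration can be queried like any other structured field rather than scraped from free text.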
How does GAIDeT reduce social risk for early-career and non-native English authors, while still keeping human accountability as the core of authorship?
GAIDeT reduces stigma through the use of neutral task language. Declaring “language editing” is normal and honest. Generative AI tools can actually level the playing field, helping researchers with varying linguistic backgrounds to express their ideas more effectively. In such a future, the quality of writing will no longer be a barrier to publication; the focus will shift back to originality and rigor in the research itself.
I also believe universities should start including courses on the ethical use of AI in graduate training. Teaching researchers how to use these tools responsibly — and how to disclose their use transparently — is the foundation of academic integrity today. At the same time, GAIDeT ensures that human accountability remains non-negotiable, as final decisions, interpretations, and verification always rest with the authors. Early-career researchers should learn this from the start.
What’s the lowest-friction path for journals, universities, and funders to make GAIDeT the norm within a year? What two success metrics would convince skeptics it’s working (not just policy PDFs)?
The easiest path is to start with a pilot program in a handful of journals or publishers. Some Ukrainian journals have already done this, such as the Karazin Journal of Immunology, the Kharkiv Dental Journal, and the Ukrainian Journal of Radiology and Oncology. The next step is to add GAIDeT-based disclosure guidance in a journal’s instructions for authors. And then, of course, to build it into the metadata pipeline.
Two success markers after one year might be: 1) at least 80% of submissions in a given journal include a valid GAIDeT declaration, and 2) a measurable reduction in editorial queries along the lines of “please clarify what exactly AI did.”
If you could change one industry norm tomorrow to make AI disclosure safer, clearer, and more useful, what would it be—and what should readers do this week to help?
If I could change one norm tomorrow, I would place GAIDeT side-by-side with CRediT in every manuscript. That would normalize transparency without creating a witch-hunt culture.
And what can readers do right now? Try the GAIDeT Declaration Generator, suggest GAIDeT wording in a journal’s instructions for authors, and openly disclose one AI-delegated task in your next paper. That single step shifts the culture toward transparency and away from stigma.
Discussion
This is a wonderful interview – thank you, Frances, and congratulations, Yana, on articulating so clearly why delegation offers a more honest and less stigmatizing way of framing GenAI use in research. I especially appreciated the point that tasks remain stable even when tools change, which makes GAIDeT both practical and future-proof. It is encouraging to see this discussion on The Scholarly Kitchen. Building a culture where GenAI disclosure is normal, comparable, and safe will help editors, reviewers, and above all authors. I hope more journals will be willing to pilot such approaches and show that transparency can be simple rather than punitive.
Thank you, Serhii — it means a lot to see your voice here as part of our shared work. You put it beautifully: tasks remain stable even when tools change. That is exactly why GAIDeT feels like a practical path forward.
This interview is spot-on, and pairs nicely with Tony Alves’ guest post from last week (https://scholarlykitchen.sspnet.org/2025/09/24/guest-post-how-the-ai-debate-has-changed-in-just-a-few-short-years/). Both articles highlight that the issue is no longer whether AI will be used, but rather what is the best way of disclosing it. I completely agree there’s a prejudice against using GenAI, but it seems like most researchers (and reviewers) are using it behind the scenes anyway. GAIDeT appears to be a promising solution, similar to the CRediT authorship system. CRediT has seen increased implementation since being awarded a two-year grant to support outreach and engagement. Hopefully, funding bodies will become aware of the importance of standardising the disclosure of GenAI use and will sponsor the dissemination and implementation of systems like GAIDeT.
You’re absolutely right – the key question now is not IF AI will be used, but HOW to disclose it in a way that is fair, consistent, and non-stigmatizing. One of the next steps we are working on is exploring how GAIDeT declarations can be captured not just in articles but also in metadata. For example, we are looking for volunteer journals willing to experiment with integrating GAIDeT into Crossref metadata via Crossmark (https://doi.org/10.5281/zenodo.17176719). If editors or publishers are curious to test this, we’d be glad to hear from them. Even small pilots can help us learn how to make AI disclosures both human-readable and machine-actionable.
Exactly, Serhii — and this is one of the most exciting next steps: making GAIDeT declarations not only human-readable but also machine-actionable. If any editors or publishers reading this are curious, please do reach out — we would be delighted to explore pilots together.
Thank you, Victoria — I’m so glad you highlighted the link to CRediT. That is exactly the inspiration: to make disclosure of GenAI contributions as normal and accepted as contributor roles. I completely agree that funder support will be essential for scaling implementation.
I really enjoyed reading this, great to see such a pragmatic suggestion! Could this possibly be considered an expansion of CRediT, rather than a whole new standard in its own right? It would help hugely with implementation!
In practice, the two can easily work together. A journal could present the CRediT roles as usual, followed directly by a GAIDeT declaration that specifies which tasks (if any) were supported by GenAI, under human oversight. That way, transparency is increased without creating parallel or conflicting standards. So, I’d frame GAIDeT less as a “whole new standard” and more as an extension layer that complements CRediT where GenAI is involved. We’d love to see future alignment between the two; that would indeed make adoption and implementation much smoother!