Impact Metrics on Publisher Platforms: Who Shows What Where?
A review of 12 major publishers finds that they display an average of 6 journal-level impact metrics on their platforms. The Journal Impact Factor is the only metric displayed on all 12.
Creative Commons licenses continue to confuse the scholarly communications community. Here we collect a decade-plus of articles that aim to offer some clarity on their use.
Today’s guest blogger shares highlights from a recent panel at the New Directions Seminar, which concluded that AI is simultaneously the greatest challenge and the greatest opportunity.
Publishers have led themselves into a mess by focusing on rising submissions as a positive indicator of journal performance. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.
In honor of International Open Access Week, The Scholarly Kitchen Chefs ponder the theme: "Who owns our knowledge?"
Diamond Open Access promises equity, but sustainability challenges remain. Discover the hidden costs, global gaps, and paths toward lasting open publishing.
Nearly three years after ChatGPT’s debut, generative AI continues to reshape scholarly publishing. The sector has moved from experimentation toward integration, with advances in ethical writing tools, AI-driven discovery, summarization, and automated peer review. While workflows are becoming more efficient, the long-term impact on research creation and evaluation remains uncertain.
In the fast-moving world of AI research tools, there are many community-focused concerns that vendors should have strong opinions on and plans for, from privacy and security to sustainability and copyright. But the most misunderstood issue, in my view, is the one at the heart of it all — how AI will reshape the economics of academic research.
Today, we speak with Prof. Yana Suchikova about GAIDeT, the Generative AI Delegation Taxonomy, which enables researchers to disclose the use of generative AI in an honest and transparent way.
The STM Association offers a classification scheme for the various possible uses of AI, including GenAI, in the preparation of manuscripts.
To kick off Peer Review Week, we asked the Chefs: "What’s a bold experiment with AI in peer review you’d like to see tested?"
NISO’s Open Discovery Initiative (ODI) survey reflects the positive and negative expectations of generative AI in web-scale discovery tools.
Summing up the Committee on Publication Ethics (COPE) Forum discussion on Emerging AI Dilemmas in Scholarly Publishing, which explored the many challenges AI presents for the scholarly community.
A scholarly communication ecosystem that relies on voluntary support rather than charging for access to content becomes radically less capable of keeping money in the system.
The MIT Press surveyed book authors on their attitudes toward LLM training practices. In Part 2 of this two-part post, we discuss recommendations for stakeholders to avoid unintended harms and preserve core scientific and academic values.