Impact Metrics on Publisher Platforms: Who Shows What Where?
A review of 12 major publishers finds that they display an average of 6 journal-level impact metrics on their platforms. The Journal Impact Factor is the only metric displayed on all 12.
Creative Commons licenses continue to confuse the communications community. Here we collect a decade-plus of articles that aim to offer some clarity on their use.
Nearly three years after ChatGPT’s debut, generative AI continues to reshape scholarly publishing. The sector has moved from experimentation toward integration, with advances in ethical writing tools, AI-driven discovery, summarization, and automated peer review. While workflows are becoming more efficient, the long-term impact on research creation and evaluation remains uncertain.
In the fast-moving world of AI research tools, there are many community-focused concerns that vendors should have strong opinions on and plans for, from privacy and security to sustainability and copyright. But the most misunderstood issue, in my view, is the one at the heart of it all — how AI will reshape the economics of academic research.
Between a policy environment focused on defunding and deleting data collections – an environment in which little can be trusted – and an onslaught of new AI tools that feed indiscriminately on data, the bits of information at the intersection of rows and columns are appearing in headlines more than ever before. To avoid cultural memory loss, we must build systems that save what humanity needs across disciplinary silos, rather than preserving some archives and losing others through an accident of history.
AI web harvesting bots are different from traditional web crawlers and violate many of the established rules and practices in place. Their rapidly expanding use is emerging as a significant IT management problem for content-rich websites across numerous industries.
Does your publishing organization need a manifesto? Writing one can be a great exercise for team building and planning, and a way to ignite action.
We’re finally seeing a move to truly digital-first publishing systems. In today’s post, Alice Meadows interviews Liz Ferguson of Wiley about this transition, including Wiley’s own Research Exchange platform.
Today, we speak with Prof. Yana Suchikova about GAIDeT, the Generative AI Delegation Taxonomy, which enables researchers to disclose the use of generative AI in an honest and transparent way.
Today’s guest author offers a progress report on recent efforts to build open-source technology for open access book metrics.
To kick off Peer Review Week, we asked the Chefs: What’s a bold experiment with AI in peer review that you’d like to see tested?
NISO’s Open Discovery Initiative (ODI) survey reflects both the positive and the negative expectations for generative AI in web-scale discovery tools.
Summing up the Committee on Publication Ethics (COPE) Forum discussion on Emerging AI Dilemmas in Scholarly Publishing, which explored the many challenges AI presents for the scholarly community.
As AI becomes a major consumer of research, scholarly publishing must evolve: from PDFs for people to structured, high-quality data for machines.
The MIT Press surveyed book authors on their attitudes toward LLM training practices. In Part 2 of this two-part post, we discuss recommendations for stakeholders to avoid unintended harms and preserve core scientific and academic values.