Editor’s Note: Today’s post is by John Frechette. John is co-founder and CEO of moara.io, an AI-powered research tool, and Sourced Economics, a research firm. John is also an adjunct instructor at George Washington University and Stevenson University.
On a recent webinar, I hosted a panel with a group of rockstar university librarians from the DC area. We discussed how academic organizations, more than most private companies, prioritize mission alignment when working with vendors.
And in the fast-moving world of AI research tools, there are many community-focused concerns that vendors should have strong opinions on and plans for, from privacy and security to sustainability and copyright.
But the most misunderstood issue, in my view, is the one at the heart of it all — how AI will reshape the economics of academic research.

The Direct Effects of AI on Academic Research
Let’s start with the basics. In economics, the law of demand holds that price and quantity are inversely related. But “price” isn’t strictly monetary. It also includes time. For most researchers, time is the dominant marginal cost. Any innovation that reduces the time required to consume research effectively lowers its price.
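The law-of-demand logic above can be sketched in a few lines of code. This is purely illustrative: the demand curve, the hourly value of time, and the reading-time figures are all hypothetical numbers I've chosen to show the direction of the effect, not estimates from any study.

```python
# Illustrative sketch: treating time as part of the "price" of consuming a paper.
# All parameters are hypothetical and chosen only to show the direction of the effect.

def effective_price(money_cost, hours, hourly_value):
    """Total cost of consuming one paper: monetary price plus the value of time spent."""
    return money_cost + hours * hourly_value

def papers_demanded(price, intercept=60, slope=0.5):
    """A toy linear demand curve: quantity demanded falls as effective price rises."""
    return max(0, intercept - slope * price)

hourly_value = 40   # assumed value of a researcher's hour, in dollars
access_fee = 0      # an open-access paper: no monetary price, only time

# Before AI: two hours to find, read, and digest a paper.
before = papers_demanded(effective_price(access_fee, 2.0, hourly_value))

# With AI-assisted search and summarization: thirty minutes per paper.
after = papers_demanded(effective_price(access_fee, 0.5, hourly_value))

print(before, after)  # quantity demanded rises as the time-price falls
```

Even with a zero monetary price, cutting reading time from two hours to thirty minutes lowers the effective price from $80 to $20 of time, and quantity demanded rises accordingly. That is the whole mechanism: AI lowers the time-price of research, so more of it gets consumed.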
AI, in nearly every form, does just that. Whether through better search, automated summarization, or customization, AI dramatically reduces the hours needed to find, understand, or reuse papers. And the next wave of tools is pushing even further, offering more advanced data extraction, citation management, and large context windows that can analyze hundreds of papers at once.
The result is a surge in demand for academic research, and with it, more research being produced. A recent large-scale study of more than 41 million papers across the natural sciences found that scientists engaged in AI-augmented research published over three times as many papers and received nearly five times as many citations as their peers. However, note that these gains also came with signs of narrowing research diversity and collaboration.
But despite early usage trends, these cost and quantity improvements won’t be distributed evenly. The biggest gains will ultimately go to non-native English speakers, early-career researchers, and professionals working outside of academia. In turn, these folks are likely to be reading and writing significantly more papers.
A very recent study in the social and behavioral sciences showed a comparable trend — researchers using generative AI published more papers and saw modest quality gains, with the greatest benefits among early-career and non-English-speaking scholars.
AI should also make it easier for scholars to engage with research outside their own fields. In fact, for highly experienced academic researchers, the ability to quickly get up to speed on cross-disciplinary literature may be AI’s most powerful contribution.
Reference lists will very likely increase in size, a positive development for fields like my own, economics, where citations are often selected sporadically and, at times, too subjectively.
This latter shift wasn’t inevitable. It’s happening largely because researchers have demanded that AI tools include traceable sources in their summaries. And for authors whose work is published in academic journals, the result is simple — these tools will very likely increase the usage and citation of their research — potentially by a lot.
The Indirect Effects of AI on Research
One of my biggest critiques of research practices in fields like economics isn’t just the limited scope of reference lists, or how informal literature reviews tend to be — though both are true. It’s how poorly insights across papers are synthesized, and how rarely they shape the conclusions of new work. For instance, in many econometric studies, literature reviews are treated as a checkbox exercise.
As such, in fields like economics, literature isn’t just under-read but under-integrated. Prior studies are often cited to show awareness of existing work rather than to inform hypotheses, methods, or analyses. Too often, the literature review becomes record-keeping instead of a foundation for new inquiry.
AI tools are beginning to address this challenge. The ability to instantly generate narrative summaries of key literature on nearly any topic is a breakthrough that, for my own work, is transformative. And with newer, more customizable tools, researchers can select the exact papers and even annotations to include in these summaries.
Without AI, summarizing an area of research generally involves sifting through papers one by one, jotting down notes, and piecing together a coherent story. Reconciling empirical results is difficult. And despite best efforts, important papers are often missed.
While AI systems can’t eliminate every challenge, they can make it easier to find the key ideas, connect findings across studies, and build stronger, more informed arguments. These tools won’t just produce more research; they’ll produce better research.
Looking ahead, I hope we may even move beyond having standalone “literature review” sections in research papers. Instead, citations could be integrated seamlessly throughout, describing both high-level context and specific insights from earlier studies.
This would shift how researchers engage with the literature. They may be less deeply versed in individual papers, but more attuned to the broader patterns and points of consensus in their field. What’s more, less time will be spent reading through papers that appeared relevant but proved not to be.
A Note on Concerns
No innovation comes without trade-offs. In my view, among the most legitimate and significant concerns when embedding AI into research workflows is the potential harm to learning.
There’s certainly real value in reading full papers, not just scanning metadata in an AI-generated summary. That process exercises cognitive skills that are crucial for generating new ideas. Learning requires mental effort. And it’s unclear whether, say, skimming metadata across 20 papers is more useful than reading one in full — especially for students still developing their research habits.
A separate discussion should be had on imposing thoughtful constraints in learning environments. Just as we limit notes on certain exams, some assignments might deliberately require students to engage with papers directly, without AI assistance. These constraints aren’t about rejecting AI, but about designing methods that bolster rather than replace mental effort.
Another common concern — and a recurring one in every wave of innovation — is automation.
It’s always revealing to look at past technological shifts through the eyes of the professionals affected. For example, a quick Google search for “digitalization in libraries,” with a filter for pre-2000 results, yields interesting articles like this one from CLIR and another from the ALA — both focused on the idea of a “library without walls.” Much of the debate at the time centered on the evolving role of the library as well as whether library services would decline, and how to prevent that outcome.
My view is that digitalization posed a larger disruption to academic institutions than AI does today. And, while it may have slowed the growth of demand for some library services, it didn’t reduce demand overall. According to Statista data, library spending and staffing have remained relatively stable.
As with every shift, the work of academic professionals will evolve, but widespread displacements are extremely unlikely.
Clearly, the right balance must be struck between innovation and caution. The academic world found that balance through the period of early digitalization. Let’s carry those same lessons over into the current landscape.
Facing Forward
If we’re serious about progress, this type of mindset needs to go, and quickly:

To be sure, mine too! But technological competition, in the private sector and academia alike, increasingly favors those who engage early — not just the fastest — and are the most willing to learn.
Let’s learn to love change and guide it with purpose.
Discussion
8 Thoughts on "Guest Post — The Economics of AI in Academic Research"
I’m genuinely surprised that you didn’t address some of the major challenges AI poses to academic research, such as copyright infringement, authors’ rights, and the paper mill crisis that has flooded the scholarly communications landscape. I am hoping that you can possibly add some thoughts on these very real topics.
Hey Rachel, thanks for the comment. I did not intend for this article to be a full accounting of costs and benefits, but rather an analysis of the major effects of AI on the quantity and quality of scholarly research.
Having said that, I do touch on my estimation that “for authors whose work is published in academic journals, the result is simple — these tools will very likely increase the usage and citation of their research — potentially by a lot.” Contrary to popular belief, I believe this new wave of research tools will be a huge positive for authors, whose articles will be significantly more searchable and accessible for others to use.
However, concerns do remain with more general AI systems, where sources are cited, but not as rigorously as they are by tools focused strictly on analyzing academic papers.
Cheers!
“I hope we may even move beyond having standalone “literature review” sections in research papers. Instead, citations could be integrated seamlessly throughout, describing both high-level context and specific insights from earlier studies” – I would suggest that this is more a question of research culture and of how one is expected to structure a paper, rather than the use of AI. For instance, in the humanities, it is common to have a literature review and citations throughout the text to contextualise your ideas. At the same time, this integration requires more thorough reading of a paper, rather than less, which appears to contradict some of the ideas you’ve expressed.
I am surprised you (and many others who write on this topic) did not address the scope of knowledge contained in LLMs. Boundless resources on many topics and in many languages are not digitized, and certainly not in LLMs. Is knowledge now defined by what is digitized?
Anastasiia – great point; achieving this is more dependent on other factors (e.g., discipline, methodologies used) than on AI itself. The idea I had in mind, though, was that summarization allows researchers to more readily map individual papers onto an overarching narrative or timeline – essentially making it more intuitive to integrate them into the body of new work rather than isolating them in a standalone lit review section.
Roxie – thanks also for the comment. I agree that tools capable of drawing more effectively from non-English research (and, for that matter, a larger number of journals) would be a major step forward.
To your other point – certainly, significant knowledge exists outside of the digital world. But just because LLMs (or any other software) can’t access that knowledge doesn’t mean they aren’t a major advancement for science. We will never have a product that can log and interpret knowledge in every form.
That said, I’d argue LLMs are massively increasing, not decreasing, the diversity of sources we learn from. A super basic example: historically, I typically reviewed one or two sources following a Google search. Thanks to Gemini, I’m reviewing data drawn from dozens, even hundreds of sources.
Finally, I’d just note that the tools I’m referring to are designed for a specific task – reviewing academic papers – which, fortunately, are already largely digitized and consistently formatted.
What about the cost of AI tools? Is that not just another digital divide? I remember when we could watch the World Series on the local television station, now we have to pay for that access. I am fine with learning and using and incorporating AI into my research workflow, but not at a cost and not sharing that divide or talking about it to students is the same as saying, you get better results if you pay to play, or the ICK factor. Money and greed. I guess I will just have to read the New York Times to get my baseball highlights (free to me with library card), or better yet, ask a friend, or listen to Public Radio (also needs money). Just too much ICK around the paid version for me to really want to pursue it.
Tracy: I am too frustrated with being unable to watch my Boston Bruins without paying up the you-know-what! I think it would be great if all AI tools were free – but because people have to build them, and no one can work for nothing – they will inevitably have to cost something. I guess the optimistic takes I’d offer are (a) a HUGE (almost unprecedented) amount of functionality in this space is indeed available for free – compared with many A-Z resources that are extremely hard to access, (b) it is a very competitive field compared with, say, content – as such, the type of pricing / margins seen in many established academic technology markets is really not possible (at least at the moment), and finally, (c) existing systems also cost both time and money (even MSFT suite has a price) – the aim is for newer tools to DRAMATICALLY decrease those costs, while also hopefully leading to better research outcomes. Cheers