Atlantis 46 (1): 42-55 (2025)
Abstract
This paper explores how Large Language Models (LLMs) foster the homogenization of both style and content and how this contributes to the epistemic marginalization of underrepresented groups. Utilizing standpoint theory, the paper examines how biased datasets in LLMs perpetuate testimonial and hermeneutical injustices and restrict diverse perspectives. The core argument is that LLMs diminish what José Medina calls “epistemic friction,” which is essential for challenging prevailing worldviews and identifying gaps within standard perspectives, as further articulated by Miranda Fricker (Medina 2013, 25). This reduction fosters echo chambers, diminishes critical engagement, and enhances communicative complacency. AI smooths over communicative disagreements, thereby reducing opportunities for clarification and knowledge generation. The paper emphasizes the need for enhanced critical literacy and human mediation in AI communication to preserve diverse voices. By advocating for critical engagement with AI outputs, this analysis aims to address potential biases and injustices and ensure a more inclusive technological landscape. It underscores the importance of maintaining distinct voices amid rapid technological advancements and calls for greater efforts to preserve the epistemic richness that diverse perspectives bring to society.