Results for 'algorithmic bias'

987 found
  1. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias (...)
    10 citations
  2. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John, AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations of various understandings of bias, ranging from neutral deviations from a standard to morally problematic instances of injustice due to prejudice, discrimination, and disparate treatment. This terminological confusion impedes efforts to address clear cases of discrimination. In this paper, we examine the promises and challenges of different approaches to disambiguating bias and designing for justice. While both approaches aid in understanding and addressing clear algorithmic harms, we argue that they also risk being leveraged in ways that ultimately deflect accountability from those building and deploying these systems. Applying this analysis to recent examples of generative AI, our argument highlights unseen dangers in current methods of evaluating algorithmic bias and points to ways to redirect approaches to addressing bias in generative AI at its early stages in ways that can more robustly meet the demands of justice.
    4 citations
  3. Algorithmic Bias and Risk Assessments: Lessons from Practice.Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of (...)
    5 citations
  4. Fuck the Algorithm: Conceptual Issues in Algorithmic Bias.Catherine Stinson - manuscript
    Algorithmic bias has been the subject of much recent controversy. To clarify what is at stake and to make progress resolving the controversy, a better understanding of the concepts involved would be helpful. The discussion here focuses on the disputed claim that algorithms themselves cannot be biased. To clarify this claim we need to know what kind of thing ‘algorithms themselves’ are, and to disambiguate the several meanings of ‘bias’ at play. This further involves showing how biases of moral import can result from statistical biases, and drawing connections to previous conceptual work about political artifacts and oppressive things. Data bias has been identified in domains like hiring, policing and medicine. Examples where algorithms themselves have been pinpointed as the locus of bias include recommender systems that influence media consumption, academic search engines that influence citation patterns, and the 2020 UK algorithmically-moderated A-level grades. Recognition that algorithms are a kind of thing that can be biased is key to making decisions about responsibility for harm, and preventing algorithmically mediated discrimination.
  5. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications. Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.
    1 citation
  6. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
    12 citations
  7. Algorithmic Fairness Criteria as Evidence.Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves (...)
    3 citations
  8. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques and big data, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    45 citations
  9. Tackling Racial Bias in AI Systems: Applying the Bioethical Principle of Justice and Insights from Joy Buolamwini’s “Coded Bias” and the “Algorithmic Justice League”.Etaoghene Paul Polo & Donatus Osatofoh Ailodion - 2025 - Bangladesh Journal of Bioethics 16 (1):8-14.
    This paper explores the issue of racial bias in artificial intelligence (AI) through the lens of the bioethical principle of justice, with a focus on Joy Buolamwini’s “Coded Bias” and the work of the “Algorithmic Justice League.” AI technologies, particularly facial recognition systems, have been shown to disproportionately misidentify individuals from marginalised racial groups, raising profound ethical concerns about fairness and equity. The bioethical principle of justice stresses the importance of equal treatment and the protection of vulnerable (...)
  10. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions (...)
    1 citation
  11. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing power over our lives. They can and often do wield that power unfairly, and much has been said about algorithmic fairness. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, asking: What is it? Is it possible? And what is its normative significance?
    1 citation
  12. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions (...)
    3 citations
  13. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, (...)
    1 citation
  14. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four (...)
    10 citations
  15. Varieties of Bias.Gabbrielle M. Johnson - 2024 - Philosophy Compass (7):e13011.
    The concept of bias is pervasive in both popular discourse and empirical theorizing within philosophy, cognitive science, and artificial intelligence. This widespread application threatens to render the concept too heterogeneous and unwieldy for systematic investigation. This article explores recent philosophical literature attempting to identify a single theoretical category—termed ‘bias’—that could be unified across different contexts. To achieve this aim, the article provides a comprehensive review of theories of bias that are significant in the fields of philosophy of (...)
    8 citations
  16. Algorithmic Recommendation and Aesthetic Flourishing.Anthony Cross - forthcoming - Journal of Aesthetics and Art Criticism.
    In the age of streaming, we face a pressing problem of aesthetic choice: how are we to navigate the overwhelming quantity of content to which we now have access? Streaming platforms like Spotify and Netflix apply sophisticated machine learning tools to recommend personalized content to individual users. These recommender systems are presented to users as a technological solution to the problem of aesthetic choice, promising to help us discover new opportunities for engagement with aesthetic value. However, overreliance on algorithmic (...)
  17. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
    80 citations
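    As a rough illustration of the statistical criteria Hedden discusses (not taken from the paper), the sketch below computes three commonly cited group-level quantities (selection rate, false-positive rate, and positive predictive value) on synthetic data; the groups, base rates, score model, and 0.5 cut-off are made-up assumptions.

    # Illustrative only: toy computation of common statistical fairness criteria.
    # Groups, base rates, the score model, and the cut-off are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                  # 0 / 1: two hypothetical groups
    base_rate = np.where(group == 0, 0.30, 0.45)   # assumed unequal base rates
    outcome = rng.random(n) < base_rate            # the actual outcome being predicted
    score = 0.35 * outcome + 0.65 * rng.random(n)  # noisy risk score
    pred = score > 0.5                             # the algorithm's binary prediction

    for g in (0, 1):
        sel = group == g
        selection_rate = pred[sel].mean()          # P(pred = 1 | group)
        fpr = pred[sel & ~outcome].mean()          # P(pred = 1 | outcome = 0, group)
        ppv = outcome[sel & pred].mean()           # P(outcome = 1 | pred = 1, group)
        print(f"group {g}: selection={selection_rate:.2f}  FPR={fpr:.2f}  PPV={ppv:.2f}")
    # With unequal base rates these quantities generally cannot all be equalized at
    # once, which is the tension the statistical-fairness literature examines.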
  18. Algorithmic Harms and Algorithmic Wrongs.Nathalie DiBerardino, Clair Baleshta & Luke Stark - 2024 - In FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. New York, NY: Association for Computing Machinery.
    New artificial intelligence (AI) systems grounded in machine learning are being integrated into our lives at a rapid rate, but not without consequence: scholars across domains have increasingly pointed out issues related to privacy, transparency, bias, discrimination, exploitation, and exclusion associated with algorithmic systems in both public and private sector contexts. Concerns surrounding the adverse impacts of these technologies have spurred discussion on the topics of algorithmic harm. However, the overwhelming majority of articles on said harms offer (...)
    2 citations
  19. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee (...)
    4 citations
  20. Generalization Bias in Large Language Model Summarization of Scientific Research.Uwe Peters & Benjamin Chin-Yee - forthcoming - Royal Society Open Science.
    Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study. We tested 10 prominent LLMs, including ChatGPT-4o, ChatGPT-4.5, DeepSeek, LLaMA 3.3 70B, and Claude 3.7 Sonnet, comparing 4900 LLM-generated summaries to (...)
    9 citations
  21. Cognitive bias in large language models: A vindicatory approach.David Thorstad - forthcoming - British Journal for the Philosophy of Science.
    Recent studies allege that large language models (LLMs) exhibit a range of cognitive biases familiar from human cognition. I argue that the case for many biases is weaker than it may appear. Using case studies of knowledge effects in the Wason selection task, availability bias in relation extraction, and anchoring bias in code generation, I show how a range of vindicatory strategies traditionally used to vindicate apparent biases in humans can be used to push back against allegations of (...)
  22. Data Speak but Sometimes Lie: A Game-Theoretic Approach to Data Bias and Algorithmic Fairness.Chiara Manganini, Giuseppe Primiero & Esther Anna Corsi - 2026 - International Journal of Approximate Reasoning 190 (109608).
    In the present work, we develop a novel information-theoretic and logic-based approach to data bias in Machine Learning predictions and show its relevance in the specific context of fairness evaluation. We frame predictions made on biased data as Ulam games, which formalise key aspects of data-driven inference, and from which a variation of the rational non-monotonic consequence relation can be defined. We investigate this framework to model how differential levels of noise in input features impact Machine Learning predictions. To (...)
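    A small numerical aside (my own toy analogue, not the paper's Ulam-game formalism): it simulates the narrower point that when one group's input feature carries more measurement noise, an identical decision rule produces more errors for that group; every number below is an assumption chosen for illustration.

    # Illustrative only: differential feature noise across groups and its effect on
    # prediction error. Not the Ulam-game model from the paper, just a toy analogue.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20_000
    group = rng.integers(0, 2, n)
    latent = rng.normal(0.0, 1.0, n)              # quantity the label actually depends on
    label = latent > 0.0
    noise_sd = np.where(group == 0, 0.2, 1.0)     # assumed: group 1's feature is noisier
    feature = latent + rng.normal(0.0, noise_sd)  # what the predictor gets to observe
    pred = feature > 0.0                          # the same simple rule for everyone

    for g in (0, 1):
        sel = group == g
        err = (pred[sel] != label[sel]).mean()
        print(f"group {g}: error rate = {err:.3f}")
    # The noisier group gets a higher error rate although the rule itself is neutral,
    # one way differential data quality shows up as unfairness in predictions.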
  23. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in (...)
    2 citations
  24. Broomean(ish) Algorithmic Fairness?Clinton Castro - 2025 - Journal of Applied Philosophy 42 (2):639-651.
    Recently, there has been much discussion of ‘fair machine learning’: fairness in data‐driven decision‐making systems (which are often, though not always, made with assistance from machine learning systems). Notorious impossibility results show that we cannot have everything we want here. Such problems call for careful thinking about the foundations of fair machine learning. Sune Holm has identified one promising way forward, which involves applying John Broome's theory of fairness to the puzzles of fair machine learning. Unfortunately, his application of Broome's (...)
    2 citations
  25. Should Algorithms that Predict Recidivism Have Access to Race?Duncan Purves & Jeremy Davis - 2023 - American Philosophical Quarterly 60 (2):205-220.
    Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds, or distinct racial ‘tracks’. Is there (...)
    2 citations
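    To make the idea of race-sensitive risk thresholds concrete, here is a toy sketch (synthetic data, not COMPAS; the score shift, base rates, and cut-offs are assumptions): giving the group whose scores run systematically higher its own, higher cut-off narrows the false-positive-rate gap relative to a single shared threshold.

    # Illustrative only: group-sensitive risk thresholds on synthetic scores.
    # Not COMPAS data; base rates, the score distortion, and cut-offs are made up.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000
    group = rng.integers(0, 2, n)
    base_rate = np.where(group == 0, 0.3, 0.5)             # assumed unequal base rates
    reoffend = rng.random(n) < base_rate
    # Risk score whose calibration drifts upward for group 1 (a made-up distortion).
    score = np.clip(0.4 * reoffend + 0.6 * rng.random(n) + 0.1 * group, 0.0, 1.0)

    def fpr(threshold, g):
        sel = (group == g) & ~reoffend
        return (score[sel] >= threshold).mean()

    shared = 0.5                                            # one cut-off for everyone
    per_group = {0: 0.5, 1: 0.6}                            # group-sensitive cut-offs
    print(f"shared threshold FPRs:    {fpr(shared, 0):.3f} vs {fpr(shared, 1):.3f}")
    print(f"per-group threshold FPRs: {fpr(per_group[0], 0):.3f} vs {fpr(per_group[1], 1):.3f}")
    # Whether such group-indexed thresholds are permissible or required is the
    # normative question the paper takes up; the arithmetic only shows the mechanics.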
  26. Mirrors That Mutate: AI, Bias, and the Architecture of Human Echoes.Som Subhro Nath - manuscript
    Bias in Artificial Intelligence (AI) systems is often discussed in terms of gender, race, or ideology; however, numerical bias—especially in pseudo-random decision-making—remains largely unexamined. This study presents a speculative yet empirically grounded thought experiment exploring the mutability of algorithmic bias in Large Language Models (LLMs). Building upon prior observations of the anomalous recurrence of the number 27 in first-interaction prompts (e.g., “Pick a number between 1 and 50”), the study proposes a conceptual shift: what if the (...)
  27. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein has two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches (...)
  28. Artificial Intelligence and unintended bias: A call for responsible innovation.Dhruvitkumar Talati - 2021 - International Journal of Science and Research Archive 2021 (2(02)):298-312.
    This essay discusses the intricate and multifaceted problem of algorithmic bias in artificial intelligence (AI) systems, and emphasizes its human rights, social, and ethical implications. As AI technologies become increasingly embedded in high-stakes areas of medicine, finance, employment, law enforcement, and social services, risks of discriminatory decision-making remain on the rise. Algorithmic bias may perpetuate existing social biases, adversely affect disadvantaged populations disproportionately, and entrench institutional discrimination, thereby posing serious ethical issues. The research endeavors to (...)
    7 citations
  29. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - 2025 - Philosophical Studies 182 (1):25-53.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer (...)
    4 citations
  30. Exploring the Ethical Implications of AI Algorithms in Decision-Making Processes.Rashmi Gourkar & Atul Verma - 2024 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 7 (6):11068-11072.
    Artificial intelligence (AI) algorithms are increasingly influencing decision-making processes across various domains. While AI offers undeniable benefits in efficiency and accuracy, its ethical implications necessitate careful consideration. This research paper delves into the ethical landscape of AI algorithms in decision-making. It explores how biases within training data can lead to discriminatory outcomes. The paper further examines the challenge of transparency in AI algorithms, where the rationale behind decisions remains opaque. To ensure responsible AI implementation, the research proposes strategies for mitigating (...)
  31. The Inability of Algorithmic Systems to Discharge the Duty to Act Fairly.Ali Pasha Abdollahi - manuscript
    We examine the normative and epistemological foundations of algorithmic decision-making (ADM) systems. We argue that a data-driven ADM system, by its very design, necessarily fails to discharge the duty to act fairly. This is not an accidental outcome due to flawed data or biased programming, but a necessary result of the system's core logic, which substitutes individualized assessment based on an agent's own actions with a populationalized judgment based on the actions of others. This substitution constitutes a fundamental breach (...)
    1 citation
  32. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - 2024 - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency 1:1078-1092.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
    1 citation
  33. From human resources to human rights: Impact assessments for hiring algorithms.Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two (...)
    8 citations
  34. Adaptive Interventions Reducing Social Identity Threat to Increase Equity in Higher Distance Education: A Use Case and Ethical Considerations on Algorithmic Fairness.Laura Froehlich & Sebastian Weydner-Volkmann - 2024 - Journal of Learning Analytics 11 (2):112-122.
    Educational disparities between traditional and non-traditional student groups in higher distance education can potentially be reduced by alleviating social identity threat and strengthening students’ sense of belonging in the academic context. We present a use case of how Learning Analytics and Machine Learning can be applied to develop and implement an algorithm to classify students as at-risk of experiencing social identity threat. These students would be presented with an intervention fostering a sense of belonging. We systematically analyze the intervention’s intended (...)
    1 citation
  35. The Limits of Reallocative and Algorithmic Policing.Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic model: subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating archetypal bias). This paper begins by considering the normative basis of the relationship between political community and policing. It then examines the justification of reallocative and algorithmic models in light of the relationship between political community and police. Given commitments to the depth and distribution of security—and proscriptions against dehumanizing strategies—the paper concludes that a nonideal-theory priority rule promoting respect for personhood (manifest in community and dignity-promoting policing strategies) is a necessary condition for the justification of the above models.
    5 citations
  36. BIAS OF AI AND CIVIC VIRTUE IN DIGITAL ENVIRONMENT.Wonsup Jung - 2024 - Vietnamese Journal of Philosophy 3 (69):58-66.
    The article discusses two cases of interventions in AI technology, human and algorithmic. The first is the ‘Naver case,’ in which the company was accused by the Korean government of manipulating search algorithms. The case raises the issue of computer engineers’ professional ethics, namely whether to ‘intervene’ in existing algorithms to get ‘better results.’ The second is the chatbot case, ‘Yiruda’ (a Korean chatbot), which produced serious hate speech against socially disadvantaged groups. It raised concerns about abuses of artificial intelligence. Finally, this paper (...)
    3 citations
  37. A philosophical inquiry on the effect of reasoning in A.I models for bias and fairness.Aadit Kapoor - manuscript
    Advances in Artificial Intelligence (AI) have driven the evolution of reasoning in modern AI models, particularly with the development of Large Language Models (LLMs) and their "Think and Answer" paradigm. This paper explores the influence of human reinforcement on AI reasoning and its potential to enhance decision-making through dynamic human interaction. It analyzes the roots of bias and fairness in AI, arguing that these issues often stem from human data and reflect inherent human biases. The paper is structured as (...)
    1 citation
  38. Aggregating Concepts of Fairness and Accuracy in Prediction Algorithms.David Kinney - forthcoming - 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25).
    An algorithm that outputs predictions about the state of the world will almost always be designed with the implicit or explicit goal of outputting accurate predictions (i.e., predictions that are likely to be true). In addition, the rise of increasingly powerful predictive algorithms brought about by the recent revolution in artificial intelligence has led to an emphasis on building predictive algorithms that are fair, in the sense that their predictions do not systematically evince bias or bring about harm to (...)
  39. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in (...)
    3 citations
  40. We might be afraid of black-box algorithms.Carissa Veliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. Durán and Jongsma (2021) have recently sought to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
    4 citations
  41. The Contradiction Trap: A Dialectical and Game-Theoretic Framework for Exposing Structural Bias.J. Atkinson - manuscript
    This paper introduces the contradiction trap: a dialectical and game-theoretic method for exposing concealed asymmetry in institutional and algorithmic reasoning. By forcing a system to reconcile mutually exclusive commitments, the trap converts inconsistency into evidence — a falsifiable signal of structural bias, motivated deviation, or narrative drift. The framework models contradiction as an epistemic game with informational payoffs, providing a portable diagnostic for systems that claim impartiality but behave otherwise. It bridges philosophical logic, applied audit design, and (...)
  42. Bias and Fairness in Machine Learning Models: A Critical Examination of Ethical Implications.Krishna Singh Mishra, Vivaan Chandra Reddy & Saanvi Kumar Kapoor - 2024 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 7 (2):4927-4931.
    Machine learning (ML) models have become integral to decision-making processes across various sectors, including healthcare, finance, and criminal justice. However, these models often inherit and even amplify biases present in training data, leading to unfair outcomes for certain demographic groups. This paper critically examines the ethical implications of bias and fairness in ML models, exploring the sources of bias, its impact on marginalized communities, and the ethical challenges it poses. We review recent literature to identify common biases in (...)
  43. Justice in the age of algorithms: can AI weigh morality?Olivia Ruhil - 2025 - AI and Society 40 (7). Translated by Olivia Ruhil.
    Artificial intelligence (AI) has become a transformative force in the legal domain, automating complex tasks such as contract analysis, compliance checks, and legal research. However, the intersection of AI and moral decision-making exposes significant limitations. Legal systems are not merely instruments for enforcing rules—they are platforms where human morality, intent, and societal impact are weighed. This paper explores the critical question: Can AI truly deliver justice, or does it merely replicate historical biases encoded in training data? Using the concept of (...)
  44. AI Ethics in Legal Decision-Making: Bias, Transparency, and Accountability.J. D. Jelena Vujicic - 2025 - International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 14 (5).
    Artificial Intelligence (AI) systems used in legal decision-making processes have created significant ethical challenges through their integration, leading to problems with bias and necessitating better transparency and accountability measures. This paper investigates the discriminatory effects of algorithmic bias by analyzing AI technologies that learn from historical legal datasets containing potential institutional biases. The opacity of AI decision-making, referred to as "black-boxed" decisions, creates complex obstacles to achieving both explainable judgments and fair outcomes. The article examines the absent (...)
  45. How to Save Face & the Fourth Amendment: Developing an Algorithmic Auditing and Accountability Industry for Facial Recognition Technology in Law Enforcement.Lin Patrick - 2023 - Albany Law Journal of Science and Technology 33 (2):189-235.
    For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations settings standards with respect to its development. To make (...)
    1 citation
  46. Correcting Underrepresentation and Intersectional Bias in Machine Learning.Alexander Tolbert - forthcoming - American Philosophical Quarterly.
    I consider the problem of learning from data corrupted by underrepresentation bias, where positive examples are filtered out at different, unknown rates for a fixed number of sensitive groups. I show that with a small amount of unbiased data, I can efficiently estimate the group-wise drop-out rates, even in settings where intersectional group membership makes learning each intersectional rate computationally infeasible. Using these estimates, I construct a reweighting scheme that allows me to approximate the loss of any hypothesis on (...)
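    A simplified sketch of the general idea in the abstract, written as my own toy rendering rather than the paper's procedure: positive examples from each group are filtered out of the training data at unknown group-specific rates, a small unbiased sample is used to estimate how many positives survived, and the surviving positives are upweighted accordingly; all rates and sample sizes are assumptions.

    # Illustrative only: a toy correction for underrepresentation bias.
    # Positives are dropped at unknown, group-specific rates; a small unbiased sample
    # is used to estimate those rates and reweight the survivors. This is a simplified
    # rendering under stated assumptions, not the algorithm from the paper.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    group = rng.integers(0, 2, n)
    positive = rng.random(n) < 0.4                    # true positive rate, same in both groups
    drop_rate = np.where(group == 0, 0.1, 0.6)        # unknown to the learner
    kept = ~(positive & (rng.random(n) < drop_rate))  # the biased dataset the learner sees

    # Small unbiased sample (assumed available) gives per-group true positive rates.
    unbiased = rng.choice(n, size=2_000, replace=False)

    weights = np.ones(n)
    for g in (0, 1):
        true_rate = positive[unbiased][group[unbiased] == g].mean()
        sel = kept & (group == g)
        biased_rate = positive[sel].mean()
        # Estimated fraction of positives that survived filtering, from the two rates.
        keep_est = biased_rate * (1 - true_rate) / (true_rate * (1 - biased_rate))
        weights[sel & positive] = 1.0 / keep_est      # upweight the surviving positives
        reweighted = np.average(positive[sel], weights=weights[sel])
        print(f"group {g}: estimated keep rate {keep_est:.2f}, reweighted positive rate {reweighted:.2f}")
    # The reweighted positive rates land back near the true 0.4 for both groups.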
  47. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
    4 citations
  48. Cyberethics and Racism: The Critical Need to Address Systemic Racism in Information Technology.Ferdinand Tablan - unknown
    In this paper, I advocate for the integration of systemic racism as a central topic in the education of information technology (IT) students, particularly within Cyberethics curricula. Systemic racism consists of deeply rooted policies, practices, and cultural norms that sustain racial inequality and discrimination regardless of individual intent. It extends beyond personal prejudice to encompass the institutional and structural disadvantages imposed on marginalized racial and ethnic groups. As technology becomes more embedded in everyday life, addressing systemic racism in IT education (...)
  49. Human Instinct Without Human Bias, An AI Framework for Fair Leadership Selection.Zoe Middleton - manuscript
    Human hiring, particularly for leadership roles, is structurally incapable of full fairness. Bias is pre-conscious, automatic, socially reinforced and resistant to training. Even well-intentioned decision makers misread communication through the lens of identity. This paper proposes an AI-mediated framework for leadership selection that removes identity cues while preserving the human communicative qualities that matter in real workplaces. The system evaluates candidates through voice-mediated, identity-blind interviews, scenario-based reasoning analysis, evidence-grounded trait assessment and anti-mimicry safeguards (...)
  50. From the Eyeball Test to the Algorithm — Quality of Life, Disability Status, and Clinical Decision Making in Surgery.Charles Binkley, Joel Michael Reynolds & Andrew Shuman - 2022 - New England Journal of Medicine 387 (14):1325-1328.
    Qualitative evidence concerning the relationship between QoL and a wide range of disabilities suggests that subjective judgments regarding other people’s QoL are wrong more often than not and that such judgments by medical practitioners in particular can be biased. Guided by their desire to do good and avoid harm, surgeons often rely on "the eyeball test" to decide whether a patient will or will not benefit from surgery. But the eyeball test can easily harbor a range of implicit judgments and (...)
    3 citations
1 — 50 / 987