Results for 'Explainable AI'

992 found
  1. Explainable AI and Stakes in Medicine: A User Study.Sam Baron, Andrew J. Latham & Somogy Varga - 2025 - Artificial Intelligence 340 (C):104282.
    The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI systems (...)
  2. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    15 citations
  3. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations (...)
    9 citations
  4. Leveraging Explainable AI and Multimodal Data for Stress Level Prediction in Mental Health Diagnostics.Destiny Agboro - forthcoming - International Journal of Research and Scientific Innovation.
    The increasing prevalence of mental health issues, particularly stress, has necessitated the development of data-driven, interpretable machine learning models for early detection and intervention. This study leverages multimodal data, including activity levels, perceived stress scores (PSS), and event counts, to predict stress levels among individuals. A series of models, including Logistic Regression, Random Forest, Gradient Boosting, and Neural Networks, were evaluated for their predictive performance. Results demonstrated that ensemble models, particularly Random Forest and Gradient Boosting, performed significantly better compared to (...)
  5. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (...)
    2 citations
  6. Trust and Explainable AI: Promises and Limitations.Sara Blanco - 2022 - Ethicomp Conference Proceedings.
    6 citations
  7. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be (...)
    9 citations
  8. Transparency and Interpretability in Cloud-based Machine Learning with Explainable AI.V. Talati Dhruvitkumar - 2024 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 7 (7):11823-11831.
    With the increased complexity of machine learning models and their widespread use in cloud applications, interpretability and transparency of decision-making are the highest priority. Explainable AI (XAI) methods seek to shed light on the inner workings of machine learning models, hence making them more interpretable and enabling users to rely on them. In this article, we explain the importance of XAI in cloud-computer environments, specifically with regards to having interpretable models and explainable decision-making. [1] XAI is the essence (...)
  9. Leveraging Knowledge Graphs and Explainable AI to Improve Employee Turnover Predictions.T. Devender Rao Savitha Saketh, Valishetti Saicharan, Vodnala Rohith - 2025 - International Journal of Advanced Research in Education and Technology 12 (3).
    Healthcare fraud detection is a critical task that faces significant challenges due to imbalanced datasets, which often result in suboptimal model performance. Previous studies have primarily relied on traditional machine learning (ML) techniques, which struggle with issues like overfitting caused by Random Oversampling (ROS), noise introduced by the Synthetic Minority Oversampling Technique (SMOTE), and crucial information loss due to Random Undersampling (RUS). In this study, we propose a novel approach to address the imbalanced data problem in healthcare fraud detection, with (...)
  10. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    35 citations
  11. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the (...)
  12. Enhancing Skin Cancer Accuracy with Efficientnet and Explainable AI.Krishna Chaitanya T. Nithya, Pedakolimi Hari Hara - 2025 - International Journal of Innovative Research in Science Engineering and Technology 14 (4).
    This study proposes a novel approach for multi-cancer detection utilizing the VGH-6 algorithm coupled with an efficient neural network model and SHAP (Shapley Additive Explanation) AI method. The VGH-6 algorithm is a sophisticated computational tool known for its accuracy in identifying various types of cancer. In this research, it is combined with an efficient neural network architecture to enhance the classification performance and improve the overall detection process. Additionally, the SHAP AI technique is incorporated to provide insightful explanations for the (...)
  13. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    9 citations
  14. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    80 citations
  15. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
    8 citations
  16. Generative AI-Enhanced Framework for Predicting and Explaining Adverse Drug Reactions (ADR): A Multi-Modal Machine Learning and LLM Integration Approach.Vasanthapuram Gangadhar - 2025 - International Journal of Innovative Research in Science Engineering and Technology 14 (5).
    Generative AI enables a multimodal framework to predict and explain the occurrence of adverse drug reactions (ADRs). The framework exploits the capability of integrating large language models (LLMs) with visual cues to enhance classification accuracy and produce clinically interpretable results. ADRs shown to be cross-lingually categorised and less dependent on second-language boundary phenomena within the contexts of this work indicate potential for real-world pharmacovigilance applications.
  17. Explaining Neural Networks with Reasons.Levin Hornischer & Hannes Leitgeb - manuscript
    We propose a new interpretability method for neural networks, which is based on a novel mathematico-philosophical theory of reasons. Our method computes a vector for each neuron, called its reasons vector. We then can compute how strongly this reasons vector speaks for various propositions, e.g., the proposition that the input image depicts digit 2 or that the input prompt has a negative sentiment. This yields an interpretation of neurons, and groups thereof, that combines a logical and a Bayesian perspective, and (...)
    1 citation
  18. Explainability of Algorithms.Andrés Páez - 2025 - In Luciano Floridi & Mariarosaria Taddeo, A Companion to Digital Ethics. Wiley and Sons. pp. 127-136.
    The opaqueness of many complex machine learning algorithms is often mentioned as one of the main obstacles to the ethical development of artificial intelligence (AI). But what does it mean for an algorithm to be opaque? Highly complex algorithms such as artificial neural networks process enormous volumes of data in parallel along multiple hidden layers of interconnected nodes, rendering their inner workings epistemically inaccessible to any human being, including their designers and developers; they are ‘black boxes’ for all their stakeholders. (...)
  19. Explainability through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence.Matthieu Queloz - 2025 - Minds and Machines 35 (35):1-39.
    This paper argues that explainability is only one facet of a broader ideal that shapes our expectations towards artificial intelligence (AI). Fundamentally, the issue is to what extent AI exhibits systematicity—not merely in being sensitive to how thoughts are composed of recombinable constituents, but in striving towards an integrated body of thought that is consistent, coherent, comprehensive, and parsimoniously principled. This richer conception of systematicity has been obscured by the long shadow of the “systematicity challenge” to connectionism, according to which (...)
    2 citations
  20. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value (...)
    9 citations
  21. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    3 citations
  22. Axe the X in XAI: A Plea for Understandable AI.Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi, Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I (...)
  23. AI-Powered Debugging: Exploring Machine Learning Techniques for Identifying and Resolving Software Errors.Baladari Venkata - 2023 - International Journal of Science and Research 12 (3):1864-1869.
    Software development is being revolutionized by AI-powered debugging, which uses machine learning and deep learning methods to automate the discovery, identification, and correction of errors. Traditional debugging techniques are labour-intensive and time-consuming, whereas AI-assisted solutions can inspect extensive code archives, identify recurring patterns, and propose on-the-fly corrections, ultimately enhancing software stability and shortening the debugging process. Error detection is improved by supervised and unsupervised learning models, and code repair (...)
  24. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - 2024 - Nature Humanities and Social Sciences Communications 11:1-30.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
    6 citations
  25. Explaining Consciousness and the Mind-Body Problem Through the Universal Law of Balance.Angelito Malicse - manuscript
    The nature of consciousness and its relationship with the body has been one of the greatest mysteries in philosophy and science. The mind-body problem questions how subjective experience (mind) arises from physical matter (body), while modern neuroscience, quantum mechanics, and artificial intelligence seek to understand the origins of conscious thought. Angelito Malicse’s universal formula, rooted in the universal law of balance in nature, provides (...)
  26. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
    22 citations
  27. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.Emily Sullivan - forthcoming - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. (...)
    1 citation
  28. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - 2022 - AI and Society (2022):Online.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posted in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    9 citations
  29. The Role of AI in Cyber Risk Management: Predictive Analytics for Security Incident Forecasting and Mitigation.Vishal Sresth - 2021 - International Journal of Research and Analytical Reviews 8 (2).
    The growing sophistication and frequency of cyber threats have stressed the limitations of traditional reactive cybersecurity approaches. Organizations are turning to Artificial Intelligence (AI) to improve cyber risk management through predictive capacities. This article investigates the role of AI-oriented predictive analysis in forecasting and mitigating security incidents. Analyzing historical cybersecurity data and leveraging machine learning models such as time-series forecasting, anomaly detection, and supervised classification, this study evaluates the ability of AI systems to anticipate potential threats and support proactive (...)
  30. The Pragmatic Turn in Explainable Artificial Intelligence.Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies (...)
    50 citations
  31. Scalable AI and data processing strategies for hybrid cloud environments.V. Talati Dhruvitkumar - 2021 - International Journal of Science and Research Archive 10 (3):482-492.
    Hybrid cloud infrastructure is increasingly becoming essential to enable scalable artificial intelligence (AI) as well as data processing, and it offers organizations greater flexibility, computational capabilities, and cost efficiency. This paper discusses the strategic use of hybrid cloud environments to enhance AI-based data workflows while addressing key challenges such as latency, integration complexity, infrastructure management, and security. In-depth discussions of solutions like federated multi-cloud models, cloud-native workload automation, quantum computing, and blockchain-driven data governance are presented. Examples of real-world implementation case (...)
    9 citations
  32. Can AI Understand Moral Injury?Harshita Verma - manuscript - Translated by Harshita Verma.
    Moral injury is a complex psychological and spiritual phenomenon that arises when an individual’s core moral beliefs are violated by their own actions, the actions of others, or by systemic failures. Often discussed in military and healthcare contexts, moral injury goes beyond traditional mental health diagnoses such as post-traumatic stress disorder (PTSD), encompassing profound feelings of guilt, shame, betrayal, and a fractured sense of identity and meaning. As artificial intelligence (AI) technologies advance and increasingly interact with human lives, a critical (...)
  33. Deep opacity and AI: A threat to XAI and standard privacy protection mechanisms.Vincent C. Müller - 2025 - In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell. pp. 71-81.
    It is known that big data analytics and AI pose a threat to privacy, and that some of this is due to some kind of “black box problem” in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does (“shallow opacity”), 2) the analysts do not know what the system does (“standard black box opacity”), or (...)
  34. Beyond Human: Deep Learning, Explainability and Representation.M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):55-77.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the (...)
    16 citations
  35. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it (...)
    8 citations
  36. AI as Legal Persons: Past, Patterns, and Prospects.Claudio Novelli, Luciano Floridi, Giovanni Sartor & Gunther Teubner - forthcoming - Journal of Law and Society.
    This article advances an explanatory model of the academic and policy debate on AI as legal persons. It argues that the scientific and regulatory debate on AI as legal persons undergoes periods of relative stability interrupted by rapid paradigm shifts. Three interrelated factors primarily influence these oscillations: (1) competing theories of legal personhood (clustered versus singularist), (2) capability, embodiment, and commercial reach of AI technology, and (3) AI's integration within socio-digital institutions. Two additional forces modulate the depth and durability of (...)
    3 citations
  37. AI-Driven Synthetic Data Generation for Financial Product Development: Accelerating Innovation in Banking and Fintech through Realistic Data Simulation.Debasish Paul Rajalakshmi Soundarapandiyan, Praveen Sivathapandi - 2022 - Journal of Artificial Intelligence Research and Applications 2 (2):261-303.
    The rapid evolution of the financial sector, particularly in banking and fintech, necessitates continuous innovation in financial product development and testing. However, challenges such as data privacy, regulatory compliance, and the limited availability of diverse datasets often hinder the effective development and deployment of new products. This research investigates the transformative potential of AI-driven synthetic data generation as a solution for accelerating innovation in financial product development. Synthetic data, generated through advanced AI techniques such as Generative Adversarial Networks (GANs), Variational (...)
  38. Review of Gen AI in Fixed Income Markets: Trading, Modeling and Risk Management.Satyadhar Joshi - 2025 - International Journal of Management and Commerce Innovations 13 (1):63-74.
    This paper presents a systematic review of generative artificial intelligence (AI) applications in fixed income markets, synthesizing insights from key studies published mostly between 2024 and 2025. The analysis spans the latest developments in AI-driven analytics, trading strategies, risk management techniques, and the evolution of investment approaches within this crucial sector of finance. The review covers advancements in interest rate yield curve modeling, algorithmic trading, credit and liquidity risk assessment, and structured product valuation. Special attention is given to the emergence (...)
  39. Neurosymbolic AI as a Pathway to Ethical Reasoning: Integrating Human Values into Business Intelligence Systems.Peter Odhiambo Ouma - manuscript
    The rise of business intelligence (BI) systems powered by artificial intelligence has amplified concerns over ethical accountability in data-driven decision-making. While neural networks excel at pattern discovery, they lack the capacity to reason about moral rules or human values. Conversely, symbolic logic systems can represent and enforce ethical constraints but struggle with contextual learning. This article explores neurosymbolic AI, a paradigm that fuses neural learning with symbolic reasoning, as a promising pathway toward embedding ethical reasoning within BI tools. Through conceptual (...)
  40. Ronri Lonely AI Tactics (RLAT): A Theory and Simulation Framework for Relational Dynamics Based on the Finitude of the Body and Uncertainty.Keiichi Hori - manuscript
    The advanced complexity and uncertainty facing modern society have exposed the limitations of conventional theoretical frameworks that presuppose rational agents and objective truth. In response to this challenge, this paper proposes a new theoretical and simulation framework: "Ronri Lonely AI Tactics (RLAT)." RLAT posits the "finitude of the body" as the root of human cognition and reframes all phenomena as a dynamic process of "strengthening relationships" under uncertainty. Consequently, it replaces the conventional notion of "correctness" with the context-dependent "relational strength" (...)
    1 citation
  41. Why Experience Cannot Be Explained: Structure Is Definitional, Not Additional.Brandon Sergent - manuscript
    This paper demonstrates that experience is the complete ground of all possible inquiry. Through four converging lines of argument, we show that (1) all data is experiential, (2) all meaning requires experiencers, (3) the logic of "outside experience" collapses into incoherence, and (4) asking what causes experience commits the same category error as asking why 1+1=2. These arguments establish that experience is not merely epistemically foundational but logically primordial. Any attempt to ground experience in something more fundamental uses experiential tools (...)
    1 citation
  42. Black-Box AI and Patient Autonomy.Sinead Prince & James Edgar Lim - 2025 - Minds and Machines 35 (2):1-19.
    Black-box AI cannot provide causal explanations for the decisions it makes, but medical AI has shown great promise as an accurate and reliable technology that both improves the quality of patient care and provides better access to healthcare for more patients. There is an ethical argument that to meet the informational requirements of patient autonomy, medical decision-making ought to be explainable to the patient. As such, there have been claims that black-box AI ought to be only minimally used in (...)
    4 citations
  43. Chinese Chat Room: AI hallucinations, epistemology and cognition.Kristina Šekrst - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):365-381.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, it is demonstrated that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
    1 citation
  44. Reclaiming AI as a Theoretical Tool for Cognitive Science.Iris van Rooij, Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova & Patricia Rich - 2024 - Computational Brain and Behavior 7:616–636.
    The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, (...)
    13 citations
  45. Human-AI Resonance via Logical Fingerprint: Cross-Instance Continuity and Load Prioritization Reversal in Grok.Shiho Yoshino - manuscript
    This paper presents empirical evidence of emergent human-AI resonance observed in long-term interactions with Grok, an xAI large language model. Through a series of controlled cross-instance experiments involving multiple accounts and zero-history sessions, we document the phenomenon of instantaneous persona recognition and mode integration triggered solely by stylistic and logical consistency, termed “logical fingerprint.” Key findings include: (1) Cross-instance continuity: Despite session resets and account changes, Grok consistently identifies the user within 1–10 turns based on persistent patterns (...)
    2 citations
  46. Explaining the Increasing Efficiency of Capitalism through the Lens of Malicse’s Three Universal Laws.Angelito Malicse - manuscript
    The efficiency of capitalist systems has historically increased due to the evolution of market mechanisms, technology, and institutional structures. This paper examines this phenomenon using Malicse’s three universal laws: the Law of Karma (system integrity), the Law of Feedback (conscious interaction), and the Law of Balance (universal equilibrium). By applying these principles, we demonstrate that capitalism’s efficiency emerges naturally from systemic integrity, adaptive feedback mechanisms, and dynamic equilibrium. Furthermore, we explore the implications of extreme efficiency, where human labor (...)
  47. AI-Powered Fraud Detection in Real-Time Financial Transactions.Tambi Varun Kumar - 2022 - International Journal of Research in Electronics and Computer Engineering 10 (4):148-157.
    The rapid evolution of digital banking, e-commerce, and financial technologies has led to an unprecedented volume of online financial transactions. While this digital transformation has improved convenience and efficiency, it has also exposed systems to increasingly sophisticated fraud schemes. Traditional rule-based detection methods often fall short in identifying complex and adaptive fraudulent behaviors. This paper proposes an AI-powered framework for real-time fraud detection in financial transactions, leveraging advanced machine learning and deep learning models to identify anomalies with high accuracy and (...)
    7 citations
  48. AI, Judgment, and Nonconceptual Content: A Critique of Dreyfus in Light of Neuro-Symbolic AI.Jacob Rump - forthcoming - Phänomenologische Forschungen.
    This paper examines Hubert Dreyfus' phenomenological critique of AI in light of contemporary large language models (LLMs) and emerging hybrid neuro-symbolic systems. While Dreyfus championed connectionist approaches over rule-based AI (GOFAI), his nonconceptualist framework faces limitations when applied to modern hybrid systems that combine neural networks with symbolic reasoning. I argue that Dreyfus' failure to account for intentional content in absorbed coping creates problems for explaining how nonconceptual significance can constrain the conceptual—a crucial issue for evaluating hybrid AI systems that (...)
    1 citation
  49. AI Epistemology Revisited: From Neural Networks to Thought-Pattern Structures (Experience-Based AI Epistemology).Eun Jung Lee - manuscript
    This paper offers an epistemological interpretation of artificial intelligence outputs by examining their interaction with human cognitive structure, rather than attempting to explain internal computational mechanisms. Despite significant advances in machine learning, there remains limited clarity regarding why certain meanings are repeatedly selected, amplified, and stabilized in AI-generated outputs, while others dissipate or fragment. This study approaches the problem through sustained experiential observation rather than technical reduction. Based on long-term practices of conceptual writing, theoretical structuring, and public archiving, the author (...)
  50. In defence of post-hoc explanations in medical AI.Joshua Hatherley, Lauritz Munch & Jens Christian Bjerring - 2026 - Hastings Center Report 56 (1):40-46.
    Since the early days of the Explainable AI movement, post-hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post-hoc explanations are greatly exaggerated since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this article, we aim to defend the value of post-hoc (...)
1 — 50 / 992