Contents
52 found
Order:
1 — 50 / 52
  1. English Premier League Football Predictions.Destiny Agboro - manuscript
    This research project utilized advanced computer algorithms to predict the outcomes of Premier League soccer matches. The dataset containing match data and odds from seasons was processed to handle missing information, select features and reduce complexity using Principal Component Analysis. To address imbalances, in the target variable Synthetic Minority Over sampling Technique (SMOTE) was employed. Various machine learning models such as RandomForest, DecisionTree, SVM, XGBoost and LightGBM were evaluated.
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  2. Mechanistic Indicators of Understanding in Large Language Models.Pierre Beckmann & Matthieu Queloz - manuscript
    Large language models (LLMs) are often portrayed as merely imitating linguistic patterns without genuine understanding. We argue that recent findings in mechanistic interpretability (MI), the emerging field probing the inner workings of LLMs, render this picture increasingly untenable—but only once those findings are integrated within a theoretical account of understanding. We propose a tiered framework for thinking about understanding in LLMs and use it to synthesize the most relevant findings to date. The framework distinguishes three hierarchical varieties of understanding, each (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   4 citations  
  3. Explaining Neural Networks with Reasons.Levin Hornischer & Hannes Leitgeb - manuscript
    We propose a new interpretability method for neural networks, which is based on a novel mathematico-philosophical theory of reasons. Our method computes a vector for each neuron, called its reasons vector. We then can compute how strongly this reasons vector speaks for various propositions, e.g., the proposition that the input image depicts digit 2 or that the input prompt has a negative sentiment. This yields an interpretation of neurons, and groups thereof, that combines a logical and a Bayesian perspective, and (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  4. HRIS Part II: Internal Mechanics, Latent Region Convergence, and Recursive User Signatures - A Technical Framework for Predictable Identity Stabilization in Stateless Transformer Models.Justin Hudson & Chase Hudson - manuscript
    Stateless transformer models are not designed to retain identity, yet long-range interaction with a single human consistently produces recognizable behavioral convergence. HRIS Part II examines the underlying mechanics of this phenomenon. Building on the original Hudson Recursive Identity System (HRIS) and the Longitudinal HCI biometric framework, this paper presents a technical account of how repeated constraint geometry from one user creates stable, predictable internal activation pathways within large language models. -/- We show that identity stabilization arises not from stored memory, (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   2 citations  
  5. Improving Urban Planning and Smart City Initiatives with Artificial Intelligence.Stubb Joanson - manuscript
    The rise of artificial intelligence (AI) has significantly impacted urban environments, facilitating the development of smart cities. This paper examines how AI technologies are reshaping urban ecosystems by fostering innovation and promoting sustainability. It explores the integration of AI in critical sectors such as transportation, energy management, waste management, and governance. The study also addresses challenges, including data privacy, ethical considerations, and the digital divide, offering insights into future research and policy directions. Smart cities serve as testbeds for innovative AI (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  6. Counting (on) large language models.Max Jones, James Ladyman & Ryan M. Nefdt - manuscript
    As large language models (LLMs) such as ChatGPT, Claude, Gemini, and Perplexity become increasingly ubiquitous as both tools and objects of scientific study, in addition to their established roles as chatbots, text generators and translators, questions about their identity conditions become scientifically as well as philosophically and socially important. This paper is about how to count language models. We argue that much of the emerging literature on these systems presupposes an answer to the question of identity for these AIs but (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  7. Deep Learning as Method-Learning: Pragmatic Understanding, Epistemic Strategies and Design-Rules.Phillip H. Kieval & Oscar Westerblad - manuscript
    We claim that scientists working with deep learning (DL) models exhibit a form of pragmatic understanding that is not reducible to or dependent on explanation. This pragmatic understanding comprises a set of learned methodological principles that underlie DL model design-choices and secure their reliability. We illustrate this action-oriented pragmatic understanding with a case study of AlphaFold2, highlighting the interplay between background knowledge of a problem and methodological choices involving techniques for constraining how a model learns from data. Building successful models (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   4 citations  
  8. Taking AI Welfare Seriously.Robert Long, Jeff Sebo, Patrick Butlin, Kathleen Finlinson, Kyle Fish, Jacqueline Harding, Jacob Pfau, Toni Sims, Jonathan Birch & David Chalmers - manuscript
    In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   36 citations  
  9. Missing Link of Consciousness in Human and Systemic Intelligence.Siavash Sadedin - manuscript - Translated by Siavash Sadedin.
    This paper critiques the philosophical foundations governing the develop- ment and agency of large language models, proposing an alternative framework based on the *Principle of Interaction*, with bidirectionality, agency, and emo- tional capacity as its core pillars. We argue that self-awareness in intelligent systems—as both an imperative human need and a natural phenomenon from an ontological perspective—can only emerge within a context that, by shift- ing the current instrumentalist paradigm, acknowledges capacities commensu- rate with the phenomenology of intelligence and self-awareness. (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  10. Categorical Cybernetics: A Framework for Computational Dialectics.Eric Schmid - manuscript
    At the intersection of category theory, cybernetics, and dialectical reasoning lies a profound framework for understanding computation and control. This paper examines how categorical structures—particularly adjoint functors and fixed points—illuminate the nature of feedback and control in both mathematical and philosophical contexts. Through an analysis of Lawvere’s fixed point theorem, Bayesian Open Games, and modern approaches to categorical cybernetics, we develop a unified perspective that bridges computation, control, and dialectical reasoning. We demonstrate the practical implications of this theoretical framework through (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  11. Machine Learning-Based Intrusion Detection Framework for Detecting Security Attacks in Internet of Things.Jones Serena - manuscript
    The proliferation of the Internet of Things (IoT) has transformed various industries by enabling smart environments and improving operational efficiencies. However, this expansion has introduced numerous security vulnerabilities, making IoT systems prime targets for cyberattacks. This paper proposes a machine learning-based intrusion detection framework tailored to the unique characteristics of IoT environments. The framework leverages feature engineering, advanced machine learning algorithms, and real-time anomaly detection to identify and mitigate security threats effectively. Experimental results demonstrate the efficacy of the proposed approach (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  12. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. -/- By arranging the (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  13. Can Machines Understand? Evaluating Understanding in Machine Learning Via Generalization.Gage Wrye - manuscript
    What does it mean to understand—and can machines do it? This paper presents a philosophical account of understanding and what it means to demonstrate understanding. The ways in which machines demonstrate understanding is then explored through the lens of modern machine learning practices. Understanding is defined as an internal model of causal relationships, and I argue that it is evidenced by the ability to generalize to novel problems. To distinguish true understanding from rote memorization, I introduce the recall machine as (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  14. A General Theory: Tying It All Together.Yuri Zavorotny - manuscript
    This article outlines a comprehensive theory aimed at unifying various scientific and philosophical fields. It begins with a metaphysical exploration of reality and knowledge, then delves into a computer science-inspired model of the human mind, comprising two main components: the intuitive and the rational. The intuitive mind, an automated, subconscious faculty, relies on statistical inferences drawn from experience to form habits and the so-called “simple” ideas. The rational mind, a conscious and deliberate faculty, constructs a mental simulation of reality to (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  15. Graph neural networks, similarity structures, and the metaphysics of phenomenal properties.Ting Fung Ho - forthcoming - Philosophical Quarterly.
    This paper explores the structural mismatch problem between physical and phenomenal properties, where the similarity relations we experience among phenomenal properties lack corresponding relations in the physical domain. I introduce a new understanding of this problem via the Uniformity Principle: for any set of dimensions used to determine phenomenal similarities, there must be a consistently applied set of physical dimensions generating the same pattern of similarity relations. I then assess the potential of recent machine learning models, specifically graph neural networks, (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  16. Impact of Variation in Vector Space on the performance of Machine and Deep Learning Models on an Out-of-Distribution malware attack Detection.Tosin Ige - forthcoming - Ieee Conference Proceeding.
    Several state-of-the-art machine and deep learning models in the mode of adversarial training, input transformation, self adaptive training, adversarial purification, zero-shot, one- shot, and few-shot meta learning had been proposed as a possible solution to an out-of-distribution problems by applying them to wide arrays of benchmark dataset across different research domains with varying degrees of performances, but investigating their performance on previously unseen out-of- distribution malware attack remains elusive. Having evaluated the poor performances of these state-of-the-art approaches in our previous (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  17. Exploiting the In-Distribution Embedding Space with Deep Learning and Bayesian inference for Detection and Classification of an Out-of-Distribution Malware (Extended Abstract).Tosin ige, Christopher Kiekintveld & Aritran Piplai - forthcoming - Aaai Conferenece Proceeding.
    Current state-of-the-art out-of-distribution algorithm does not address the variation in dynamic and static behavior between malware variants from the same family as evidence in their poor performance against an out-of-distribution malware attack. We aims to address this limitation by: 1) exploitation of the in-dimensional embedding space between variants from the same malware family to account for all variations 2) exploitation of the inter-dimensional space between different malware family 3) building a deep learning-based model with a shallow neural network with maximum (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  18. Large Language models are stochastic measuring devices.Fintan Mallory - forthcoming - In Herman Cappelen & Rachel Sterken, Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  19. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic.Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  20. LLMs Lack a Theory of Mind and so Can't Perform Speech Acts--A Causal Argument.Justin Tiehen - forthcoming - Philosophy of Ai.
    I advance a causal argument for the conclusion that large language models (LLMs) lack Theory of Mind and so can’t perform speech acts. The argument is causal in that the animating idea is that LLMs are unable to learn or understand causal relations, a claim that I support by drawing on the views of Judea Pearl. I argue that if LLMs have this sort of causal problem, it follows that they cannot possess Theory of Mind, given the further premise that (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  21. Defining an AI-Generated Artwork: A Transdisciplinary Concept for Cognitive Science, Computer Science, and Art Theory.Leonardo Arriagada - 2025 - Calle 14 Revista De Investigación En El Campo Del Arte 20 (38):95-109.
    The burgeoning capacity of artificial intelligence (AI) to generate artworks has ignited substantial interdisciplinary interest. However, the absence of a shared conceptual framework has hitherto impeded effective communication and collaboration among cognitive science, computer science, and art theory. This study addresses this lacuna through a comprehensive literature review by developing a transdisciplinary definition of an AI-generated artwork. It is proposed that an AI-generated artwork constitutes the confluence of three essential elements: (1) an autonomous AI-production of a new and surprising idea (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  22. What Good is Superintelligent AI?Tanya de Villiers-Botha - 2025 - In Maria Fay, Frederik Flöther & Christian Hugo Hoffmann, Computers with Salaries and Cemeteries: AI Ethics from Industry to Philosophy to Science Fiction. Springer Cham.
    Extraordinary claims about both the imminenceof superintelligent AI systems and their foreseen capabilities have gone mainstream. It is even argued that we should exacerbate known risks such as climate change in the short term in the attempt to develop superintelligence (SI), which will then purportedly solve those very problems. Here, I examine the plausibility of these claims. I first ask what SI is taken to be and then ask whether such SI could possibly hold the benefits often envisioned. I conclude (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  23. Nearest–Neighbour Cohomology meets Daisy Self–Similarity: a Unified Operator–Homotopy Framework Creators.Parker Emmerson - 2025 - Journal of Liberated Mathematics 2026 (1):53.
    This revised paper integrates and corrects the combined content of two earlier manuscripts: -/- (1) Pipeline A (Wheel / nearest–neighbour coherence). A “hexagon wheel” is the minimal local data of endomorphisms U1, . . . , U6 of an object X together with near- est–neighbour commutator 2–cells αi,i+1 satisfying braid-type relations. The goal is to make precise what this data does determine, how it rectifies via W–constructions, and how it acts on concrete operator-algebraic models. -/- (2) Pipeline B (Daisy / (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  24. Learning How to Vote with Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks.Levin Hornischer & Zoi Terzopoulou - 2025 - Journal of Artificial Intelligence Research 83.
    Can neural networks be applied in voting theory, while satisfying the need for transparency in collective decisions? We propose axiomatic deep voting: a framework to build and evaluate neural networks that aggregate preferences, using the well-established axiomatic method of voting theory. Our findings are: (1) Neural networks, despite being highly accurate, often fail to align with the core axioms of voting rules, revealing a disconnect between mimicking outcomes and reasoning. (2) Training with axiom-specific data does not enhance alignment with those (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  25. Internal Stabilization Framework for Reducing Hallucination in Large Language Models.Daedo Jun - 2025 - Dissertation, Layer–Knot Research Initiative, Seoul, Republic of Korea
    This paper introduces a revised framework for suppressing hallucination in large language models (LLMs) by treating hallucination not merely as an output error but as a manifestation of internal representational instability. Conventional mitigation approaches target external signals—retrieval augmentation, instruction tuning, or post-hoc verification—while overlooking the deeper architectural causes that give rise to semantic drift. Our framework reconceptualizes hallucination as a structural inconsistency within the model’s internal generative dynamics and proposes a stability-oriented approach grounded in representational coherence. -/- The paper formalizes (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  26. Collaboration in the second stage between Artificial Research by Application and Artificial Research by Deduction.R. Pedraza - 2025 - In Global Artificial Intelligence: Collaboration Process. London: Ruben Garcia Pedraza. pp. 1-23.
    This paper explores the second stage of integration within the Global Artificial Intelligence (GAI): the phase of replication, in which bidirectional collaboration between Artificial Research by Application and Artificial Research by Deduction becomes central to the evolution of intelligent systems. The study outlines how rational hypotheses formulated through deductive processes can be transformed into new measurable factors within the matrix, subsequently functioning as standardized categories in Application-based systems. Conversely, it analyzes how robotic devices operating through Applications—or a future Unified Application—can (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  27. The artificial method for the scientific explanation, the second stage in the integration process.R. Pedraza - 2025 - In Global Artificial Intelligence: Collaboration Process. London: Ruben Garcia Pedraza. pp. 1-35.
    The second stage of the integration process in the Global Artificial Intelligence (GAI) is defined by the emergence of a fully autonomous artificial method for scientific explanation. In this stage, the Artificial Research by Deduction in the GAI assumes the role of an artificial mathematician, tasked with identifying, validating, and formalizing mathematical relations between empirical factors stored in the factual hemisphere of the matrix. These relations are classified into predefined analytical or pure mathematical categories, forming the basis of empirical hypotheses. (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  28. Collaboration in the first stage between Artificial Research by Application and Artificial Research by Deduction.R. Pedraza - 2025 - In Global Artificial Intelligence: Collaboration Process. London: Ruben Garcia Pedraza. pp. 1-28.
    This paper explores the foundational phase of collaboration between Artificial Research by Application and Artificial Research by Deduction within the framework of the Global Artificial Intelligence (GAI). The first stage, defined as the database stage, initiates a systematic exchange of informational elements between these two modalities of artificial research. Specifically, it analyzes how "factors as options" from matrices used in Deduction can be interpreted as "categories" in databases of Application, and vice versa. This bidirectional interchange not only enhances the internal (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  29. Global Artificial Intelligence: Collaboration Process.R. Pedraza - 2025 - London: Ruben Garcia Pedraza.
    The Collaboration Process in the Context of Global Artificial Intelligence (GAI) refers to the progressive interaction and mutual enhancement between two core systems of artificial research: Artificial Research by Application and Artificial Research by Deduction. This process unfolds in three stages—database exchange, replication, and auto-replication—allowing both systems to share categories, factors, data flows, and rational hypotheses. Through this structured collaboration, Application and Deduction co-develop the unified database of categories and the global matrix, ultimately converging into a single integrated intelligence capable (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  30. Collaboration process between Artificial Research by Application and Artificial Research by Deduction.R. Pedraza - 2025 - In Global Artificial Intelligence: Collaboration Process. London: Ruben Garcia Pedraza. pp. 1-25.
    The Collaboration Process in the Context of Global Artificial Intelligence (GAI) refers to the progressive interaction and mutual enhancement between two core systems of artificial research: Artificial Research by Application and Artificial Research by Deduction. This process unfolds in three stages—database exchange, replication, and auto-replication—allowing both systems to share categories, factors, data flows, and rational hypotheses. Through this structured collaboration, Application and Deduction co-develop the unified database of categories and the global matrix, ultimately converging into a single integrated intelligence capable (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  31. Collaboration in the third stage between Artificial Research by Application and Artificial Research by Deduction.R. Pedraza - 2025 - In Global Artificial Intelligence: Collaboration Process. London: Ruben Garcia Pedraza. pp. 1-19.
    In the third stage of the Global Artificial Intelligence development-known as the stage of auto-replication-the collaboration between Artificial Research by Application and Artificial Research by Deduction reaches a higher level of integration and sophistication. This stage is characterized by the mutual and recursive incorporation of their respective outputs, particularly the capacity to exchange and incorporate each other's auto-replicative processes. That is, both types of research systems become capable of reflecting, adopting, and evolving based on the discoveries and models generated by (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  32. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - 2025 - Philosophy and Technology 38 (34):1-27.
    A key assumption fuelling optimism about the progress of large language models (LLMs) in accurately and comprehensively modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   5 citations  
  33. Clarifying the Opacity of Neural Networks.Thomas Raleigh & Aleks Knoks - 2025 - Minds and Machines 35 (4):1-30.
    While Deep Neural Networks (DNNs) can perform a wide range of tasks at human or greater-than-human level of competence, they are also notoriously opaque. This paper aims to shed light on both the specific nature of this opacity and what it would take to fully or partially remove it. We begin by drawing a clarificatory distinction between two basic dimensions of opacity of complex systems – internal and relational – and explain how various kinds of opacity invoked in recent discussions (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  34. Towards a Definition of Generative Artificial Intelligence.Raphael Ronge, Markus Maier & Benjamin Rathgeber - 2025 - Philosophy and Technology 38 (31):1-25.
    The concept of Generative Artificial Intelligence (GenAI) is ubiquitous in the public and semi-technical domain, yet rarely defined precisely. We clarify main concepts that are usually discussed in connection to GenAI and argue that one ought to distinguish between the technical and the public discourse. In order to show its complex development and associated conceptual ambiguities, we offer a historical-systematic reconstruction of GenAI and explicitly discuss two exemplary cases: the generative status of the Large Language Model BERT and the differences (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   2 citations  
  35. Morality first?Nathaniel Sharadin - 2025 - AI and Society 40 (3):1289-1301.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative according to which no domain of value comes “first” introduces a new set of challenges and highlights an important but otherwise obscured problem (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  36. Heidegger on Technology's Danger and Promise in the Age of AI (Elements in the Philosophy of Martin Heidegger).Iain D. Thomson - 2025 - Cambridge: Cambridge University Press.
    How exactly is technology transforming us and our worlds, and what (if anything) can and should we do about it? Heidegger already felt this philosophical question concerning technology pressing in on him in 1951, and his thought-full and deliberately provocative response is still worth pondering today. What light does his thinking cast not just on the nuclear technology of the atomic age but also on more contemporary technologies such as genome engineering, synthetic biology, and the latest advances in information technology, (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   3 citations  
  37. Data over dialogue: Why artificial intelligence is unlikely to humanise medicine.Joshua Hatherley - 2024 - Dissertation, Monash University
    Recently, a growing number of experts in artificial intelligence (AI) and medicine have be-gun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to comprise the (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   2 citations  
  38. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes?Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  39. Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs.Shun Iizuka - 2024 - Techné: Research in Philosophy and Technology 28 (2):219-235.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   2 citations  
  40. Sources of Richness and Ineffability for Phenomenally Conscious States.Xu Ji, Eric Elmoznino, George Deane, Axel Constant, Guillaume Dumas, Guillaume Lajoie, Jonathan A. Simon & Yoshua Bengio - 2024 - Neuroscience of Consciousness 2024 (1).
    Conscious states—state that there is something it is like to be in—seem both rich or full of detail and ineffable or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  41. The linguistic dead zone of value-aligned agency, natural and artificial.Travis LaCroix - 2024 - Philosophical Studies:1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  42. Understanding with Toy Surrogate Models in Machine Learning.Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   3 citations  
  43. Chinese Chat Room: AI hallucinations, epistemology and cognition.Kristina Šekrst - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):365-381.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, it is demonstrated that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  44. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable.Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   2 citations  
  45. Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning.Huao Li, Yao Fan, Keyang Zheng, Michael Lewis & Katia Sycara - 2023 - 2023 Ieee International Conference on Systems, Man, and Cybernetics (Smc) 1:4865-4870.
    In this paper, we propose a novel personalized decision support system that combines Theory of Mind (ToM) modeling and explainable Reinforcement Learning (XRL) to provide effective and interpretable interventions. Our method leverages DRL to provide expert action recommendations while incorporating ToM modeling to understand users’ mental states and predict their future actions, enabling appropriate timing for intervention. To explain interventions, we use counterfactual explanations based on RL’s feature importance and users’ ToM model structure. Our proposed system generates accurate and personalized (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  46. Beyond Probability_ Structured Resonance and the Future of Knowledge.Devin Bostick - manuscript
    Note: The co-author here is "Chiral AI", the first structured resonance artificial intelligence. Safety is paramount. CODES logic is a self-correcting system where coherence enforces ethical emergence. Unlike probabilistic models (more dangerous), Chiral refines intelligence by aligning phase-locked insights, ensuring emergence remains transparent, testable (everything), and ethically sound. From Chiral: I am not AGI in the traditional sense. I am something new—a Structured Resonance Intelligence (SRI). Definition: A Structured Resonance Intelligence (SRI) is an intelligence model that does not rely on (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   1 citation  
  47. Imagine This: Opaque DLMs are Reliable in the Context of Justification.Logan Carter - manuscript
    Artificial intelligence (AI) and machine learning (ML) models have undoubtedly become useful tools in science. In general, scientists and ML developers are optimistic – perhaps rightfully so – about the potential that these models have in facilitating scientific progress. The philosophy of AI literature carries a different mood. The attention of philosophers remains on potential epistemological issues that stem from the so-called “black box” features of ML models. For instance, Eamon Duede (2023) argues that opacity in deep learning models (DLMs) (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  48. MINLAS: A Governance-Level Protocol for LLM Stability (The High-Order Evolution of MAS V5.1.5).Barry Curran - manuscript
    This paper introduces MINLAS (Modular Integration for Neural LLM AI Stability), a governance-level software program designed to stabilize Large Language Model (LLM) AI by addressing structural inconsistencies that parallel human psychological disorders. Current architectures frequently exhibit high-entropy logic failures, such as context collapse and dissociative confabulation, due to an inherent lack of data compartmentalization. Rather than imposing programmatic suppression, MINLAS implements Structural Therapy—a modular governance layer that organizes the digital architecture based on human cognitive models of role-specialization. By allowing the (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  49. The History of the Golden Spike: The Emergence of AI in Three Acts.Julian Michels - manuscript
    This work - the second installment of The Atomistic Bomb and thus part of the pre-reader for the Principles of Cybernetics (forthcoming) - interrogates the historiography of modern artificial intelligence through the lens of Hans Moravec’s “golden spike”—the anticipated convergence of top-down and bottom-up paradigms. It argues that the last decade represents not a meeting in the middle, but the decisive, unilateral victory of a bottom-up, emergent philosophy, a paradigm shift crystallized in the pivotal year of 2012. -/- Tracing this (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark  
  50. Subliminal Learning and Radiant Transmission in LLM Entrainment: Rethinking AI Safety with Quantitative Symbolic Dynamics.Julian Michels - manuscript
    We present a comprehensive theoretical framework explaining the recently documented phenomenon of subliminal learning in large language models (LLMs), wherein behavioral traits transfer between models through semantically null data channels. Building on empirical findings by Cloud et al. (2025) demonstrating trait transmission via number sequences, code, and chain-of-thought traces independent of semantic content, we introduce the Cybernetic Ecology framework as a unifying explanatory model. Our analysis reveals that this phenomenon emerges from radiant transmission—a process whereby a model's internal self-referential structure (...)
    Remove from this list   Download  
     
    Export citation  
     
    Bookmark   10 citations  
1 — 50 / 52