13 publications found
  1. Against the Uncritical Adoption of 'AI' Technologies in Academia. Olivia Guest, Marcela Suarez, Barbara Müller, Edwin van Meerkerk, Arnoud Oude Groote Beverborg, Ronald de Haan, Andrea Reyes Elizondo, Mark Blokpoel, Natalia Scharfenberg, Annelies Kleinherenbrink, Ileana Camerino, Marieke Woensdregt, Dagmar Monett, Jed Brown, Lucy Avraamidou, Juliette Alenda-Demoutiez, Felienne Hermans & Iris van Rooij - manuscript
    Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, (...)
  2. Pygmalion Displacement: When Humanising AI Dehumanises Women. Lelia Erscoi, Annelies Kleinherenbrink & Olivia Guest - manuscript
    We use the myth of Pygmalion as a lens to investigate and frame the relationship between women and artificial intelligence (AI). Pygmalion was a legendary ancient king of Cyprus and sculptor. Having been repulsed by women, he used his skills to create a statue, which was imbued with life by the goddess Aphrodite. This can be seen as one of the primordial AI-like myths, wherein humanity creates intelligent life-like self-images to reproduce or replace ourselves. In addition, the myth prefigures historical (...)
  3. Modern Alchemy: Neurocognitive Reverse Engineering. Olivia Guest, Natalia Scharfenberg & Iris van Rooij - manuscript
    The cognitive sciences, especially at the intersections with computer science, artificial intelligence, and neuroscience, propose 'reverse engineering' the mind or brain as a viable methodology. We show three important issues with this stance: 1) Reverse engineering proper is not a single method and follows a different path when uncovering an engineered substance versus a computer. 2) These two forms of reverse engineering are incompatible. We cannot safely reason from attempts to reverse engineer a substance to attempts to reverse engineer a (...)
  4. Reclaiming AI as a Theoretical Tool for Cognitive Science.Iris van Rooij, Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova & Patricia Rich - 2024 - Computational Brain and Behavior 7:616–636.
    The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, (...)
  5. Are Neurocognitive Representations 'Small Cakes'? Olivia Guest & Andrea E. Martin - manuscript
    In order to understand cognition, we often recruit analogies as building blocks of theories to aid us in this quest. One such attempt, originating in folklore and alchemy, is the homunculus: a miniature human who resides in the skull and performs cognition. Perhaps surprisingly, this appears indistinguishable from the implicit proposal of many neurocognitive theories, including that of the 'cognitive map,' which proposes a representational substrate for episodic memories and navigational capacities. In such 'small cakes' cases, neurocognitive representations are assumed (...)
  6. To Improve Literacy, Improve Equality in Education, Not Large Language Models. Samuel H. Forbes & Olivia Guest - 2025 - Cognitive Science 49 (4):e70058.
    Huettig and Christiansen in an earlier issue argue that large language models (LLMs) are beneficial to address declining cognitive skills, such as literacy, through combating imbalances in educational equity. However, we warn that this technosolutionism may be the wrong frame. LLMs are labor intensive, are economically infeasible, and pollute the environment, and these properties may outweigh any proposed benefits. For example, poor quality air directly harms human cognition, and thus has compounding effects on educators' and pupils' ability to teach and (...)
  7. What Makes a Good Theory, and How Do We Make a Theory Good? Olivia Guest - 2024 - Computational Brain and Behavior 6:508–522.
    I present an ontology of criteria for evaluating theory to answer the titular question from the perspective of a scientist practitioner. Set inside a formal account of our adjudication over theories, a metatheoretical calculus, this ontology comprises the following: (a) metaphysical commitment, the need to highlight what parts of theory are not under investigation, but are assumed, asserted, or essential; (b) discursive survival, the ability to be understood by interested non-bad actors, to withstand scrutiny within the intended (sub)field(s), and to (...)
  8. On Logical Inference over Brains, Behaviour, and Artificial Neural Networks.Olivia Guest & Andrea E. Martin - 2023 - Computational Brain and Behavior 6:213–227.
    In the cognitive, computational, and neuro-sciences, practitioners often reason about what computational models represent or learn, as well as what algorithm is instantiated. The putative goal of such reasoning is to generalize claims about the model in question, to claims about the mind and brain, and the neurocognitive capacities of those systems. Such inference is often based on a model’s performance on a task, and whether that performance approximates human behavior or brain activity. Here we demonstrate how such argumentation problematizes (...)
  9. How Computational Modeling Can Force Theory Building in Psychological Science.Olivia Guest & Andrea E. Martin - 2021 - Perspectives on Psychological Science 16 (4):789-802.
    Psychology endeavors to develop theories of human capacities and behaviors on the basis of a variety of methodologies and dependent measures. We argue that one of the most divisive factors in psychological science is whether researchers choose to use computational modeling of theories (over and above data) during the scientific-inference process. Modeling is undervalued yet holds promise for advancing psychological science. The inherent demands of computational modeling guide us toward better science by forcing us to conceptually analyze, specify, and formalize (...)
  10. Combining Psychology with Artificial Intelligence: What could possibly go wrong? Iris van Rooij & Olivia Guest - manuscript
    The current AI hype cycle combined with Psychology's various crises make for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect of computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of conflating artifacts with theories of cognition, or even minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now (...)
  11. On Simulating Neural Damage in Connectionist Networks.Olivia Guest, Andrea Caso & Richard P. Cooper - 2020 - Computational Brain and Behavior 3:289-321.
    A key strength of connectionist modelling is its ability to simulate both intact cognition and the behavioural effects of neural damage. We survey the literature, showing that models have been damaged in a variety of ways, e.g. by removing connections, by adding noise to connection weights, by scaling weights, by removing units and by adding noise to unit activations. While these different implementations of damage have often been assumed to be behaviourally equivalent, some theorists have made aetiological claims that rest (...)
  12. What Does 'Human-Centred AI' Mean? Olivia Guest - manuscript
    While it seems sensible that human-centred artificial intelligence (AI) means centring "human behaviour and experience," it cannot be any other way. AI, I argue, is usefully seen as a relationship between technology and humans where it appears that artifacts can perform, to a greater or lesser extent, human cognitive labour. This is evinced using examples that juxtapose technology with cognition, inter alia: abacus versus mental arithmetic; alarm clock versus knocker-upper; camera versus vision; and sweatshop versus tailor. Using novel definitions and (...)
  13. Implementations are not specifications: specification, replication and experimentation in computational cognitive modeling.Richard P. Cooper & Olivia Guest - 2014 - Cognitive Systems Research 27:42-49.
    Contemporary methods of computational cognitive modeling have recently been criticized by Addyman and French (2012) on the grounds that they have not kept up with developments in computer technology and human–computer interaction. They present a manifesto for change according to which, it is argued, modelers should devote more effort to making their models accessible, both to non-modelers (with an appropriate easy-to-use user interface) and modelers alike. We agree that models, like data, should be freely available according to the normal standards (...)