Results for 'RAG'

10 results found
  1. Barah Maha - The Changing Phases of Nature. Devinder Pal Singh - 2020 - The Sikh Review 68 (10):9-15.
    Barah Maha (Twelve Months) is a form of folk poetry that describes the emotions and yearnings of the human heart, expressed in terms of the changing moods of nature over the twelve months of a year. In this form of poetry, the mood of nature in each month of the Indian calendar depicts the inner agony of the human heart, which in most cases is that of a lovelorn young woman separated from her spouse or beloved. In other words, (...)
    1 citation
  2. The Gift of Language: Large Language Models and the Extended Mind. Paul R. Smart & Robert William Clowes - 2025 - In Vitor Santos & Paulo Castro, Advances in Philosophy of Artificial Intelligence. Bradford, UK: Ethics Press.
    Proponents of the extended mind insist that human states and cognitive processes can, at times, include non-biological resources that lie external to the bodily boundaries. In the present chapter, we apply this idea to large language models (LLMs), suggesting that some LLMs exist as extended cognitive (or computational) systems. We focus in particular on LLMs that exploit retrieval-augmented generation (RAG) techniques and online computational tools, proposing that these systems constitute extended architectures whose capabilities are realized, in part, by external structures. (...)
    1 citation
  3. Generative Artificial Intelligence Empowering Cross-Cultural Communication: Potential, Challenges, and Ethical Considerations (《生成式人工智能赋能跨文化沟通:潜力、挑战及伦理审思》), in the journal Cross-Cultural Communication (《跨文化传播研究》). D. E. Weissglass & Xie Tian - forthcoming - Култура.
    As globalization and artificial intelligence technologies become increasingly intertwined, the use of generative AI (GenAI)—especially large language models (LLMs)—has made it possible to construct new forms of intercultural communication tools, but it has also raised significant ethical concerns. This paper aims to systematically explore the potential, challenges, and ethical framework surrounding “Generative Intercultural Communication Assistants” (GICAs). It begins by arguing for the technical feasibility and practical promise of GICAs, while also highlighting their potential risks. It then analyzes how LLMs represent (...)
  4. RAGGAE for HERBS: Testing the Explanatory Performance of Ontology-Powered LLMs for Human Explanation of Robotic Behaviors. Agnese Augello, Edoardo Datteri, Antonio Lieto, Maria Rausa & Nicola Zagni - 2025 - Proceedings of the 17th International Conference on Social Robotics (ICSR 2025), Springer 1 (1):12.
    In this work we present RAGGAE (RAG for the General Analysis of Explanans), a RAG-based model tested in the context of Human Explanation of Robotic BehaviorS (HERBS). The RAGGAE model makes use of an ontology of explanations, enriching the knowledge of state-of-the-art general-purpose Large Language Models such as Google Gemini 2.0 Flash, DeepSeek R1, and GPT-4o. The results show that the combination of a general LLM with a symbolic, philosophically grounded ontology can (...)
  5. Comprehensive Review of AI Hallucinations: Impacts and Mitigation Strategies for Financial and Business Applications.Satyadhar Joshi - 2025 - International Journal of Computer Applications Technology and Research 14 (6):38-50.
    This paper investigates the causes, implications, and mitigation strategies of AI hallucinations in large language models, with a focus on generative AI systems. We synthesize insights from recent academic research and industry findings to explain how hallucinations often arise from problems in the training data, limitations in model architecture, and the way large language models (LLMs) generate text. Through a systematic (...)
  6. Structural Bullshit and the Duty to Doubt: A Theory of Epistemic Responsibility in Dealing with Generative AI.Gina Bronner-Martin - manuscript
    In the current debate on generative AI, the tendency of language models toward false statements is frequently anthropomorphically labeled as "lying." This paper argues that this terminology is not only ontologically incorrect but normatively dangerous, as it diffuses responsibility. While Hicks et al. (2024) correctly provide the diagnosis of "bullshit," this paper delivers the necessary operationalizable theory of responsibility. Starting from an ontological analysis (Bronner-Martin 2025), it is shown that AI errors should be understood as "Structural Bullshit" and "Confabulation." (...)
  7. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that, as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  8. How AI Can Implement the Universal Formula in Education and Leadership Training.Angelito Malicse - manuscript
    If AI is programmed based on your universal formula, it can serve as a powerful tool for optimizing human intelligence, education, and leadership decision-making. Here's how AI can be integrated into your vision: 1. AI-Powered Personalized Education. Since intelligence follows natural laws, AI can analyze individual learning patterns and customize education for optimal brain development. Adaptive Learning Systems: AI can adjust lessons in real (...)
  9. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek, Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac. Brill.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called an AI hallucination, a term that draws inspiration from human psychology, with the key difference being that AI hallucinations are connected to unjustified beliefs (that is, AI "beliefs") rather than perceptual failures. AI hallucinations may have their source in the data itself, that is, the (...)
    1 citation
  10. The Rise of Generative AI: Evaluating Large Language Models for Code and Content Generation. Mittal Mohit - 2023 - International Journal of Advanced Research in Science, Engineering and Technology 10 (4):20643-20649.
    Large language models (LLMs) are at the forefront of a new era of computational innovation brought forth by generative artificial intelligence (AI). Built on transformer architectures and trained on large-scale data, these models excel at producing both creative content and functional code. This work examines the emergence of LLMs, with an emphasis on their dual uses in content generation and software development. Key results show strong performance on everyday tasks, balanced by limitations in logic, security, and originality. We forecast future developments and conclude with ramifications (...)
    3 citations
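Several of the results above turn on retrieval-augmented generation (RAG), in which a retriever selects relevant passages from an external store and prepends them to the model's prompt before generation. A minimal sketch of that pipeline, assuming a toy keyword-overlap retriever (the corpus, scoring function, and `build_prompt` helper below are illustrative and not drawn from any of the papers listed; production systems use dense vector embeddings and a real LLM for the generation step):

```python
# Toy RAG sketch: keyword-overlap retrieval + prompt assembly.
# All names and documents here are illustrative only.

CORPUS = {
    "extended_mind": "The extended mind thesis holds that cognition can include external resources.",
    "rag": "Retrieval-augmented generation grounds LLM output in retrieved documents.",
    "hallucination": "AI hallucinations are unfactual statements presented as fact.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend the retrieved context to the user query before generation."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does retrieval-augmented generation ground LLM output?")
```

The point of the sketch is only the shape of the architecture: generation is conditioned on externally stored text, which is why several entries above treat RAG systems as candidates for extended cognitive architectures and as a hallucination-mitigation strategy.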