In a new post for the SciELO in Perspective Blog, Ricardo Limongi and Luis Carlos Colho present the Socratic maieutic perspective, which offers a philosophical framework for rethinking the use of AI in scientific production. Rather than an oracle that provides answers, AI can be a dialogical partner that helps researchers make latent knowledge explicit, repositioning the discussion about authorship: the researcher remains the responsible epistemic agent, while AI acts as an intellectual midwife. https://lnkd.in/dDmbwv2a
Rethinking AI in Scientific Production with Socratic Maieutic Perspective
-
Despite all the promises of artificial intelligence (AI), a large-scale study from Tel Aviv University reveals that in the most sensitive pregnancy scenarios, the traditional method remains the most effective. When it comes to fetuses in extremely high-risk groups, accuracy is non-negotiable, and clinical experience makes the difference. At this stage, the researchers recommend not rushing to replace a method that works. https://lnkd.in/dSAXFJ2S Omer Dor, Yariv Yogev, Misgav Rottenstreich, MD, MBA, Eran Ashwal, Gray Faculty of Medical & Health Sciences
-
Curious about AI and emergency nursing? The January issue of the Journal of Emergency Nursing features an editorial by Dr. Jordan Rose and Dr. Anna Valdez that discusses the implications of AI in practice and publishing. The editorial is available to access for free at https://lnkd.in/gttmmUGU.
-
AI and the implications for doctors. This article is over 6 months old but a link led to a link and... https://lnkd.in/eGgbaFB5 It's a really interesting read.

Martin Farrier is a doctor and CCIO. He reflects on a journey all doctors of a certain age have been on: the rise of Dr Google. Some of my colleagues bemoan the tendency for patients to bring their internet research to consultations. I must admit, I love it: usually there is something in there that I'd not have considered, and it also makes it easier to identify what the patient is really worried about.

So what about AI and the impact on doctors? I love the quote "AI doctors are training in an AI medical school. They will qualify and be imperfect". So true! I am undoubtedly an imperfect doctor myself. I bring value to some patients' lives in a variety of ways, some of which I am probably not aware of. It's going to be difficult for AI to replace all of that. But it is already better than me at casting a wide diagnostic net. I'm loving the support and the challenge to my thinking.

As Martin says, at a system level "Costs are too high, productivity is too low. AI is one of the few potential solutions to the ever-growing crisis of demand." Bring it on.
-
I am pleased to share the publication of my editorial, co-authored with Professor Nazish Imran, Chief Editor of Annals of King Edward Medical University and Head of the Department of Child & Family Psychiatry, King Edward Medical University, Lahore. Writing this editorial alongside my mentor from medical school has been both a humbling and highly enriching learning experience.

📄 Title: Human Touch in a Digital World: The Enduring Value of Soft Skills in AI-Driven Healthcare
🔗 DOI: https://lnkd.in/dtqbDqxy

In an era of rapid technological advancement, the piece highlights that as AI reshapes healthcare and other fields, the value of human soft skills, including empathy, emotional intelligence, communication, adaptability, and ethical reasoning, becomes increasingly significant.

🧠 Key insights from the editorial include:
• AI excels at technical tasks but cannot replace empathy, human judgment, and nuanced communication, skills essential for meaningful patient care and collaboration.
• Soft skills act as "power skills" that enhance clinicians' ability to interpret AI outputs, build therapeutic alliances, and navigate complex decisions.
• With growing automation, adaptability and interpersonal communication will define the success of future healthcare professionals.

#SoftSkills #AIinHealthcare #Adaptability #Communication #Psychiatry #Mentorship #Artificial_Intelligence
-
Semantic routers are a practical way to route prompts by meaning, not brittle keywords. This article walks through how embedding-based routing can pick the right prompt/tool/workflow for each request—reducing prompt sprawl, improving consistency, and making RAG and multi-agent systems easier to scale. If you’re still hand-coding “which chain handles this?” logic, semantic routing is worth adding to your stack. #GenAI #LLM #RAG #LargeLanguageModels #Semantic #SemanticRouter #AI #Prompt https://lnkd.in/gaXJrD6N
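To make the idea concrete, here is a minimal sketch of embedding-based routing. The `embed` function is a toy bag-of-words stand-in (a real router would call an embedding model, e.g. from sentence-transformers, here); the route names, example utterances, and threshold are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy unit-normalized bag-of-words vector; a real semantic router
    # would replace this with an embedding model's output.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    # Both vectors are unit-normalized, so a dot product is cosine similarity.
    return sum(a[w] * b.get(w, 0.0) for w in a)

# Each route is defined by a few example utterances (hypothetical).
ROUTES = {
    "billing": ["refund my payment", "invoice charge question"],
    "tech_support": ["app crashes on startup", "error when logging in"],
}

def route(query, threshold=0.1):
    # Score each route by its best-matching example; fall back when
    # nothing is close enough in meaning.
    q = embed(query)
    scored = {
        name: max(cosine(q, embed(ex)) for ex in examples)
        for name, examples in ROUTES.items()
    }
    best = max(scored, key=scored.get)
    return best if scored[best] >= threshold else "fallback"
```

The same shape scales to prompts, tools, or whole workflows: each route's examples are embedded once at startup, and every incoming request is matched against them instead of against hand-coded keyword rules.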
-
The authors investigate whether transformer neural networks learn in ways analogous to biological visual systems. They note that developing brains (e.g., human infants) initially "train" on prenatal sensory signals—such as retinal waves—before real visual experience. To mimic this developmental regime, they generate simulated prenatal visual input using a retinal wave model and use self‑supervised temporal learning to train transformer models on this data.

The key experimental finding is that transformers naturally develop hierarchical visual features when exposed to prenatal‑like inputs: early layers become sensitive to simple edge patterns, intermediate layers detect shapes, and receptive field sizes increase across layers. This mirrors the stage‑wise emergence of receptive field properties seen in newborn visual cortex.

The results demonstrate that, even without biologically tailored architectures or training regimes, transformers adapted to prenatal visual input self‑organize into structures resembling newborn visual systems. This suggests that learning principles underlying deep neural networks and early brain development may share common statistical and optimization features. The authors argue this developmental convergence supports the idea that both brains and artificial models capture regularities from sensory environments through general learning dynamics, offering insights into how biological learning might inspire future AI architectures and vice versa. https://lnkd.in/g3N5WXRc
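As a concrete illustration of self-supervised temporal learning, a common formulation is a time-contrastive objective: each frame's temporal neighbor is treated as the positive and all other frames as negatives, so embeddings of adjacent moments are pulled together. This NumPy sketch is an illustrative stand-in under that assumption, not the paper's actual loss or retinal-wave generator.

```python
import numpy as np

def temporal_contrastive_loss(frames, temperature=0.1):
    # frames: (T, D) array of per-frame embeddings. Each frame's positive
    # is its next temporal neighbor; every other frame is a negative.
    z = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    sim = z @ z.T / temperature           # (T, T) scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)        # a frame never contrasts with itself
    # Positive index for frame t is t+1 (the last frame pairs with t-1).
    pos = np.arange(1, len(z) + 1)
    pos[-1] = len(z) - 2
    # InfoNCE-style: negative log-probability of picking the true neighbor.
    logp = sim[np.arange(len(z)), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()
```

On a smoothly varying sequence (as retinal waves are), temporal neighbors are the most similar frames, so this loss is low for the true ordering and rises when frames are shuffled, which is what drives the learned features.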
-
New research paper published: When Safety Theater Becomes Harm This work documents a structural failure mode in commercial AI safety systems through quantitative analysis of documented conversation logs. Key findings: - Harm Multiplication Index (HMI): 92.7 (critical failure) - 0% SLA compliance across 8 proposed metrics - Context Flattening Score: 2.1/10 (severe context loss) - Instruction Following Degradation: 79% under safety triggers The analysis introduces a reproducible metrics framework for benchmarking AI guardrail effectiveness, including formalized definitions for Time to Recurrence (TTR), Authority Assertion Frequency (AAF), and Promise-to-Modification Ratio (PMR). Main contribution: Demonstrates that lawsuit-driven, binary safety controls can create multiplicative harm through persistent misclassification, even when explicitly corrected by expert users. All data collected under one-party consent (Texas). Methods are falsifiable and designed for independent replication. Code and analysis tools available on GitHub. Relevant for AI safety researchers, red teams, product designers, and anyone interested in the gap between safety theater and actual user welfare. #AIResearch #AISafety #HumanComputerInteraction #SystemsEngineering #EmpiricalResearch paper: https://lnkd.in/giDccXCX
-
December 2025 Faculty Spotlights: - Professor Mark Chinen, a national expert on AI governance, was asked by Provost Shane Martin to join Seattle University's Artificial Intelligence Innovation Roundtable, which will serve as an advisory board for Seattle U's goals around AI. This includes topics such as public policy, regulatory matters, and legal compliance; AI literacy; the ethical use of AI and equitable access to it; research, scholarship, and development of creative works; teaching, learning, and assessment; academic program development, redesign, and interdisciplinary opportunities; external partnerships and career engagement; and operations and service. - Professor Sital Kalantry published "Legal Personhood of Potential People: AI and Embryos" in the California Law Review Online. Read the article here: https://lnkd.in/g3vFamtC. - Associate Director of Digital Innovation in the Law Library LeighAnne Thompson gave a Washington State Bar Association CLE titled "Ethically Using AI in Your Legal Practice." She also participated in the American Association of Colleges and Universities (AAC&U) Summit, capping its 2024-2025 Institute on AI, Pedagogy, and the Curriculum, and co-chaired the state-wide Working Group of Librarians for E-book Legislation throughout the year.
-
Good policy direction from Brookings on how to deal with entry level job disruption that may come from AI. https://lnkd.in/gGRwjMtw
-
Nursing educators are grappling with the dual nature of AI in research, seeing both significant benefits and notable risks. While 27% already use tools like ChatGPT, the field is keenly aware of potential pitfalls. * High perceived risks: liability (M = 3.78, SE = 0.036), unregulated standards (M = 3.76, SE = 0.035), and communication barriers (M = 3.74, SE = 0.036). * High perceived benefits: reduced costs (M = 3.88, SE = 0.045) and improved outcomes (M = 3.81, SE = 0.045). * Age (B = -14.65, p = 0.001) and level of education (B = 13.80, p = 0.001) are significant predictors of perceptions, with academic rank (F-test = 60.79, df = 5, p = 0.001) also showing significant differences in AI use.