A weekend dive into AI trends, ethics, and real-world experience

Sometimes it’s good to step out of daily architecture work and see what’s going on in the wider AI world. That’s exactly what I did last weekend.

Data Sanity Talks, Belgrade — October 4
https://lnkd.in/g2vQumQn

Went to the AI conference this Saturday — Data Sanity Talks. Can’t say I’m some kind of hardcore AI expert, but the conference was really interesting, and most of the talks were quite understandable even without a deep ML background.

A bit of context — a few years ago I was leading a predictive analytics team at a bank, working on quite complex projects. My main focus was on architecture efficiency, team organization, and process automation rather than tuning models myself — but it was still very much a hands-on experience with real ML systems in production. So while I wouldn’t call myself a deep AI expert, I’ve seen how these things actually work in practice — with all their challenges and trade-offs.

For an IT architect, it’s essential to stay up to date with almost every direction in tech — and AI is definitely among the top ones.

The event brought together not only Serbian experts but also speakers from Montenegro, Latvia, Azerbaijan, France, and the UK — a truly international setup. And I must say — the organization was excellent: everything went smoothly and thoughtfully.

What impressed me the most wasn’t the technical depth (though there was plenty of it), but the humanistic and ethical questions around how we apply AI. It was also valuable to hear about organizational challenges in bringing AI into real companies — and to learn from others’ experiences. Healthcare is a domain I’m not deeply involved in as an IT professional, but as a regular person it was fascinating to hear about the complexities and approaches used there.

Here’s a list of the talks that I found the most interesting — along with links to the speakers’ profiles, in case you’d like to follow their posts:

- “Business in the Age of AI: Trends, Management, and Safety” — Ana Stojkovic-Knezevic https://lnkd.in/ge-Q-rd7
- “Building a Data Platform as a Single Source of Truth” — Dusan Dzamurovic https://lnkd.in/gk7Yj6EP
- “Mesa Optimisation in Large Models and AI Safety” — Elena Ericheva https://lnkd.in/gisn2d65
- “When Updates Feel Like Breakups: Designing Safe and Responsible AI Companions” — Olga Titova https://lnkd.in/gJQG-2kF
- “AI Meets Healthcare: The Good, the Bad, and the Ugly” — Ivan Drokin https://lnkd.in/gek4fKVd
- “Are Multimodal Transformers Disruptive Technology in Radiology AI?” — Yaroslav Nikulin https://lnkd.in/ggeQw7Cc

#AI #Architecture #AIEthics #DataSanity #BelgradeTech #ArtificialIntelligence #Conference
More Relevant Posts
If an algorithm makes the right decision but can’t explain why, should we still trust it?

In machine learning, we often celebrate performance above all else. But here’s a question that keeps coming back to me – does high accuracy automatically mean high trust? Or does a decision still feel incomplete if we can’t understand why it was made?

𝗔 𝗾𝘂𝗶𝗰𝗸 𝗿𝗲𝗺𝗶𝗻𝗱𝗲𝗿: 𝘄𝗲 𝗮𝗹𝗿𝗲𝗮𝗱𝘆 𝗲𝘅𝗽𝗹𝗮𝗶𝗻 𝗺𝗼𝗱𝗲𝗹𝘀 (𝘁𝗼 𝗮 𝗹𝗶𝗺𝗶𝘁)
I think almost everyone who works with ML has used feature importance at some point (the basic technique that tells us which features influence a model the most overall). It’s useful and it builds trust. But it only explains the model in general, not a single decision. And sometimes, that difference matters more than we think.

𝗪𝗵𝘆 𝗲𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝘀𝘁𝗶𝗹𝗹 𝗺𝗮𝘁𝘁𝗲𝗿𝘀
In low-risk systems (like movie recommendations), using a black-box model may be acceptable. But when algorithms are used to decide who gets a loan, who receives medical attention, or who is flagged as “high risk” by the justice system, the stakes are very different. That’s why researchers developed tools like:
• 𝙇𝙄𝙈𝙀 (explains single predictions locally) – it answers: “Why this decision?” [1]
• 𝙎𝙃𝘼𝙋 (shows how each feature influenced the outcome) – it answers: “What mattered most?” [2]
These tools weren’t designed to boost performance but to give humans a window into the model’s reasoning. Because people don’t just want correct answers, they want answers they can question, verify, and challenge.

𝗧𝗵𝗲 𝗱𝗶𝗹𝗲𝗺𝗺𝗮 𝘄𝗲 𝗳𝗮𝗰𝗲
Some experts (like Cynthia Rudin [3]) are very clear: if a decision affects humans at a high-stakes level, we should not use black-box models when interpretable ones exist. However, as Zachary Lipton points out in his work on interpretability [4], transparent models like linear or rule-based systems may still oversimplify complex real-world patterns, especially compared to deep learning models optimized purely for performance. So we’re stuck with a real question: is it ethical to prefer a more accurate model that no one understands? Or should explainability sometimes outweigh accuracy?

𝗦𝗼, 𝘄𝗵𝗮𝘁 𝗱𝗼 𝘄𝗲 𝗮𝗰𝗰𝗲𝗽𝘁 𝗳𝗿𝗼𝗺 𝗔𝗜?
• Accuracy alone is not enough (at least not when real people are affected)
• Explainability is not just for compliance; it’s for human trust
• The future of AI ethics is about balancing intelligence with clarity, not choosing one over the other

Would you trust a model that makes great decisions even if it can’t explain a single one of them?

#ExplainableAI #AIEthics #AlgorithmicAccountability #TransparencyInAI #DataScience #AI #RoadmapInAction
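To make the global-versus-local distinction above concrete, here is a minimal sketch. It assumes a scikit-learn-style model and an illustrative public dataset, neither of which comes from the post: global feature importance averaged over the whole dataset, next to a LIME-style local surrogate fitted around one single prediction. It illustrates the idea only; it is not how the LIME or SHAP libraries are implemented.

```python
# Minimal sketch of "global importance" vs "local explanation".
# Dataset, model, and perturbation scheme are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global view: which features matter on average across the whole dataset.
global_rank = sorted(zip(names, model.feature_importances_),
                     key=lambda t: -t[1])[:5]
print("Global importance (top 5):", global_rank)

# Local view (LIME-style surrogate): perturb one instance, query the
# black box, fit a simple linear model weighted by proximity, and read
# its coefficients as the explanation for this single decision.
x0 = X[0]
rng = np.random.default_rng(0)
perturbed = x0 + rng.normal(scale=X.std(axis=0) * 0.3, size=(500, X.shape[1]))
preds = model.predict_proba(perturbed)[:, 1]           # black-box outputs
dist = np.linalg.norm((perturbed - x0) / X.std(axis=0), axis=1)
weights = np.exp(-dist ** 2)                           # nearby samples count more

surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
local_rank = sorted(zip(names, surrogate.coef_), key=lambda t: -abs(t[1]))[:5]
print("Local explanation for this one prediction:", local_rank)
```

The contrast is the whole point: the first print answers “what matters on average”, the second answers “what drove this particular decision”, and the two rankings often disagree.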
🚀 New Blog Alert from 9techh! We have just published our latest article “Building Trust in AI: The 2025 Roadmap for Ethical & Responsible Artificial Intelligence.”

In this blog, we explore how IT leaders and organizations can prioritize transparency, fairness, and accountability in an AI-driven world. From responsible data handling to building trustworthy AI systems, it is a deep dive into what truly matters for the future of technology.

Read the full post on our website and discover how ethical innovation can shape smarter, safer AI solutions for 2025 and beyond. 🌐
👉 Visit our blog at: https://lnkd.in/eBSdpwtn

#9techh #ArtificialIntelligence #EthicalAI #ResponsibleTech #FutureOfAI #Innovation #TechLeadership #DigitalTransformation #AITools #MachineLearning #AI2025
Good structured approaches for organizations to implement AI responsibly and effectively. I thought I’d share them, along with the specific links, for everyone interested.

1. Government & Policy-Oriented AI Frameworks
· #OECD AI Principles (OECD): https://lnkd.in/dma_7hVG
· EU #Ethics Guidelines for #Trustworthy AI (European Commission): Based on 7 key requirements for ethical AI. https://lnkd.in/dTnGUVyK
· US Executive Order on AI (EO 14110) (White House): Focuses on AI safety, security, and innovation leadership. https://lnkd.in/dSBZYsJg
· UK AI #Regulation Framework (UK Government): A principles-based approach to AI governance. https://lnkd.in/daVE24Ha

2. Industry & Enterprise AI Adoption Frameworks
· Microsoft’s Responsible AI Standard: Covers fairness, reliability, privacy, and inclusiveness. https://lnkd.in/dMzBdx23
· Google’s AI Principles: Focuses on socially beneficial, unbiased, and accountable AI. https://lnkd.in/dQtfzHAH
· IBM’s AI Ethics Framework: Emphasizes fairness, explainability, and robustness. https://lnkd.in/dJWQ4y58

3. Technical & #Risk Management AI Frameworks
· NIST AI Risk Management Framework (RMF) (US): A structured approach to managing AI risks. https://lnkd.in/dZVKSpJ2
· ISO/IEC 23053:2022 (AI System Engineering Standard): Provides guidelines for AI system development. https://lnkd.in/dNwZX54E

4. Sector-Specific AI Frameworks
· Health AI: WHO’s Ethics & Governance of AI for Health: Guidelines for AI in healthcare. https://lnkd.in/dhgxsryh
· Financial Services: FINRA’s AI in Financial Services Report: Best practices for AI in finance. https://lnkd.in/db5xnprp
· Defense & Military: NATO’s AI Strategy: Principles for military AI applications. https://lnkd.in/dK_GJ7Qx

5. AI Maturity & Readiness Frameworks
· Gartner’s AI Maturity Model: Assesses AI adoption stages.

These frameworks help organizations adopt AI responsibly, addressing ethical, technical, and regulatory challenges.

#Artificialintelligence #Ethics #Adoptionframeworks #Responsible #AIMaturity
The risks of AI “hallucinations” and unvetted models are not abstract. Two recent cases that made the news highlight a systemic danger:

1. Biosecurity zero-day in AI protein folding: Protein folding is one of my favourite AI application areas and has huge benefits for medicine. However, AI tools have been used to generate protein variants that evade detection by standard screening software. That exposes a novel vulnerability, one where an AI “bug” becomes a biothreat. The failed safeguard went unnoticed for a long time.
2. Deloitte’s report with fake references: A publicly funded report commissioned by the Australian government included fabricated citations and misquoted court rulings. Embarrassing. Deloitte later revised the report and offered a partial refund...

---

Why this matters in social housing

Social housing providers increasingly rely on AI/LLMs for drafting tenant communications, policy briefings, predictive maintenance, and compliance reports. If AI “makes up” references, misstates legal or regulatory obligations, or gets data summaries wrong, the reputational, legal, and financial fallout is significant. Vulnerabilities like the protein zero-day case remind us that AI risks sometimes come from unexpected domains, yet their downstream effects can hit even non-technical sectors.

What social housing leaders must do (initial list generated by GPT-5, but very plausible!):

1. Demand provenance & traceability. Every AI-generated claim, figure, or reference must be anchored to verifiable sources, for example in your own knowledge base (a toy sketch of this idea follows after this post).
2. Embed domain expert review. Never publish or act on AI outputs without scrutiny by suitable experts, whether legal, compliance, repairs or housing experts.
3. Introduce audit & red-team testing. Simulate “what if the AI lies here?” scenarios. Challenge it with adversarial prompts. This is part of the data-driven shift the sector needs to unlock customer value from the data it looks after.
4. Adopt governance & policy frameworks. Define what AI can and cannot be used for (e.g. donor reports vs. tenant-facing legal notices). This shouldn’t only be about compliance, but about ethics. In my view this should involve tenants in co-creation.
5. Educate staff in AI use, including ethics and “hallucination awareness”. Make everyone aware that AI is not infallible and must be questioned, especially in sensitive contexts.

---

AI has enormous upside for social housing: optimising maintenance, freeing staff from rote tasks, predicting risks. But realising that upside requires humility and curiosity. When AI claims facts, we must check. When it cites sources, we must verify. We require more critical thinking than ever. Otherwise we trade efficiency for fallibility, and that’s a bad bargain when people’s homes are involved.

Image by GPT-5.
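As a rough illustration of point 1 above (provenance and traceability), the sketch below flags any AI-cited reference that cannot be matched against an approved knowledge base before a report goes out. The source IDs, data structures, and matching rule are illustrative assumptions only, not a description of any particular tool or of the author’s own process.

```python
# Toy provenance check: every AI-generated citation must resolve to a
# known source ID before the draft can be published. All names and data
# below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Citation:
    claim: str
    source_id: str   # e.g. a document ID in the organisation's knowledge base

# Hypothetical approved knowledge-base entries.
APPROVED_SOURCES = {"housing-act-1985", "repairs-policy-v3", "ombudsman-2023-report"}

def audit_citations(citations: list[Citation]) -> list[Citation]:
    """Return the citations that cannot be traced to an approved source."""
    return [c for c in citations if c.source_id not in APPROVED_SOURCES]

draft = [
    Citation("Landlords must keep installations in repair.", "housing-act-1985"),
    Citation("Tenants waive repair rights after 30 days.", "unknown-llm-reference"),
]

for bad in audit_citations(draft):
    print(f"UNVERIFIED: '{bad.claim}' cites '{bad.source_id}' - needs expert review")
```

The value is not in the code itself but in the gate it represents: nothing AI-generated reaches tenants or regulators until every claim has a traceable source and a human reviewer has signed it off.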
I keep coming back to one question: what separates companies that experiment with AI from those that truly scale it responsibly?

It’s becoming clear that success isn’t just about the quality of the technology — it’s about the strength of the governance behind it. A recent Wharton @ Work piece, The Business Case for Proactive AI Governance, makes this point well — emphasizing that accountability, transparency, and cross-functional alignment aren’t barriers to innovation, but the foundation for it.

Organizations that invest early in clear frameworks, data stewardship, and responsible oversight aren’t just managing risk — they’re building trust, confidence, and long-term value in how they deploy AI.

Curious to hear others’ views — what governance challenges or opportunities are you seeing as your organization explores AI adoption?

#AI #ResponsibleAI #AIGovernance #DigitalGovernance #Privacy
https://lnkd.in/ewYrf5eD
We often talk about responsible AI, but what about the AI we don’t see? “Shadow AI” — the unsanctioned, invisible use of generative tools inside organizations — is quietly reshaping how work happens. It’s born of good intent (efficiency, creativity, problem-solving) but carries deep ethical risks around privacy, bias, accountability, and moral drift. Where do you stand?
🤖 AI is transforming industries — but without guardrails, trust erodes fast. Our latest blog breaks down AI governance — the rules, policies, and oversight that keep AI fair, transparent, and accountable. 💡 Read the full guide 👉 https://lnkd.in/eUtRpu2u
Trust in AI

Trusting Artificial Intelligence (AI) is no longer a futuristic concern; it’s a present-day necessity. Algorithms influence everything from business operations to healthcare, education, and information access. However, trust in AI isn’t automatic; it must be earned through deliberate design, governance, and accountability.

The Foundations of Trust in AI: Reliability, transparency, and ethics are the cornerstones of AI trust. A system can only be trusted if it functions consistently, produces reliable outcomes, and aligns with human values and societal norms. Organisations adopting AI must ensure their models are auditable, explainable, and free from hidden biases. Trustworthiness is built when users can see how decisions are made and understand the reasoning behind them.

AI Governance: AI governance frameworks define how algorithms are built, deployed, and managed. These frameworks set clear policies that regulate everything from data privacy to model retraining processes. Governance ensures accountability and prevents misuse by defining oversight mechanisms and assigning responsibilities to specific roles, such as data officers or ethics boards. A multidisciplinary team, including data scientists, legal experts, ethicists, and business leaders, helps ensure responsible AI development and alignment with organisational values.

Transparency: Transparency means that AI systems’ inner workings aren’t hidden behind opaque algorithms. Organisations must document data sources, training methods, and model decision logic. Explainability goes further, allowing users to understand why systems make decisions, especially in sensitive applications like finance, recruitment, or healthcare. Clear explanations reduce fear and promote informed trust.

Privacy, Security, and Data Integrity: AI systems rely on vast datasets that often contain personal or sensitive information. Ensuring privacy and data security is paramount to maintaining trust. This involves robust data encryption, access controls, and continuous monitoring to detect breaches or misuse. Strong data governance practices protect both the integrity of algorithms and the confidence of stakeholders who rely on them.

Bias Detection and Ethical Vigilance: AI inherits biases present in its training data, which can lead to unfair or discriminatory results. Continuous auditing, diverse datasets, and ethical review boards help mitigate these risks. Ethical vigilance ensures AI is designed with fairness and inclusivity in mind, reducing harm and building long-term credibility (a minimal example of such an audit check is sketched after this post).

Continuous Monitoring and Iterative Improvement: Trust in AI cannot be established once and forgotten. It requires ongoing validation. Continuous monitoring and iterative improvement are necessary to identify inaccuracies, biases, or ethical issues as systems evolve. Feedback loops and version control mechanisms keep the AI aligned with changing contexts and user expectations.

#TrustAI #EthicalAI #AIDATA #AISECURITY
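As one deliberately simplified illustration of the bias-detection point above, the sketch below computes a basic demographic-parity gap across groups in model decisions and routes the result to review when it exceeds a threshold. The column names, data, and threshold are illustrative assumptions only; real audits use richer fairness metrics and real decision logs.

```python
# Toy bias audit: compare approval rates across groups and flag large gaps.
# Data, column names, and the 0.2 threshold are hypothetical examples.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           decision_col: str = "approved",
                           group_col: str = "group") -> float:
    """Difference between the highest and lowest approval rate across groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_gap(decisions)
print(f"Approval-rate gap across groups: {gap:.2f}")
if gap > 0.2:   # illustrative threshold for triggering a human review
    print("Gap exceeds threshold - route to ethics review board")
```

Checks like this only surface a symptom; the governance steps described in the post (diverse data, review boards, ongoing monitoring) are what turn the signal into action.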
AI Governance and the Ethics of Expression: Reflecting on OpenAI’s Latest Policy Shift

OpenAI’s recent decision to relax restrictions on adult-themed creative expression marks one of the most complex and revealing moments in AI governance. At its core, this is not just about content moderation; it’s about where we draw the line between human creativity, moral norms, and machine agency. As generative AI models become storytellers, artists, and companions, the boundaries of what constitutes “acceptable creativity” are being rewritten in real time.

From a CAIO (Chief AI Officer) lens, this shift raises key questions:
Governance: How do we define ethical frameworks for AI creativity that respect cultural diversity while avoiding harm?
Agency: As AI-generated art becomes increasingly personal and emotional, how do we ensure creators, not algorithms, remain in control of the narrative?
Societal Impact: What are the implications for identity, intimacy, and digital well-being in a world where human connection can be simulated by code?

This moment underscores the broader tension every AI leader faces: balancing freedom of expression with social responsibility. As AI systems grow more capable, these are not questions for engineers alone, but for ethicists, policymakers, and cultural leaders shaping the future of digital society.