I spoke on two panels yesterday, offering a privacy practitioner's perspective on AI governance and third-party risk management. 3 takeaways: 1️⃣ Data governance is multidisciplinary. There are lessons to be learned from all walks of life. Our panels wove together stories and takeaways from H. Bryan Cunningham (policy, strategy, podcast on obscure history), Mike Grant (cybersecurity insurance, CPA), Alyssa Coon (legal, privacy operations), Mark Kasperowicz (history, humor, and curiosity), Steve Kerler (pragmatic leadership, change management), and myself (compliance, regulatory/enforcement analysis, privacy operations). Look for the universal threads in your own experience. Chances are, there's a way for them to apply across data governance and data privacy as well. 2️⃣ Create resilience through foundations. Both the panels I participated in came back to core principles. When they're in place, business leaders can make decisions with full awareness of how they fit in the policies. > Know Your Data/AI/Vendors Where it goes, what you're allowed to do with it, and what you're actually doing with it. > North Star Values Decisions should align with company values. Leadership, committees, stakeholders, and operators should all align on what this looks like in practice. This includes risk appetite. > Risk Assessment Review the legislative, regulatory, cybersecurity, and market landscape. Assess against your data, your values, your risk appetite. What changes do you need to make to get yourself aligned? > Iterate. (These panels were sponsored by Privageo, where the Discover-Build-Manage framework maps to these ideas. Align on priorities; Bridge the gap; review and Course-Correct or Carry On.) 3️⃣ AI isn't going anywhere. Bryan Cunningham noted that forbidding staff from using AI tools won't work. Perhaps, he suggests, you can create a sandbox environment for exploration, without risk to data. For our part, Privageo recommends structuring your guidance to employees in three buckets - but the line in the sand between the buckets will vary by organization! > No permission required: Low-risk activities that do not involve trade secrets, company data, personal information, or other risk? Ok. e.g. asking a genAI tool to assist with drafting an email. > Strictly forbidden: High-risk activities where company control and audit trails must be maintained. e.g. anything involving sensitive personal information or company schematics > "Navigating with Care": Where most real-world AI applications reside, the gray area between those clear-cut options. Go back to takeaway 2, get your foundations in place, and bring together stakeholders to assess how your values, data, risk appetite, and business needs interact. It's critical to define your boundaries. --- It was a pleasure to discuss the above with the sharp minds at ELN's Cybersecurity Retreat! Thank you to everyone on the panels for the thought-provoking discussions.
Best Practices for AI Governance and Risk Management
Explore top LinkedIn content from expert professionals.
-
-
#GRC Today I led a session focused on rolling out a new Standard Operating Procedure (SOP) for the use of artificial intelligence tools, including generative AI, within our organization. AI tools offer powerful benefits (faster analysis, automation, improved communication) but without guidance, they can introduce major risks: • Data leakage • IP exposure • Regulatory violations • Inconsistent use across teams That’s why a well-crafted SOP isn’t just nice to have .. it’s a requirement for responsible AI governance. I walked the team through the objective: 1. To outline clear expectations and minimum requirements for engaging with AI tools in a way that protects company data, respects ethical standards, and aligns with core values. We highlighted the dual nature of AI (high value, high risk) and positioned the SOP as a safeguard, not a blocker. 2. Next, I made sure everyone understood who this applied to: • All employees • Contractors • Anyone using or integrating AI into business operations We talked through scenarios like writing reports, drafting code, automating tasks, or summarizing client info using AI. 3. We broke down risk into: • Operational Risk: Using AI tools that aren’t vendor-reviewed • Compliance Risk: Feeding regulated or confidential data into public tools • Reputational Risk: Inaccurate or biased outputs tied to brand use • Legal Risk: Violation of third-party data handling agreements 4. We outlined what “responsible use” looks like: • No uploading of confidential data into public-facing AI tools • Clear tagging of AI-generated content in internal deliverables • Vendor-approved tools only • Security reviews for integrations • Mandatory acknowledgment of the SOP 5. I closed the session with action items: • Review and digitally sign the SOP • Identify all current AI use cases on your team • Flag any tools or workflows that may require deeper evaluation Don’t assume everyone understands the risk just because they use the tools. Frame your SOP rollout as an enablement strategy, not a restriction. Show them how strong governance creates freedom to innovate .. safely. Want a copy of the AI Tool Risk Matrix or the Responsible Use Checklist? Drop a comment below.
-
Decide. Communicate. Act. This is the motto of the United States Marine Corps’ infantry officer course (below is a very green - literally and figuratively - Second Lieutenant Haydock wrapping up his final training exercise at the IOC). Because these simple steps served me well in my previous career, I apply them when building AI security and governance programs for my clients. 1️⃣ DECIDE If you don't make decisions about how you are going to deploy AI, they are going to be made for you. Will you: - Run open-source models in IaaS because of observability? - Use SaaS products due to superior usability? - Push the envelope or be slow-mover? - Ban AI entirely (not recommended)? - Establish a clear risk appetite? 2️⃣ COMMUNICATE Once your organization has decided how it is going to use AI, you'll need to make sure everyone on the same page. This requires: - Developing an AI policy (and/or modifying existing ones). - Training staff on how to follow and implement it. - Explaining your security posture to regulators. - Reassuring customers. - Convincing auditors. 3️⃣ ACT Avoid a "paper tiger" governance program by following up words with clear actions like: - Deploying technical controls to enforce policy compliance. - Creating procedures for daily/weekly/monthly actions. - Penetration testing your and your vendor's software. - Vetting and managing risk from 3rd parties. - Continuously evolving and improving. TL;DR - you are in trouble if you can't: 1/ Decide 2/ Communicate 3/ Act on and about your AI governance goals. The Marine Corps gave me the tools and discipline to apply these in the face of change and uncertainty.
-
This new white paper "Introduction to AI assurance" by the UK Department for Science, Innovation, and Technology from Feb 12, 2024, provides an EXCELLENT overview of assurance methods and international technical standards that can be utilized to create and implement ethical AI systems. The new guidance is based on the UK AI governance framework, laid out in the 2023 white paper "A pro-innovation approach to AI regulation". This white paper defined 5 universal principles applicable across various sectors to guide and shape the responsible development and utilization of AI technologies throughout the economy: - Safety, Security, and Robustness - Appropriate Transparency and Explainability - Fairness - Accountability and Governance - Contestability and Redress The 2023 white paper also introduced a suite of tools designed to aid organizations in understanding "how" these outcomes can be achieved in practice, emphasizing tools for trustworthy AI, including assurance mechanisms and global technical standards. See: https://lnkd.in/gydvi9Tt The new publication, "Introduction to AI assurance," is a deep dive into these assurance mechanisms and standards. AI assurance encompasses a spectrum of techniques for evaluating AI systems throughout their lifecycle. These range from qualitative assessments for evaluating potential risks and societal impacts to quantitative assessments for measuring performance and legal compliance. Key techniques include: - Risk Assessment: Identifies potential risks like bias, privacy, misuse of technology, and reputational damage. - Impact Assessment: Anticipates broader effects on the environment, human rights, and data protection. - Bias Audit: Examines data and outcomes for unfair biases. - Compliance Audit: Reviews adherence to policies, regulations, and legal requirements. - Conformity Assessment: Verifies if a system meets required standards, often through performance testing. - Formal Verification: Uses mathematical methods to confirm if a system satisfies specific criteria. The white paper also explains how organizations in the UK can ensure their AI systems are responsibly governed, risk-assessed, and compliant with regulations: 1.) For demonstrating good internal governance processes around AI, a conformity assessment against standards like ISO/IEC 42001 (AI Management System) is recommended. 2.) To understand the potential risks of AI systems being acquired, an algorithmic impact assessment by a accredited conformity assessment body is advised. This involves (self) assessment against a proprietary framework or responsible AI toolkit. 3.) Ensuring AI systems adhere to existing data protection regulations involves a compliance audit by a third-party assurance provider. This white paper also has exceptional infographics! Pls, check it out, and TY Victoria Beckman for posting and providing us with great updates as always!
-
The Secure AI Lifecycle (SAIL) Framework is one of the actionable roadmaps for building trustworthy and secure AI systems. Key highlights include: • Mapping over 70 AI-specific risks across seven phases: Plan, Code, Build, Test, Deploy, Operate, Monitor • Introducing “Shift Up” security to protect AI abstraction layers like agents, prompts, and toolchains • Embedding AI threat modeling, governance alignment, and secure experimentation from day one • Addressing critical risks including prompt injection, model evasion, data poisoning, plugin misuse, and cross-domain prompt attacks • Integrating runtime guardrails, red teaming, sandboxing, and telemetry for continuous protection • Aligning with NIST AI RMF, ISO 42001, OWASP Top 10 for LLMs, and DASF v2.0 • Promoting cross-functional accountability across AppSec, MLOps, LLMOps, Legal, and GRC teams Who should take note: • Security architects deploying foundation models and AI-enhanced apps • MLOps and product teams working with agents, RAG pipelines, and autonomous workflows • CISOs aligning AI risk posture with compliance and regulatory needs • Policymakers and governance leaders setting enterprise-wide AI strategy Noteworthy aspects: • Built-in operational guidance with security embedded across the full AI lifecycle • Lifecycle-aware mitigations for risks like context evictions, prompt leaks, model theft, and abuse detection • Human-in-the-loop checkpoints, sandboxed execution, and audit trails for real-world assurance • Designed for both code and no-code AI platforms with complex dependency stacks Actionable step: Use the SAIL Framework to create a unified AI risk and security model with clear roles, security gates, and monitoring practices across teams. Consideration: Security in the AI era is more than a tech problem. It is an organizational imperative that demands shared responsibility, executive alignment, and continuous vigilance.
-
A New Path for Agile AI Governance To avoid the rigid pitfalls of past IT Enterprise Architecture governance, AI governance must be built for speed and business alignment. These principles create a framework that enables, rather than hinders, transformation: 1. Federated & Flexible Model: Replace central bottlenecks with a federated model. A small central team defines high-level principles, while business units handle implementation. This empowers teams closest to the data, ensuring both agility and accountability. 2. Embedded Governance: Integrate controls directly into the AI development lifecycle. This "governance-by-design" approach uses automated tools and clear guidelines for ethics and bias from the project's start, shifting from a final roadblock to a continuous process. 3. Risk-Based & Adaptive Approach: Tailor governance to the application's risk level. High-risk AI systems receive rigorous review, while low-risk applications are streamlined. This framework must be adaptive, evolving with new AI technologies and regulations. 4. Proactive Security Guardrails: Go beyond traditional security by implementing specific guardrails for unique AI vulnerabilities like model poisoning, data extraction attacks, and adversarial inputs. This involves securing the entire AI/ML pipeline—from data ingestion and training environments to deployment and continuous monitoring for anomalous behavior. 5. Collaborative Culture: Break down silos with cross-functional teams from legal, data science, engineering, and business units. AI ethics boards and continuous education foster shared ownership and responsible practices. 6. Focus on Business Value: Measure success by business outcomes, not just technical compliance. Demonstrating how good governance improves revenue, efficiency, and customer satisfaction is crucial for securing executive support. The Way Forward: Balancing Control & Innovation Effective AI governance balances robust control with rapid innovation. By learning from the past, enterprises can design a resilient framework with the right guardrails, empowering teams to harness AI's full potential and keep pace with business. How does your Enterprise handle AI governance?
-
In light of the recent discussions around the European Union's Artificial Intelligence Act (EUAI Act), it's critical for brands, especially those in the fashion industry, to understand the implications of AI usage in marketing and beyond. The EU AI Act categorizes AI risks into four levels: unacceptable, high, limited, and minimal risks. For brands employing AI for marketing content, this predominantly falls under limited risks. While not as critical as high or unacceptable risks, limited risks still necessitate a conscientious approach. Here’s what brands need to consider: Transparency: As the backbone of customer trust, transparency in AI-generated content is non-negotiable. Brands must clearly label AI-generated services or content to maintain an open dialogue with consumers. Understanding AI Tools: It's not enough to use AI tools; brands must deeply understand their mechanisms, limitations, and potential biases to ensure ethical use and compliance with the EUAI Act. Documentation and Frameworks: Implementing thorough documentation of AI workflows and frameworks is essential for demonstrating compliance and guiding internal teams on best practices. Actionable Tips for Compliance: Label AI-Generated Content: Ensure any AI-generated marketing material is clearly marked, helping customers distinguish between human and AI-created content. Educate Your Team: Conduct regular training sessions for your team on the ethical use of AI tools, focusing on understanding AI systems to avoid unintentional risks. Document Everything: Maintain detailed records of AI usage, decision-making processes, and the tools' roles in content creation. This will not only aid in compliance but also in refining your AI strategy. Engage in Dialogue with Consumers: Foster an environment where consumers can express their views on AI-generated content, using feedback to guide future practices. For brands keen on adopting AI responsibly in their marketing, it's important to focus on transparency and consumer trust. Ensure AI-generated content is clearly labeled, allowing consumers to distinguish between human and AI contributions. Invest in understanding AI's capabilities and limitations, ensuring content aligns with brand values and ethics. Regular training for your team on ethical AI use and clear documentation of AI's role in content creation processes are essential. These steps not only comply with regulations like the EU AI Act but also enhance brand integrity and consumer confidence. To learn more about more about EU AI act impact on brands check out https://lnkd.in/gTypRvmu
-
🚨 AI Governance Isn’t Optional Anymore — CISOs and Boards, Take Note As AI systems become core to business operations, regulators are catching up fast — and CISOs are now squarely in the spotlight. Whether you're facing the EU AI Act, U.S. Executive Orders, or the new ISO/IEC 42001, here’s what CISOs need to start doing today: ✅ Inventory all AI/ML systems – Know where AI is being used internally and by your vendors. ✅ Establish AI governance – Form a cross-functional team and own the AI risk management policy. ✅ Secure the ML pipeline – Protect training data, defend against poisoning, and monitor model drift. ✅ Ensure transparency & explainability – Especially for high-risk systems (e.g., hiring, finance, health). ✅ Update third-party risk assessments – Require AI-specific controls, model documentation, and data handling practices. ✅ Control GenAI & Shadow AI – Set usage policies, monitor access, and prevent unintentional data leaks. ✅ Stay ahead of regulations – Track the EU AI Act, NIST AI RMF, ISO 42001, and others. 🔐 AI is no longer just a data science topic — it’s a core risk domain under the CISO’s scope. The question is: Are you securing the models that are shaping your business decisions? #AICompliance #CISO #CyberSecurity #AIRegulations #EUAIAct #NIST #ISO42001 #MLOpsSecurity #Governance #ThirdPartyRisk #GenAI #AIAccountability #SecurityLeadership
-
Are you curious about how to create safe and effective artificial intelligence and machine learning (AI/ML) devices? Let's demystify the essential guiding principles outlined by the U.S. FDA, Health Canada | Santé Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) for Good Machine Learning Practice (GMLP). These principles aim to ensure the development of safe, effective, and high-quality medical devices. 1. Multi-Disciplinary Expertise Drives Success: Throughout the lifecycle of a product, it's crucial to integrate expertise from diverse fields. This ensures a deep understanding of how a model fits into clinical workflows, its benefits, and potential patient risks. 2. Prioritize Good Software Engineering and Security Practices: The foundation of model design lies in solid software engineering practices, coupled with robust data quality assurance, management, and cybersecurity measures. 3.Representative Data is Key: When collecting clinical study data, it's imperative to ensure it accurately represents the intended patient population. This means capturing relevant characteristics and ensuring an adequate sample size for meaningful insights. 4.Independence of Training and Test Data: To prevent bias, training and test datasets should be independent. While the FDA permits multiple uses of training data, it's crucial to justify each use to avoid inadvertently training on test data. 5. Utilize Best Available Reference Datasets: Developing reference datasets based on accepted methods ensures the collection of clinically relevant and well-characterized data, understanding their limitations. 6. Tailor Model Design to Data and Intended Use: Designing the model should align with available data and intended device usage. Human factors and interpretability should be prioritized, focusing on the performance of the Human-AI team. 7. Test Under Clinically Relevant Conditions: Rigorous testing plans should be in place to assess device performance under conditions reflecting real-world usage, independent of training data. 8. Provide Clear Information to Users: Users should have access to clear, relevant information tailored to their needs, including the product’s intended use, performance characteristics, data insights, limitations, and user interface interpretation. 9. Monitor Deployed Models for Performance: Deployed models should be continuously monitored in real-world scenarios to ensure safety and performance. Additionally, managing risks such as overfitting, bias, or dataset drift is crucial for sustained efficacy. These principles provide a robust framework for the development of AI/ML-driven medical devices, emphasizing safety, efficacy, and transparency. For further insights, dive into the full paper from FDA, MHRA, and Health Canada. #AI #MachineLearning #HealthTech #MedicalDevices #FDA #MHRA #HealthCanada
-
The UK Department for Science, Innovation and Technology published the guide "Introduction to AI assurance," to provide an overview of assurance mechanisms and global technical standards for industry and #regulators to build and deploy responsible #AISystems. #Artificialintelligence assurance processes can help to build confidence in #AI systems by measuring and evaluating reliable, standardized, and accessible evidence about their capabilities. It measures whether such systems will work as intended, hold limitations, or pose potential risks; as well as how those #risks are being mitigated to ensure that ethical considerations are built-in throughout the AI development #lifecycle. The guide outlines different AI assurance mechanisms, including: - Risk assessments - Algorithmic impact assessment - Bias and compliance audits - Conformity assessment - Formal verification It also provides some recommendations for organizations interested in developing their understanding of AI assurance: 1. Consider existing regulations relevant for AI systems (#privacylaws, employment laws, etc) 2. Develop necessary internal skills to understand AI assurance and anticipate future requirements. 3. Review internal governance and #riskmanagement practices and ensure effective decision-making at appropriate levels. 4. Keep abreast of sector-specific guidance on how to operationalize and implement proposed principles in each regulatory domain. 5. Consider engaging with global standards development organizations to ensure the development of robust and universally accepted standard protocols. https://lnkd.in/eiwRZRXz
Explore categories
- Hospitality & Tourism
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Employee Experience
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Career
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development