Responsible Machine
Learning
Eng Teong Cheah
Microsoft MVP
What is Responsible AI?
Responsible Artificial Intelligence (Responsible AI) is an approach to developing,
assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are
the product of many decisions made by those who develop and deploy them. From
system purpose to how people interact with AI systems, Responsible AI can help
proactively guide these decisions toward more beneficial and equitable outcomes. That
means keeping people and their goals at the center of system design decisions and
respecting enduring values like fairness, reliability, and transparency.
What is Responsible AI?
Microsoft has developed a Responsible AI Standard. It's a framework for building AI
systems according to six principles: fairness, reliability and safety, privacy and security,
inclusiveness, transparency, and accountability. For Microsoft, these principles are the
cornerstone of a responsible and trustworthy approach to AI, especially as intelligent
technology becomes more prevalent in products and services that people use every day.
Fairness and inclusiveness
AI systems should treat everyone fairly and avoid affecting similarly situated groups of
people in different ways. For example, when AI systems provide guidance on medical
treatment, loan applications, or employment, they should make the same
recommendations to everyone who has similar symptoms, financial circumstances, or
professional qualifications.
Fairness and inclusiveness in Azure Machine Learning: The fairness
assessment component of the Responsible AI dashboard enables data scientists and
developers to assess model fairness across sensitive groups defined in terms of gender,
ethnicity, age, and other characteristics.
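The core idea behind such a fairness assessment is computing a model metric disaggregated by sensitive group and comparing across groups. The sketch below illustrates that idea in plain Python with made-up data; it is not the dashboard's implementation (which builds on richer tooling such as Fairlearn).

```python
# Hypothetical toy data: model predictions vs. ground truth, with a
# sensitive attribute per record. Groups "A"/"B" are invented for the example.
records = [
    # (sensitive_group, y_true, y_pred)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 1),
]

def accuracy_by_group(rows):
    """Accuracy per sensitive group, plus the gap between best and worst."""
    totals, correct = {}, {}
    for group, y_true, y_pred in rows:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    acc = {g: correct[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

acc, gap = accuracy_by_group(records)
print(acc)   # per-group accuracy
print(gap)   # large gaps flag disparate performance across groups
```

A large accuracy gap between groups is exactly the kind of disparity the fairness assessment component is designed to surface.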
Reliability and safety
To build trust, it's critical that AI systems operate reliably, safely, and consistently. These
systems should be able to operate as they were originally designed, respond safely to
unanticipated conditions, and resist harmful manipulation. How they behave and the
variety of conditions they can handle reflect the range of situations and circumstances
that developers anticipated during design and testing.
Reliability and safety
Reliability and safety in Azure Machine Learning: The error analysis component of
the Responsible AI dashboard enables data scientists and developers to:
• Get a deep understanding of how failure is distributed for a model.
• Identify cohorts (subsets) of data with a higher error rate than the overall benchmark.
These discrepancies might occur when the system or model underperforms for specific
demographic groups or for infrequently observed input conditions in the training data.
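The cohort comparison described above can be reduced to a simple calculation: measure a subset's error rate against the overall benchmark. This toy sketch (the feature name and values are invented) shows the idea, not the dashboard's actual algorithm.

```python
# Toy illustration of the cohort idea behind error analysis: compare the
# error rate of a data subset against the overall benchmark.
samples = [
    # (income_band, prediction_was_correct)
    ("low", False), ("low", False), ("low", True),
    ("high", True), ("high", True), ("high", True), ("high", False),
]

def error_rate(rows):
    """Fraction of rows where the model's prediction was wrong."""
    return sum(1 for _, ok in rows if not ok) / len(rows)

overall = error_rate(samples)
cohort = error_rate([r for r in samples if r[0] == "low"])
print(f"overall error: {overall:.2f}, 'low' cohort error: {cohort:.2f}")
# A cohort error rate well above the overall benchmark is the kind of
# discrepancy the error analysis component surfaces.
```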
Transparency
When AI systems help inform decisions that have tremendous impacts on people's lives,
it's critical that people understand how those decisions were made. For example, a bank
might use an AI system to decide whether a person is creditworthy. A company might use
an AI system to determine the most qualified candidates to hire.
A crucial part of transparency is interpretability: the useful explanation of the behavior of
AI systems and their components. Improving interpretability requires stakeholders to
comprehend how and why AI systems function the way they do. The stakeholders can
then identify potential performance issues, fairness issues, exclusionary practices, or
unintended outcomes.
Transparency
Transparency in Azure Machine Learning: The model interpretability and counterfactual
what-if components of the Responsible AI dashboard enable data scientists and
developers to generate human-understandable descriptions of the predictions of a
model.
Transparency
The model interpretability component provides multiple views into a model's behavior:
• Global explanations. For example, what features affect the overall behavior of a loan
allocation model?
• Local explanations. For example, why was a customer's loan application approved or
rejected?
• Model explanations for a selected cohort of data points. For example, what features
affect the overall behavior of a loan allocation model for low-income applicants?
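For a linear scoring model, the global/local distinction above can be sketched directly: global importance ranks features by overall influence, while a local explanation attributes one prediction to each feature's contribution. The weights and feature names below are invented for illustration; real explainers (e.g. SHAP-style methods behind the dashboard) generalize this to arbitrary models.

```python
# Hypothetical linear loan-scoring model: weight per feature.
weights = {"income": 0.6, "debt": -0.9, "years_employed": 0.3}

def global_importance(w):
    """Global view: rank features by overall influence (|weight|)."""
    return sorted(w, key=lambda f: abs(w[f]), reverse=True)

def local_explanation(w, applicant):
    """Local view: each feature's signed contribution to one prediction."""
    return {f: w[f] * applicant[f] for f in w}

applicant = {"income": 1.0, "debt": 2.0, "years_employed": 0.5}
print(global_importance(weights))            # ['debt', 'income', 'years_employed']
print(local_explanation(weights, applicant))
# Negative contributions (here, 'debt') push this application toward rejection.
```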
Transparency
The counterfactual what-if component enables understanding and debugging a machine
learning model in terms of how it reacts to feature changes and perturbations.
Azure Machine Learning also supports a Responsible AI scorecard. The scorecard is a
customizable PDF report that developers can easily configure, generate, download, and
share with their technical and non-technical stakeholders to educate them about their
datasets' and models' health, achieve compliance, and build trust. This scorecard can also
be used in audit reviews to uncover the characteristics of machine learning models.
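The essence of the counterfactual what-if idea is a search for a small feature change that flips the model's decision. The sketch below uses a made-up integer scoring rule and a naive search; the dashboard's component uses far more sophisticated counterfactual generation, so treat this only as an illustration of the concept.

```python
def approve(features):
    # Toy scoring rule (invented for the example): income must outweigh debt.
    return 3 * features["income"] - 5 * features["debt"] > 0

def counterfactual_income(features, max_extra=100):
    """Raise income one unit at a time until the decision flips (or give up)."""
    for extra in range(max_extra + 1):
        cf = dict(features, income=features["income"] + extra)
        if approve(cf):
            return cf
    return None

applicant = {"income": 1, "debt": 1}
print(approve(applicant))                # False: rejected as-is
print(counterfactual_income(applicant))  # smallest tested income that wins approval
```

Seeing the minimal change that would alter an outcome is what makes counterfactuals useful both for debugging a model and for explaining its decisions to affected people.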
Privacy and security
As AI becomes more prevalent, protecting privacy and securing personal and business
information are becoming more important and complex. With AI, privacy and data
security require close attention because access to data is essential for AI systems to make
accurate and informed predictions and decisions about people. AI systems must comply
with privacy laws that:
• Require transparency about the collection, use, and storage of data.
• Mandate that consumers have appropriate controls to choose how their data is used.
Privacy and security
Privacy and security in Azure Machine Learning: Azure Machine Learning enables
administrators and developers to create a secure configuration that complies with their
companies' policies. With Azure Machine Learning and the Azure platform, users can:
• Restrict access to resources and operations by user account or group.
• Restrict incoming and outgoing network communications.
• Encrypt data in transit and at rest.
• Scan for vulnerabilities.
• Apply and audit configuration policies.
Accountability
The people who design and deploy AI systems must be accountable for how their
systems operate. Organizations should draw upon industry standards to develop
accountability norms. These norms can ensure that AI systems aren't the final authority on
any decision that affects people's lives. They can also ensure that humans maintain
meaningful control over otherwise highly autonomous AI systems.
Accountability
Accountability in Azure Machine Learning: Machine learning operations (MLOps) is
based on DevOps principles and practices that increase the efficiency of AI workflows.
Azure Machine Learning provides the following MLOps capabilities for better
accountability of your AI systems:
• Register, package, and deploy models from anywhere. You can also track the
associated metadata that's required to use the model.
• Capture the governance data for the end-to-end machine learning lifecycle. The
logged lineage information can include who is publishing models, why changes were
made, and when models were deployed or used in production.
• Notify and alert on events in the machine learning lifecycle. Examples include
experiment completion, model registration, model deployment, and data drift
detection.
• Monitor applications for operational issues and issues related to machine learning.
Compare model inputs between training and inference, explore model-specific
metrics, and provide monitoring and alerts on your machine learning infrastructure.
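The governance-data capability above boils down to recording lineage metadata (who published a model, why, and when) at registration time. This toy in-memory registry illustrates that record-keeping; Azure ML's actual registry is a managed service with a different API, so nothing here reflects the real SDK.

```python
from datetime import datetime, timezone

class ModelRegistry:
    """Minimal in-memory sketch of a model registry with lineage metadata."""

    def __init__(self):
        self._entries = []

    def register(self, name, version, author, reason):
        # Capture who published the model, why, and when.
        entry = {
            "name": name,
            "version": version,
            "author": author,
            "reason": reason,
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        self._entries.append(entry)
        return entry

    def lineage(self, name):
        """All recorded versions of a model, oldest first."""
        return [e for e in self._entries if e["name"] == name]

registry = ModelRegistry()
registry.register("loan-model", 1, "alice", "initial release")
registry.register("loan-model", 2, "bob", "retrained after data drift alert")
for e in registry.lineage("loan-model"):
    print(e["version"], e["author"], e["reason"])
```

An audit trail like this is what lets reviewers answer "why did version 2 ship?" long after the fact.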
Explore Differential
Privacy
https://wordpress.com/post/ceteongvanness.wordpress.com/4417
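Independent of the linked post, the core mechanism of differential privacy can be sketched in a few lines: add calibrated Laplace noise to a query result so no single individual's data dominates the answer. The epsilon values below are illustrative only.

```python
import math
import random

def dp_count(true_count, epsilon, rng):
    """Release a count with Laplace noise; the sensitivity of a count is 1."""
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of the Laplace distribution.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(0)
# Smaller epsilon => stronger privacy guarantee => noisier answer.
print(dp_count(100, epsilon=1.0, rng=rng))
print(dp_count(100, epsilon=0.1, rng=rng))
```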
References
Microsoft Docs