Explainable AI (XAI)
A quick guide to understanding how to interpret ML models
Mansour Saffar
ML Developer - AltaML Inc.
In the near future...
Image source: https://www.scoopnest.com/user/NewYorker/684741246443778048-a-cartoon-by-paul-noth-find-more-cartoons-from-this-week39s-issue-here
ML Model Categories
White Box Models
● Easy to interpret
● Sometimes cannot learn the patterns in the data well (low accuracy) due to their simplicity
Black Box Models
● Hard (or impossible) to interpret
● Often more powerful and effective than white box models
  ○ e.g. neural networks
Accuracy vs Interpretability
Image source: https://towardsdatascience.com/interpretability-vs-accuracy-the-friction-that-defines-deep-learning-dae16c84db5c
What is XAI?
Image source: https://github.com/LumousAI/Lantern
● Think of XAI as a Lantern inside your ML model
● XAI helps you explain the results of your ML model
● You will know why and how your ML model works when you use XAI!
XAI Symbol!
Note: Actually, I made this symbol up :)
Some XAI Use Cases
Image sources: https://blog.lifeextension.com/2019/01/machine-learning-and-medicine-is-ai.html
https://algorithmxlab.com/blog/the-big-problems-with-machine-learning-algorithms-in-finance/
https://www.forbes.com/sites/bernardmarr/2018/05/23/how-ai-and-machine-learning-are-transforming-law-firms-and-the-legal-sector/#56d5191a32c3
https://datafloq.com/read/machine-learning-drive-autonomous-vehicles/3152
https://www.technologyreview.com/f/612915/chinas-military-is-rushing-to-use-artificial-intelligence/
Medicine · Finance · Legal · Autonomous Cars · Military
Why do we need XAI?! (Medical Application)
Image sources: https://msaffarm.github.io/projects/ml-mri/Sahebzamani.Saffar.HSZ.Jan.2019.pdf
https://artemisinccouk.wordpress.com/author/torak289/
Epilepsy Detection Model with Brain MRI Data:
Brain MRI data → Complex ML model → Report: "Patient is diagnosed with epilepsy with 85% confidence."
But why?! Can I trust this prediction?
Why do we need XAI?! (Finance Application)
Image sources: https://artemisinccouk.wordpress.com/author/torak289/
https://www.finder.com/credit-report
Loan Model with Financial Records:
Financial and demographic data → Complex ML model → Report: "Customer is not eligible for the loan!"
But why?!
XAI to Help With Legal Implications
● General Data Protection Regulation (GDPR)
● GDPR requires companies to provide explanations of their ML models to their customers.
● Legal implications of a wrong diagnosis in medical applications can be severe! (Of course, the human loss matters more!)
● How can the doctor trust the ML model predictions? Can I trust the epilepsy model predictions?
● What if the customer asks why they were rejected?
Image sources: https://www.leaprate.com/financial-services/fines/occ-assesses-70-million-civil-money-penalty-citibank/
https://venturebeat.com/2018/01/27/gdpr-a-playbook-for-compliance/
XAI General Overview
Image source: Modified https://github.com/slundberg/shap
What Answers Does XAI Provide?
WHY
● Why did the model make that prediction?
HOW
● How can I correct an error?
WHEN
● When can I trust ML model predictions?
● When will the ML model fail to make the right prediction?
XAI Off vs XAI On!
Image source: Modified https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
What Problems of ML can XAI Address?
TRUST
● Understand the decision-making process
● Make sure the model is looking at the right features
BIAS & FAIRNESS
● Understand which features are taken into account
● Detect biased patterns in the data
EXPLAINABILITY
● Explain the decision-making process
XAI Categories
Global Explanation
● Explains the overall behaviour of the ML model
Local Explanation
● Explains the prediction result for each desired instance
Model-Agnostic
● Does not care what the model is or how it works!
Model-Dependent
● Interpretations are based on the model's learning process
XAI Example 1 (Global Explanation)
● Let's say you have a bike rental business and want to know how much the weather affects your business!
● Partial Dependence Plot (PDP) to the rescue
● Shows how a change in one attribute (feature) changes your prediction (see the sketch below)
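A minimal PDP sketch in Python (assuming scikit-learn ≥ 1.0; the bike-rental data below is a made-up stand-in, not the talk's actual dataset):

```python
# Minimal PDP sketch: how does the average prediction change as one
# feature (here, temperature) varies? Data is a synthetic stand-in.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))  # temperature, humidity, windspeed
y = 100 * X[:, 0] - 40 * X[:, 1] + rng.normal(0, 5, size=500)  # rentals

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Vary "temperature" over a grid and average predictions over the
# remaining features.
PartialDependenceDisplay.from_estimator(
    model, X, features=[0],
    feature_names=["temperature", "humidity", "windspeed"],
)
plt.show()
```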
XAI Example 2 (Global Explanation)
● Let's say you want to explain the behaviour of a HUGE random forest model
  ○ 10,000 trees!
● Surrogate model to the rescue
  ○ e.g. a shallow decision tree trained to mimic the forest's predictions (sketch below)
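A hedged sketch of the surrogate idea (toy data; the key point is that the tree is trained on the black box's predictions, not on the true labels):

```python
# Global surrogate sketch: a shallow decision tree mimics a big random
# forest, and the tree's rules serve as an approximate global explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the tree agrees with the forest it is explaining.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))
```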
XAI Example 3 (Local Explanation)
● Let's say you trained an image classification model!
● How does the model know where the Labrador is?
● How does the model know there is a guitar in the picture?
Image source: https://github.com/marcotcr/lime
XAI Example 3 (Local Explanation)
● Local Interpretable Model-agnostic Explanations (LIME)
● Explains a single prediction by fitting a simple, interpretable model locally around it (sketch below)
Image source: https://github.com/marcotcr/lime
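A minimal LIME sketch (assuming the `lime` package; tabular data is used here for brevity, and a similar `LimeImageExplainer` exists for images like the Labrador example):

```python
# Minimal LIME sketch: perturb one instance, fit a small linear model
# around it, and read off the locally important features.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction of the black box.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=2)
print(exp.as_list())  # e.g. [('petal width <= ...', 0.4), ...]
```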
Shapley Values
● Let's share a ride to AltaML! Solo fares:
  ○ Mansour (Ma) => $40
  ○ Mohammad (Mo) => $5
  ○ Mina (Mi) => $25
Image sources:
https://www.barrieyellowtaxi.com/
Shapley Values
Cost of each coalition sharing one taxi:
  Ma => $40
  Mo => $5
  Mi => $25
  Ma, Mi => $40
  Ma, Mo => $40
  Mi, Mo => $25
  Ma, Mi, Mo => $40
Marginal contribution of each rider, by joining order (columns: Ma, Mo, Mi):
  (Ma, Mi, Mo): 40   0   0
  (Ma, Mo, Mi): 40   0   0
  (Mi, Ma, Mo): 15   0  25
  (Mi, Mo, Ma): 15   0  25
  (Mo, Mi, Ma): 15   5  20
  (Mo, Ma, Mi): 35   5   0
  Average (Shapley value): ≈26.67  ≈1.67  ≈11.67
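The table above can be checked with a few lines of Python: enumerate all 3! = 6 joining orders and average each rider's marginal contribution (a minimal sketch; `v` encodes the coalition costs from the first table):

```python
# Worked check of the Shapley table: average each rider's marginal
# contribution over all joining orders. v maps coalitions to their cost.
from itertools import permutations

v = {
    frozenset(): 0,
    frozenset({"Ma"}): 40, frozenset({"Mo"}): 5, frozenset({"Mi"}): 25,
    frozenset({"Ma", "Mi"}): 40, frozenset({"Ma", "Mo"}): 40,
    frozenset({"Mi", "Mo"}): 25, frozenset({"Ma", "Mi", "Mo"}): 40,
}

players = ["Ma", "Mo", "Mi"]
shapley = dict.fromkeys(players, 0.0)

for order in permutations(players):
    coalition = set()
    for p in order:
        before = v[frozenset(coalition)]
        coalition.add(p)
        shapley[p] += v[frozenset(coalition)] - before  # marginal contribution

for p in players:
    shapley[p] /= 6  # 3! orderings

print(shapley)  # {'Ma': 26.67, 'Mo': 1.67, 'Mi': 11.67} (approximately)
```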
XAI Example 4 (Local Explanation)
● SHAP (SHapley Additive exPlanations)
● Its idea is based on Shapley values from game theory!
● Measures how much each feature contributes to the final prediction
● It's like the ride-sharing example, but the features are the passengers and the amount paid is the model prediction! (see the sketch below)
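A minimal SHAP sketch (assuming the `shap` package; the model and dataset are toy stand-ins):

```python
# Minimal SHAP sketch: Shapley values for features instead of taxi riders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # one value per feature per row

# Local view: contributions for one prediction; they sum to
# (prediction - expected value), just like splitting the taxi fare.
print(explainer.expected_value, shap_values[0])

# Global view: which features matter most across all predictions.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```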
XAI Toolsets and Libraries
● LIME
● SHAP
● Microsoft's InterpretML (see the sketch below)
● H2O's Driverless AI
● Google's What-If Tool
● tf-explain (TensorBoard)
● IBM Watson
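As a small taste of one of these, a minimal InterpretML sketch (assuming the `interpret` package): train a glass-box model and open its global explanation dashboard:

```python
# Minimal InterpretML sketch: a glass-box model that explains itself.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
ebm = ExplainableBoostingClassifier().fit(data.data, data.target)

# Opens an interactive dashboard of per-feature contributions.
show(ebm.explain_global())
```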
XAI Future
● Some models will be able to explain their results! (think of it as saying 'Analysis' to the robots in Westworld)
● More interpretable models that you can interact with and whose results you can modify (or improve)
● Debugging ML models will shift toward high-level (semantic-level) debugging, since you can tell which parts of the model are not functioning properly!
● You might be able to inject your own knowledge into the model, since it is interpretable and you know how it makes its decisions!
Acknowledgements
● AltaML team for their support and constructive feedback
Thank You!
Any Questions?
Image source: https://killerinnovations.com/what-is-the-innovation-behind-chatbots-and-why-is-it-important-s12-ep5/
References
● https://github.com/h2oai/mli-resources
● https://www.darpa.mil/program/explainable-artificial-intelligence
● https://christophm.github.io/interpretable-ml-book/
● https://github.com/marcotcr/lime
● https://github.com/slundberg/shap
● https://github.com/microsoft/interpret
