Explainable AI (XAI)
A quick guide to understanding how to interpret ML models
Mansour Saffar
ML Developer - AltaML Inc.
In the near future...
Image source:
https://www.scoopnest.com/user/NewYorker/684741246443778048-a-cartoon-by-paul-noth-find-more-cartoons-from-
this-week39s-issue-here
ML Model Categories
White Box Models
● Easy to interpret
● May fail to capture complex patterns in the
data (low accuracy) due to their simplicity
Black Box Models
● Hard (or impossible) to interpret
● Usually more powerful and effective
than white box models
○ e.g. neural networks
Accuracy vs Interpretability
Image source: https://towardsdatascience.com/interpretability-vs-accuracy-the-friction-that-defines-deep-learning-
dae16c84db5c
What is XAI?
Image source: https://github.com/LumousAI/Lantern
● Think of XAI as a Lantern inside your ML model
● XAI helps you explain the results of your ML
model
● With XAI, you will know why and how your
ML model works!
XAI Symbol!
Note: Actually, I made this symbol up :)
Some XAI Use Cases
Image source: https://blog.lifeextension.com/2019/01/machine-learning-and-medicine-is-ai.html
https://algorithmxlab.com/blog/the-big-problems-with-machine-learning-algorithms-in-finance/
https://www.forbes.com/sites/bernardmarr/2018/05/23/how-ai-and-machine-learning-are-transforming-law-firms-and-the-legal-
sector/#56d5191a32c3
https://datafloq.com/read/machine-learning-drive-autonomous-vehicles/3152
https://www.technologyreview.com/f/612915/chinas-military-is-rushing-to-use-artificial-intelligence/
Medicine Finance Legal
Autonomous Cars Military
Why do we need XAI?! (Medical Application)
Image sources: https://msaffarm.github.io/projects/ml-
mri/Sahebzamani.Saffar.HSZ.Jan.2019.pdf
https://artemisinccouk.wordpress.com/author/torak289/
Brain MRI data Complex ML model
Report:
Patient is
diagnosed
with Epilepsy
with 85%
confidence.
But why?!
Can I trust this
prediction?
Epilepsy Detection Model with Brain MRI Data
Why do we need XAI?! (Finance Application)
Image sources:
https://artemisinccouk.wordpress.com/author/torak289/
https://www.finder.com/credit-report
Financial and Demographic Data Complex ML model
Report:
Customer is
not eligible
for the loan!
Loan Model with Financial Records
But why?!
XAI to Help With Legal Implications
Can I trust the
epilepsy model
predictions?
● General Data Protection
Regulation (GDPR)
● GDPR requires companies to
provide explanations of their
ML models' decisions to their
customers.
● Legal implications of a wrong
diagnosis in medical
applications can be severe
(and the human cost is far
worse!)
● How can the doctor trust the
ML model predictions?
What if the
customer asks
why they were
rejected?
Image sources:
https://www.leaprate.com/financial-services/fines/occ-assesses-70-million-civil-money-penalty-
citibank/
https://venturebeat.com/2018/01/27/gdpr-a-playbook-for-compliance/
XAI General Overview
Image source: Modified https://github.com/slundberg/shap
XAI
What Answers Does XAI Provide?
● Why did the model make
that prediction?
WHY
● How can I correct an
error?
HOW
● When can I trust ML
model predictions?
● When will the ML
model fail to make the
right prediction?
WHEN
XAI Off vs XAI On!
Image source: Modified https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
What Problems of ML can XAI Address?
● Understand decision
making process
● Make sure the model is
looking at the right
features
TRUST
● Understand what
features are taken into
account
● Detect biased patterns
in data
BIAS & FAIRNESS
● Explain the decision
making process
EXPLAINABILITY
XAI Categories
Global Explanation
● Explain the overall behaviour of the
ML model
Model-Agnostic
● Does not care what the model is or how
it works!
Local Explanation
● Explain the prediction results for each
desired instance
Model-Dependent
● Interpretations are based on the
model’s learning process
XAI Example 1 (Global Explanation)
● Let’s say you have a bike
renting business and want to
know how much weather
affects your business!
● Partial Dependence Plot (PDP)
to the rescue
● Shows how a change in one attribute
(feature) changes your model's
prediction
XAI Example 2 (Global Explanation)
● Let’s say you want to explain
the behaviour of a HUGE
random forest model
○ 10000 trees!
● Surrogate Model
○ Decision tree
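A minimal sketch of the surrogate idea, on synthetic data: train the black box, then fit a small decision tree to the black box's *predictions* (not the original labels) and measure how faithfully it mimics them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple tree agrees with the forest.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

If fidelity is high, you can read the small tree's splits as an approximate global explanation of the forest.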
XAI Example 3 (Local Explanation)
● Let’s say you trained an image classification model!
● How does the model know where the labrador is?
● How does the model know there is a guitar in the picture?
Image sources:
https://github.com/marcotcr/lime
XAI Example 3 (Local Explanation)
● Local Interpretable Model-agnostic
Explanations (LIME)
Image sources:
https://github.com/marcotcr/lime
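A simplified, self-contained sketch of the LIME idea for tabular data (the real `lime` package does this with more care; the data, kernel width, and function names here are illustrative): perturb an instance, weight the perturbations by proximity, and fit a weighted linear model whose coefficients serve as the local explanation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] - 2 * X[:, 1]  # feature 2 is irrelevant
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, kernel_width=1.0):
    """Fit a proximity-weighted linear model around x; its coefficients
    are the local feature importances."""
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    preds = model.predict(perturbed)
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    local = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return local.coef_

coefs = explain_locally(black_box, np.zeros(3))
print(coefs)  # should roughly recover the local slope (~3, -2, 0)
```

For images (the labrador/guitar example), LIME does the same thing over superpixels: it turns patches on and off, queries the model, and fits a local linear model over patch indicators.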
Shapley Values
● Let’s share a ride to AltaML
○ Mansour (Ma) => $40
○ Mohammad (Mo) => $5
○ Mina (Mi) => $25
Image sources:
https://www.barrieyellowtaxi.com/
Shapley Values
Coalition costs:
Ma → $40 · Mo → $5 · Mi → $25
{Ma, Mi} → $40 · {Ma, Mo} → $40 · {Mi, Mo} → $25
{Ma, Mi, Mo} → $40

Marginal contributions per arrival order (columns: Ma | Mo | Mi):
(Ma, Mi, Mo): 40 | 0 | 0
(Ma, Mo, Mi): 40 | 0 | 0
(Mi, Ma, Mo): 15 | 0 | 25
(Mi, Mo, Ma): 15 | 0 | 25
(Mo, Mi, Ma): 15 | 5 | 20
(Mo, Ma, Mi): 35 | 5 | 0
Average (Shapley value): 26.66 | 1.66 | 11.66
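The ride-sharing numbers above can be reproduced directly: a rider's Shapley value is their average marginal contribution to the fare over all possible arrival orders.

```python
from itertools import permutations

# Cost of serving each set of riders (from the slide).
cost = {
    frozenset(): 0,
    frozenset({"Ma"}): 40,
    frozenset({"Mo"}): 5,
    frozenset({"Mi"}): 25,
    frozenset({"Ma", "Mi"}): 40,
    frozenset({"Ma", "Mo"}): 40,
    frozenset({"Mi", "Mo"}): 25,
    frozenset({"Ma", "Mi", "Mo"}): 40,
}

riders = ["Ma", "Mo", "Mi"]
shapley = {r: 0.0 for r in riders}
orders = list(permutations(riders))
for order in orders:
    on_board = set()
    for rider in order:
        before = cost[frozenset(on_board)]
        on_board.add(rider)
        # Marginal contribution: extra cost caused by this rider joining.
        shapley[rider] += cost[frozenset(on_board)] - before
shapley = {r: total / len(orders) for r, total in shapley.items()}

print({r: round(v, 2) for r, v in shapley.items()})
# -> {'Ma': 26.67, 'Mo': 1.67, 'Mi': 11.67}
```

Note the efficiency property: the three Shapley values sum exactly to the $40 total fare.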
XAI Example 4 (Local Explanation)
● SHAP (SHapley Additive exPlanations)
● Its idea is based on the Shapley values
from game theory!
● How much each feature contributes to
the final prediction
● It is like the ride-sharing example, but
the features are the passengers and
the model prediction is the fare paid!
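A brute-force sketch of SHAP's idea (the `shap` package computes this far more efficiently; the model, data, and helper names here are illustrative): treat features as "riders", and define a coalition's value as the model's prediction when the absent features are replaced by a background average.

```python
import numpy as np
from itertools import permutations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 4 * X[:, 0] + 2 * X[:, 1]  # feature 2 contributes nothing
model = LinearRegression().fit(X, y)
background = X.mean(axis=0)

def coalition_value(x, present):
    """Predict with absent features set to their background mean."""
    z = background.copy()
    z[list(present)] = x[list(present)]
    return model.predict(z.reshape(1, -1))[0]

def shapley_attributions(x):
    """Average marginal contribution of each feature over all orders."""
    n = x.size
    phi = np.zeros(n)
    orders = list(permutations(range(n)))
    for order in orders:
        present = []
        for f in order:
            before = coalition_value(x, present)
            present.append(f)
            phi[f] += coalition_value(x, present) - before
    return phi / len(orders)

x = np.array([1.0, 1.0, 1.0])
phi = shapley_attributions(x)
print(phi)  # the attributions sum to prediction(x) - prediction(background)
```

The irrelevant third feature gets an attribution of (near) zero, and the attributions sum exactly to the gap between this prediction and the average prediction, which is what makes SHAP values additive explanations.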
XAI Toolsets and Libraries
● LIME
● SHAP
● Microsoft’s InterpretML
● H2O’s Driverless AI
● Google’s What-If tool
● tf-explain (TensorBoard)
● IBM Watson
XAI Future
● Some models will be able to explain their results! (think of it as saying ‘Analysis’ to robots in
Westworld)
● More interpretable models which you can interact with and modify (or improve) their results
● Debugging ML models will shift more into high-level (semantic level) debugging since you can tell
which parts of the model are not functioning properly!
● You might be able to inject your knowledge into the model since it is interpretable and you know
how it makes the decision!
Acknowledgements
● AltaML team for their support and constructive feedback
Thank You!
Any Questions?
Image source: https://killerinnovations.com/what-is-the-innovation-behind-chatbots-and-why-is-it-important-s12-ep5/
References
● https://github.com/h2oai/mli-resources
● https://www.darpa.mil/program/explainable-artificial-intelligence
● https://christophm.github.io/interpretable-ml-book/
● https://github.com/marcotcr/lime
● https://github.com/slundberg/shap
● https://github.com/microsoft/interpret

An Introduction to XAI! Towards Trusting Your ML Models!


Editor's Notes

  • #9: Mention legal
  • #11: Mention the pneumonia case!