What If?
Demystifying AI Decisions with
Counterfactuals
David Martens
admantwerp.github.io
Advanced AI in our lives
Black Box?
• Deep learning: large artificial neural networks with a massive number of parameters
o MobileNetV2: 4 million parameters
o GPT-4: parameter count undisclosed; estimated in the trillions
• Non-linear models
Complex AI models are creeping into our lives
Whose decisions are extremely difficult to explain
But explanations are needed if we want to trust AI
Explain individual predictions
Counterfactual explanation:
What needs to change in your data, to reach a different decision?
Explain individual predictions
Example: credit scoring using sociodemo and financial data
User xi: Sam, with income $32,000 and age 39.
Sam is denied credit
WHY?
Counterfactual explanation:
What needs to change in your data, to reach a different decision?
Explain individual predictions
Example: credit scoring using sociodemo and financial data
User xi: Sam
Sam is denied credit
WHY?
Dieter Brughmans, Pieter Leyman, David Martens (2023) NICE: an algorithm for nearest instance
counterfactual explanations. Data Mining and Knowledge Discovery, p. 1-39.
IF Sam would make $8,000 more
THEN his predicted class would change from denied to granted
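The credit example above can be reproduced with a tiny brute-force search: starting from the instance, increase one feature until the model's decision flips. The scoring rule below is an invented toy stand-in for illustration, not the model from the NICE paper; it is tuned so that Sam's counterfactual is an $8,000 income increase.

```python
def credit_model(income, age):
    """Toy stand-in for a trained credit classifier (not the paper's model)."""
    return "granted" if income + 200 * age >= 47_800 else "denied"

def income_counterfactual(income, age, step=100, max_extra=50_000):
    """Smallest income increase that flips the model's decision, if any."""
    original = credit_model(income, age)
    extra = 0
    while extra <= max_extra:
        if credit_model(income + extra, age) != original:
            return extra
        extra += step
    return None  # no counterfactual found within the search budget

# Sam: income $32,000, age 39 -> denied; the search finds the $8,000 gap.
print(income_counterfactual(32_000, 39))  # 8000
```

Real counterfactual algorithms such as NICE search over many features at once and anchor on nearby training instances, but the core idea is the same: find a minimal change that crosses the decision boundary.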
Explain individual predictions
Counterfactual Explanation
LIME: Local Interpretable Model-agnostic
Explanations (k=10)
Example: gender prediction using movie viewing data
User xi: Sam
Sam watched 120 movies
Sam is predicted as male
IF Sam would not have watched
{Taxi Driver, The Dark Knight, Die Hard,
Terminator 2, Now You See Me, Interstellar},
THEN his predicted class would change from male to female
WHY?
Martens D, Provost F. (2014) Explaining Data-Driven Document
Classifications. MIS Quarterly 38(1):73-99.
admantwerp.github.io
M.T. Ribeiro, S. Singh, C. Guestrin (2016) Model-Agnostic
Interpretability of Machine Learning
2016 ICML Workshop on Human Interpretability in Machine
Learning (WHI 2016), New York, NY
github.com/marcotcr/lime
Ramon Y., Martens D., Provost F., Evgeniou T. (2019) Instance-level explanation algorithms SEDC, LIME, SHAP for behavioral and
textual data: a counterfactual-oriented comparison, Advances in Data Analysis and Classification 14:4, p. 801-819.
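A removal-based counterfactual in the spirit of SEDC can be sketched without any library: score an instance with a linear model over binary "watched" features, then greedily remove the strongest evidence toward the predicted class until it flips. The weights and movie list below are made up for illustration and are not from the cited papers.

```python
def predict(watched, weights, bias=0.0):
    """Linear score over binary features; positive score -> 'male'."""
    score = bias + sum(weights[m] for m in watched)
    return "male" if score > 0 else "female"

def sedc_counterfactual(watched, weights, bias=0.0):
    """Greedily remove the features pushing hardest toward the
    original class until the prediction flips (SEDC-style sketch)."""
    original = predict(watched, weights, bias)
    remaining = set(watched)
    removed = []
    for movie in sorted(watched, key=lambda m: -weights[m]):
        remaining.discard(movie)
        removed.append(movie)
        if predict(remaining, weights, bias) != original:
            return removed
    return None  # removal alone cannot flip this instance

# Invented weights: positive values are evidence toward 'male'.
weights = {"Taxi Driver": 0.9, "The Dark Knight": 0.8, "Die Hard": 0.7,
           "Notting Hill": -0.5, "Amélie": -0.6}
watched = ["Taxi Driver", "The Dark Knight", "Die Hard",
           "Notting Hill", "Amélie"]
print(sedc_counterfactual(watched, weights))
# ['Taxi Driver', 'The Dark Knight']
```

For a nonlinear black box the inner `predict` would call the model instead of summing weights; the greedy removal loop stays the same.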
Complex AI models are creeping into our lives
Whose decisions are extremely difficult to explain
But explanations are needed if we want to trust AI
Counterfactuals explain a decision of a model for an instance
Counterfactual generating algorithms
• Yet more algorithms to explain complex algorithms
• Ill-defined, with many proposed quality measures: sparse/short, near, diverse,
actionable, feasible, plausible, justified, fast, model-agnostic,
data-agnostic, truthful, complete, stable, robust, …
Karimi et al. (2021) A survey of algorithmic recourse: contrastive
explanations and consequential recommendations.
https://arxiv.org/pdf/2010.04050.pdf
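Two of the desiderata above are easy to make concrete. Sparsity counts how many features the counterfactual changes; proximity measures how far it moved the instance. A minimal sketch, using the income/age vectors from the earlier credit example:

```python
def sparsity(x, cf):
    """Number of features the counterfactual changes (lower = sparser)."""
    return sum(1 for a, b in zip(x, cf) if a != b)

def proximity(x, cf):
    """L1 distance between instance and counterfactual (lower = nearer)."""
    return sum(abs(a - b) for a, b in zip(x, cf))

x  = [32_000, 39]   # Sam: income, age
cf = [40_000, 39]   # counterfactual: income raised by $8,000
print(sparsity(x, cf), proximity(x, cf))  # 1 8000
```

The harder desiderata (actionability, plausibility, justification) have no single agreed-upon formula, which is exactly why the survey above calls the problem ill-defined.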
Complex AI models are creeping into our lives
Whose decisions are extremely difficult to explain
But explanations are needed if we want to trust AI
Counterfactuals explain a decision of a model for an instance
What these should look like is ill-defined and open research
The Counterfactual Explains Wrong Decisions
• To improve the predictive performance of the model
• Example
o Data: image
o Task: predict if missile in image
Tom Vermeire, Dieter Brughmans, Sofie Goethals, Raphael Mazzine de Oliveira, David Martens (2022)
Explainable Image Classification with Evidence Counterfactual. Pattern Analysis and Applications 25, 315-335.
The Counterfactual Explains Wrong Decisions
• To improve the predictive performance of the model
• Example
o Data: image
o Task: predict if missile in image
o Mainly interested in improving misclassifications
o Issue: lighthouse wrongly classified as missile
o Pattern learnt: line of smoke indicates missile
Complex AI models are creeping into our lives
Whose decisions are extremely difficult to explain
But explanations are needed if we want to trust AI
Counterfactuals explain a decision of a model for an instance
What these should look like is ill-defined and open research
Very useful to explain (incorrect or unfair) decisions
Imagine a world with CF explanations
Well, if your wife would also have had a 20+ year
relationship with our bank, and would have been
regarded as Premium customer at some point in time,
she would also receive a 20x credit limit
Well, if your wife’s relationship status would have
been “husband” instead of “wife”, she would also
receive a 20x credit limit
We clearly messed up, we’re updating our models
now.
Ah, ok, thanks for the additional feedback!
Glad you found this and reacted responsibly.
It’s how big tech should act in the 21st century.
Q&A
David Martens (2022)
Data Science Ethics: Concepts, Techniques and Cautionary Tales
Oxford University Press, 272 pages.
• Play around with CF explanations: admantwerp.github.io
• Book on Data Science Ethics: www.dsethics.com
Editor's Notes
  • #3: I’ll be speaking about how the counterfactual (CF) concept can be used to explain the decisions made by very complex AI models. The irony is that, in order to explain such complicated models, it seems we’re just adding more complex algorithms. In this presentation, I try to answer why we do this, and whether we actually should be doing this.