© 2019 KNIME AG. All Rights Reserved.
Building useful models for
imbalanced datasets (without
resampling)
Greg Landrum
(greg.landrum@knime.com)
COMP Together, UCSF
22 Aug 2019
First things first
• RDKit blog post with initial work:
http://rdkit.blogspot.com/2018/11/working-with-unbalanced-data-part-i.html
• The notebooks I used for this presentation are all on
GitHub:
– Original notebook: https://bit.ly/2UY2u2K
– Using the balanced random forest: https://bit.ly/2tuafSc
– Plotting: https://bit.ly/2GJSeHH
• I have a KNIME workflow that does the same thing; let
me know if you're interested
• Download links for the datasets are in the blog post
The problem
• Typical datasets for bioactivity prediction tend to
have way more inactives than actives
• This leads to a couple of pathologies:
– Overall accuracy is really not a good metric for how useful
a model is
– Many learning algorithms produce way too many false
negatives
Example dataset
• Assay CHEMBL1614421 (PUBCHEM_BIOASSAY: qHTS
for Inhibitors of Tau Fibril Formation, Thioflavin T
Binding. (Class of assay: confirmatory))
– https://www.ebi.ac.uk/chembl/assay_report_card/CHEMBL1614166/
– https://pubchem.ncbi.nlm.nih.gov/bioassay/1460
• 43345 inactives, 5602 actives (using the annotations
from PubChem)
Data Preparation
• Structures are taken from ChEMBL
– Already some standardization done
– Processed with RDKit
• Fingerprints: RDKit Morgan-2, 2048 bits
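The fingerprinting step above can be sketched with the RDKit; the molecule here is just an illustrative example, not one of the assay compounds:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# 2048-bit Morgan fingerprint with radius 2 ("Morgan-2"), as used
# for all of the models in this talk. Aspirin is a stand-in molecule.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
print(fp.GetNumBits(), fp.GetNumOnBits())
```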
Modeling
• Stratified 80-20 training/holdout split
• KNIME random forest classifier
– 500 trees
– Max depth 15
– Min node size 2
This is a first pass through the cycle; we will try
other fingerprints, learning algorithms, and
hyperparameters in future iterations
Results CHEMBL1614421: holdout data
Evaluation CHEMBL1614421: holdout data
AUROC=0.75
Taking stock
• Model has:
– Good overall accuracies (because of imbalance)
– Decent AUROC values
– Terrible Cohen kappas
Now what?
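A made-up toy example shows the pathology in numbers: with 95 inactives and 5 actives, a "model" that calls everything inactive still looks great on accuracy but has a kappa of zero.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Illustrative only: 95 inactives, 5 actives
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)  # predicts "inactive" for everything

acc = accuracy_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
print(f"accuracy={acc:.2f}  kappa={kappa:.2f}")  # accuracy=0.95  kappa=0.00
```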
Quick diversion on bag classifiers
When making predictions, each tree in the
classifier votes on the result; the majority wins.
The predicted class probabilities are typically the
means of the predicted probabilities from the
individual trees.
We construct the ROC curve by sorting the
predictions in decreasing order of predicted
probability of being active.
Note that the hard predictions are irrelevant for an ROC curve: as long
as true actives tend to have a higher predicted probability of being active
than true inactives, the AUC will be good.
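A tiny, made-up example of this ranking property: rescaling every predicted probability by a monotone transform changes every hard prediction at a 0.5 cutoff but leaves the AUROC untouched.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hand-made labels and predicted probabilities (illustrative only)
y_true = np.array([1, 1, 0, 1, 0, 0, 0, 0])
proba = np.array([0.9, 0.7, 0.6, 0.45, 0.4, 0.3, 0.2, 0.1])

auc = roc_auc_score(y_true, proba)
# Halving every score preserves the ranking, hence the AUC,
# even though no compound is now predicted "active" at 0.5
auc_scaled = roc_auc_score(y_true, proba / 2)
print(auc, auc_scaled)
```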
Handling imbalanced data
• The standard decision rule for a random forest (or
any bag classifier) is that the majority wins1, i.e. the
predicted probability of being active must be
>=0.5 in order for the model to predict "active"
• Shift that threshold to a lower value for models built
on highly imbalanced datasets2
1 This is only strictly true for binary classifiers
2 Chen, J. J., et al. “Decision Threshold Adjustment in Class Prediction.” SAR and
QSAR in Environmental Research 17 (2006): 337–52.
Picking a new decision threshold: approach 1
• Generate a random forest for the dataset using the
training set
• Generate out-of-bag predicted probabilities using
the training set
• Try a number of different decision thresholds1 and
pick the one that gives the best kappa
• Once we have the decision threshold, use it to
generate predictions for the test set.
1 Here we use: [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
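The steps above can be sketched as follows; synthetic data stands in for the fingerprint matrices, but the tuning logic is the same: the threshold is picked on out-of-bag probabilities, so the holdout set is never touched.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

# Synthetic imbalanced data (~90/10) in place of the real fingerprints
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=0)

clf = RandomForestClassifier(n_estimators=500, max_depth=15,
                             min_samples_leaf=2, n_jobs=4,
                             oob_score=True, random_state=0)
clf.fit(X, y)

# Out-of-bag predicted probability of being active, per sample
oob_proba = clf.oob_decision_function_[:, 1]

# Scan the candidate thresholds and keep the kappa-optimal one
thresholds = np.arange(0.05, 0.55, 0.05)
kappas = [cohen_kappa_score(y, (oob_proba >= t).astype(int))
          for t in thresholds]
best_threshold = thresholds[int(np.argmax(kappas))]
print(f"best OOB kappa {max(kappas):.3f} at threshold {best_threshold:.2f}")
```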
Results CHEMBL1614421
• Balanced confusion matrix
• Previously: 0.005
Nice! But does it work in general?
Validation experiment
The datasets (all extracted from ChEMBL_24)
• "Serotonin": 6 datasets with >900 Ki values for human
serotonin receptors
– Active: pKi > 9.0, Inactive: pKi < 8.5
– If that doesn't yield at least 50 actives: Active: pKi > 8.0, Inactive: pKi < 7.5
• "DS1": 80 "Dataset 1" sets.1
– Active: 100 diverse measured actives ("standard_value<10uM");
Inactive: 2000 random compounds from the same property space
• "PubChem": 8 HTS Validation assays with at least 3K
"Potency" values
– Active: "active" in dataset. Inactive: "inactive", "not active", or
"inconclusive" in dataset
• "DrugMatrix": 44 DrugMatrix assays with at least 40 actives
– Active: "active" in dataset. Inactive: "not active" in dataset
1 S. Riniker, N. Fechner, G. A. Landrum. "Heterogeneous classifier fusion for ligand-based virtual screening: or, how decision
making by committee can be a good thing." Journal of Chemical Information and Modeling 53:2829–36 (2013).
Model building and validation
• Fingerprints: 2048 bit MorganFP radius=2
• 80/20 training/test split
• Random forest parameters:
– cls = RandomForestClassifier(n_estimators=500, max_depth=15, min_samples_leaf=2, n_jobs=4, oob_score=True)
• Try threshold values of [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35,
0.40, 0.45, 0.50] with out-of-bag predictions and pick the best
based on kappa
• Generate initial kappa value for the test data using threshold
= 0.5
• Generate "balanced" kappa value for the test data with the
optimized threshold
Does it work in general?
ChEMBL data, random-split validation
Does it work in general?
Proprietary data, time-split validation
Picking a new decision threshold: approach 2
• Generate a random forest for the dataset using the
training set
• Generate out-of-bag predicted probabilities using
the training set
• Pick the threshold corresponding to the point on the
ROC curve that’s closest to the upper left corner
• Once we have the decision threshold, use it to
generate predictions for the test set.
Chen, J. J., et al. “Decision Threshold Adjustment in Class Prediction.” SAR and QSAR in
Environmental Research 17 (2006): 337–52.
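The "closest to the upper left corner" rule is easy to sketch; the labels and probabilities here are hand-made stand-ins for the out-of-bag predictions:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hand-made example; in the real workflow y and oob_proba
# come from the out-of-bag predictions on the training set
y = np.array([1, 1, 0, 1, 0, 0, 0])
oob_proba = np.array([0.8, 0.7, 0.5, 0.45, 0.3, 0.2, 0.1])

fpr, tpr, thresholds = roc_curve(y, oob_proba)
dist = np.hypot(fpr, 1.0 - tpr)  # Euclidean distance to the corner (0, 1)
best_threshold = thresholds[int(np.argmin(dist))]
print(f"best threshold: {best_threshold}")
```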
Does it work in general?
ChEMBL data, random-split validation
Does it work in general?
ChEMBL data, random-split validation
Other evaluation metrics: F1 score
ChEMBL data, random-split validation
Does it work in general?
Proprietary data, time-split validation
Compare to balanced random forests
• Resampling strategy that still uses the entire training
set
• Idea: train each tree on a balanced bootstrap
sample of the training data
Chen, C., Liaw, A. & Breiman, L. Using Random Forest to Learn Imbalanced Data.
https://statistics.berkeley.edu/tech-reports/666 (2004).
How do bag classifiers end up with different models?
Each tree is built
with a different
dataset
Balanced random forests
• Take advantage of the structure of the classifier.
• Learn each tree with a balanced dataset:
– Select a bootstrap sample of the minority class (actives)
– Randomly select, with replacement, the same number of
points from the majority class (inactives)
• Prediction works the same as with a normal random
forest
• Easy to do in scikit-learn using the imbalanced-learn
contrib package: https://imbalanced-learn.readthedocs.io/en/stable/ensemble.html#forest-of-randomized-trees
– cls = BalancedRandomForestClassifier(n_estimators=500, max_depth=15, min_samples_leaf=2, n_jobs=4, oob_score=True)
Chen, C., Liaw, A. & Breiman, L. Using Random Forest to Learn Imbalanced Data. https://statistics.berkeley.edu/tech-reports/666
(2004).
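The balanced bootstrap behind each tree can be sketched by hand with plain numpy and a single decision tree (imbalanced-learn's BalancedRandomForestClassifier automates exactly this across the whole forest); the data here is synthetic:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))       # stand-in for fingerprint bits
y = np.array([1] * 50 + [0] * 950)    # 50 actives, 950 inactives

active_idx = np.flatnonzero(y == 1)
inactive_idx = np.flatnonzero(y == 0)

# Bootstrap the minority class, then draw the same number of
# majority-class points, also with replacement
boot_active = rng.choice(active_idx, size=active_idx.size, replace=True)
boot_inactive = rng.choice(inactive_idx, size=active_idx.size, replace=True)
sample = np.concatenate([boot_active, boot_inactive])

# Each tree of the forest would be trained on its own such sample
tree = DecisionTreeClassifier(max_depth=15).fit(X[sample], y[sample])
```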
Comparing to resampling: balanced random forests
ChEMBL data, random-split validation
Comparing to resampling: balanced random forests
ChEMBL data, random-split validation
What comes next
• Try the same thing with other learning methods like
logistic regression and stochastic gradient boosting
– These are more complicated since they can't do out-of-bag classification
– We need to add another data split and loop to do
calibration and find the best threshold
• More datasets! I need *your* help with this
– I have a script for you to run that takes sets of compounds
with activity labels and outputs the summary statistics
that I'm using here
Acknowledgements
• Dean Abbott (Abbott Analytics)
• Daria Goldmann (KNIME)
• NIBR:
– Nik Stiefl
– Nadine Schneider
– Niko Fechner