WIFI SSID:Spark+AISummit | Password: UnifiedDataAnalytics
Luca Canali, CERN
Deep Learning Pipelines
for High Energy Physics
using Apache Spark with
Distributed Keras and
Analytics Zoo
#UnifiedDataAnalytics #SparkAISummit
About Luca
#UnifiedDataAnalytics #SparkAISummit
• Data Engineer at CERN
– Hadoop and Spark service, database services
– 19+ years of experience with data engineering
• Sharing and community
– Blog, notes, tools, contributions to Apache Spark
@LucaCanaliDB – http://cern.ch/canali
CERN: Particle Accelerators (LHC) and High Energy Physics Experiments (ATLAS, CMS, ALICE, LHCb)
Experimental High Energy Physics is Data Intensive
Particle Collisions → Large-Scale Computing → Physics Discoveries
https://twiki.cern.ch/twiki/pub/CMSPublic/Hig13002TWiki/HZZ4l_animated.gif
and https://iopscience.iop.org/article/10.1088/1742-6596/455/1/012027
Key Data Processing Challenge
• Proton-proton collisions at the LHC experiments happen at 40 MHz.
• They produce hundreds of TB/s of electrical signals that allow physicists to investigate particle collision events.
• Storage is limited by bandwidth:
• Currently, only about 1 in every ~40,000 events is stored to disk (~10 GB/s).
Current LHC (2018): 5 collisions per beam crossing
Future, High-Luminosity LHC upgrade (2026): 400 collisions per beam crossing
Data Flow at LHC Experiments
Collisions can generate up to a petabyte of raw data per second, reduced to GB/s by filtering in real time. The key is how to select potentially interesting events (trigger systems):
• 40 million collisions per second (Raw): PB/s
• 100,000 selections per second (L1): TB/s
• 1,000 selections per second (L2): GB/s
R&D – Data Pipelines
• Improve the quality of filtering systems
• Reduce false positive rate
• From rule-based algorithms to classifiers based on
Deep Learning
• Advanced analytics at the edge
• Avoid wasting resources in offline computing
• Reduction of operational costs
Particle Classifiers Using Neural Networks
Example classifier output for one event, over the three topology classes: W + j: 63%, QCD: 36%, t-t̅: 1%
• R&D to improve the quality of filtering systems
• Develop a “Deep Learning classifier” to be used by the filtering system
• Goal: Identify events of interest for physics and reduce false positives
• False positives have a cost: wasted storage bandwidth and computing
• “Topology classification with deep learning to improve real-time event selection at the LHC”, Nguyen et al., Comput. Softw. Big Sci. 3 (2019) no. 1, 12
Deep Learning Pipeline for Physics Data
1. Data Ingestion: read physics data and perform feature engineering
2. Feature Preparation: prepare the input for the Deep Learning network
3. Model Development: specify the model topology, then tune it on a small dataset
4. Training: train the best model
Technology: the pipeline uses Apache Spark + Analytics Zoo and TensorFlow/Keras. Code in Python notebooks.
Analytics Platform at CERN
Integrating new “Big Data” components with existing infrastructure:
• Software distribution
• Data platforms
The platform gives access to HEP software, experiments storage, HDFS and personal storage; notebooks combine text, code, monitoring and visualizations.
Hadoop and Spark Clusters at CERN
• Clusters: YARN/Hadoop and Spark on Kubernetes
• Hardware: Intel-based servers, continuous refresh and capacity expansion
• Accelerator logging (part of LHC infrastructure): Hadoop - YARN, 30 nodes (cores: 1200, memory: 13 TB, storage: 7.5 PB)
• General purpose: Hadoop - YARN, 65 nodes (cores: 2.2k, memory: 20 TB, storage: 12.5 PB)
• Cloud containers: Kubernetes on OpenStack VMs (cores: 250, memory: 2 TB)
• Storage: remote HDFS or EOS (for physics data)
Extending Spark to Read Physics Data
• Physics data
• Currently: >300 PB of physics data, increasing by ~90 PB/year
• Stored in the CERN EOS storage system in ROOT format, accessible via the XRootD protocol
• Integration with the Spark ecosystem
• Hadoop-XRootD connector, an HDFS-compatible filesystem
• Spark Datasource for the ROOT format
Architecture: Spark reads through the Hadoop HDFS API; the Hadoop-XRootD connector (Java) wraps the XRootD client (C++) via JNI to access the EOS storage service. A minimal read sketch follows after the links.
https://github.com/cerndb/hadoop-xrootd
https://github.com/diana-hep/spark-root
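To illustrate how these pieces fit together, here is a minimal PySpark read sketch, assuming the spark-root data source and the Hadoop-XRootD connector jars are on the classpath; the EOS path and package version are hypothetical.

```python
from pyspark.sql import SparkSession

# Assumes the spark-root and hadoop-xrootd jars are on the classpath, e.g.
# spark-submit --packages org.diana-hep:spark-root_2.11:0.1.16 ... (version illustrative)
spark = SparkSession.builder.appName("ReadPhysicsData").getOrCreate()

# root:// URIs are resolved by the Hadoop-XRootD connector (HDFS-compatible filesystem)
events = (spark.read
          .format("org.dianahep.sparkroot")
          .load("root://eospublic.cern.ch//eos/path/to/simulated_events/*.root"))

events.printSchema()  # arrays of simulated particles and their properties
```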
Labeled Data for Training and Test
● Simulated events
● Software simulators are used to generate events
and calculate the detector response
● Raw data contains arrays of simulated particles
and their properties, stored in ROOT format
● 54 million events
Step 1: Data Ingestion
• Read input files: 4.5 TB in the custom (ROOT) format
• Feature engineering
• Python and PySpark code, using Jupyter notebooks
• Write the output in Parquet format

Input:
• 54 M events, ~4.5 TB
• Physics data storage (EOS)
• Physics data format (ROOT)

Output:
• 25 M events
• 950 GB in Parquet format
• Target storage (HDFS)
Feature Engineering
• Filtering
• Multiple filters, keeping only events of interest
• Example: “events with one electron or muon with pT > 23 GeV” (see the sketch below)
• Prepare “Low Level Features”
• Every event is associated with a matrix of particles and features (801×19)
• High Level Features (HLF)
• An additional 14 features are computed from the low-level particle features
• Calculated based on domain-specific knowledge
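As a concrete illustration of the filtering step, a minimal sketch in PySpark, assuming a particles array column with pdgId and pt fields (hypothetical names):

```python
from pyspark.sql.functions import expr

# Keep only events with at least one electron (|pdgId| = 11) or muon (|pdgId| = 13)
# with pT > 23 GeV; uses a Spark SQL higher-order function (Spark 2.4+)
filtered = events.where(expr(
    "size(filter(particles, p -> abs(p.pdgId) IN (11, 13) AND p.pt > 23)) > 0"
))
```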
Step 2: Feature Preparation
Features are converted to formats suitable for training:
• One-Hot Encoding of categories
• MinMax scaler for the High Level Features
• Sorting of Low Level Features: prepare the input for the sequence classifier, using a metric based on physics. This uses a Python UDF.
• Undersampling: use the same number of events for each of the three categories
(A sketch of these steps follows after the result summary.)

Result:
• 3.6 million events, 317 GB
• Shuffled and split into training and test datasets
• Code: in a Jupyter notebook using PySpark with Spark SQL and ML
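A minimal sketch of the encoding and scaling steps with Spark ML (Spark 3.x API); the column names and sampling fractions are hypothetical.

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, VectorAssembler, MinMaxScaler

# 14 High Level Feature columns (hypothetical names) and an integer class label
hlf_cols = ["hlf_{}".format(i) for i in range(14)]

assembler = VectorAssembler(inputCols=hlf_cols, outputCol="hlf_vec")
scaler = MinMaxScaler(inputCol="hlf_vec", outputCol="hlf_scaled")
encoder = OneHotEncoder(inputCols=["label"], outputCols=["label_ohe"], dropLast=False)

pipeline = Pipeline(stages=[assembler, scaler, encoder])
prepared = pipeline.fit(features_df).transform(features_df)

# Undersampling: keep a comparable number of events per class, then split
balanced = prepared.sampleBy("label", fractions={0: 0.3, 1: 1.0, 2: 0.5}, seed=42)
train_df, test_df = balanced.randomSplit([0.8, 0.2], seed=42)
```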
Performance and Lessons Learned
• Data preparation is CPU bound
• Heavy serialization-deserialization due to Python UDFs
• Ran using 400 cores: data ingestion took ~3 hours
• It can be optimized, but is it worth it?
• Use Spark SQL or Scala instead of Python UDFs
• Optimization: replacing parts of the Python UDF code with Spark SQL and higher-order functions cut the run time from 3 hours to 2 hours (see the sketch below)
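A hedged sketch of the kind of rewrite involved: per-particle computations expressed with Spark SQL higher-order functions instead of a Python UDF. The column and field names, and the sorting metric, are illustrative only.

```python
from pyspark.sql.functions import expr

# Derived per-particle quantity computed entirely inside the SQL engine (Spark 2.4+),
# avoiding per-row Python serialization/deserialization
events2 = events.withColumn(
    "particle_pt",
    expr("transform(particles, p -> sqrt(p.px * p.px + p.py * p.py))"))

# With Spark 3.0+ the sorting metric can also stay in SQL via a comparator lambda
events3 = events2.withColumn(
    "particles_sorted",
    expr("array_sort(particles, (a, b) -> cast(sign(b.pt - a.pt) as int))"))
```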
Neural Network Models
1. Fully connected feed-forward deep neural network
• Trained using High Level Features (~1 GB of data)
2. Neural network based on a Gated Recurrent Unit (GRU)
• Trained using Low Level Features (~300 GB of data)
3. Inclusive classifier model
• Combination of (1) + (2)
The models are listed in order of increasing complexity and classifier performance.
Hyper-Parameter Tuning – DNN
• Hyper-parameter tuning of the DNN model
• Trained with a subset of the data (cached in memory)
• Parallelized with Spark, using spark_sklearn.grid_search
• Combined with scikit-learn + Keras via tensorflow.keras.wrappers.scikit_learn (see the sketch below)
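A hedged sketch of this setup; the parameter grid, layer sizes and data variables (X_small, y_small) are illustrative, not the values used in the study.

```python
from spark_sklearn import GridSearchCV
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

def build_dnn(hidden1=50, hidden2=20):
    # Feed-forward classifier on the 14 High Level Features, 3 output classes
    model = Sequential([
        Dense(hidden1, activation="relu", input_shape=(14,)),
        Dense(hidden2, activation="relu"),
        Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

clf = KerasClassifier(build_fn=build_dnn, epochs=10, batch_size=128, verbose=0)
param_grid = {"hidden1": [20, 50, 100], "hidden2": [10, 20, 50]}

# spark_sklearn distributes the scikit-learn grid search over the Spark cluster;
# X_small / y_small hold the subset of the data cached in memory
search = GridSearchCV(sc, clf, param_grid=param_grid, cv=3)
search.fit(X_small, y_small)
print(search.best_params_)
```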
Deep Learning at Scale with Spark
• Investigations and constraints for our exercise
• How to run deep learning in a Spark data pipeline?
• Neural network models written using Keras API
• Deploy on Hadoop and/or Kubernetes clusters (CPU clusters)
• Distributed deep learning
• GRU-based model is complex
• Slow to train on a single commodity (CPU) server
Spark, Analytics Zoo and BigDL
• Apache Spark
• Leading tool and API for data processing at scale
• Analytics Zoo is a platform for unified analytics
and AI
• Runs on Apache Spark, leveraging BigDL / TensorFlow
• For service developers: integration with infrastructure
(hardware, data access, operations)
• For users: Keras APIs to run user models, integration
with Spark data structures and pipelines
• BigDL is an open source distributed deep learning
framework for Apache Spark
BigDL Runs as Standard Spark Programs
Architecture: the BigDL program runs as a deep learning app on the Spark driver; standard Spark jobs distribute the work to the Spark executors (JVM), where each Spark task runs the BigDL library workers, backed by Intel MKL.
Standard Spark jobs
• No changes to the Spark or Hadoop clusters needed
Iterative
• Each iteration of the training runs as a Spark job
Data parallel
• Each Spark task runs the same model on a subset of the data (batch)
Source: Intel BigDL Team
BigDL Parameter Synchronization
Source: https://github.com/intel-analytics/BigDL/blob/master/docs/docs/whitepaper.md
Model Development – DNN for HLF
• The model is instantiated using the Keras-compatible API provided by Analytics Zoo (a sketch follows below)
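A minimal sketch of what this looks like with the Analytics Zoo Keras-style API; the layer sizes are illustrative, not necessarily those of the model used in the study.

```python
from zoo.pipeline.api.keras.models import Sequential
from zoo.pipeline.api.keras.layers import Dense, Dropout

# Feed-forward classifier on the 14 High Level Features, 3 output classes
model = Sequential()
model.add(Dense(50, input_shape=(14,), activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(20, activation="relu"))
model.add(Dense(3, activation="softmax"))
```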
Model Development – GRU + HLF
A more complex network topology, combining a GRU over the Low Level Features with a DNN over the High Level Features (see the sketch below).
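A sketch of this topology written with the standard tf.keras functional API (the Analytics Zoo Keras-compatible API follows the same style); layer sizes and the masking of zero-padded particle lists are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Low Level Features: a (801 x 19) matrix of particles per event
llf_in = layers.Input(shape=(801, 19), name="low_level_features")
x = layers.Masking(mask_value=0.0)(llf_in)   # assume zero-padded particle lists
x = layers.GRU(50, activation="tanh")(x)

# High Level Features: 14 engineered quantities per event
hlf_in = layers.Input(shape=(14,), name="high_level_features")

# Inclusive classifier: concatenate the GRU output with the HLF branch
merged = layers.Concatenate()([x, hlf_in])
merged = layers.Dense(25, activation="relu")(merged)
out = layers.Dense(3, activation="softmax")(merged)

model = Model(inputs=[llf_in, hlf_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```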
Distributed Training
• Instantiate the estimator using Analytics Zoo / BigDL
• The actual training is distributed to the Spark executors
• Store the model for later use
A sketch of these steps follows below.
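A hedged sketch of the estimator-based training step with Analytics Zoo's NNEstimator; the loss, column names and hyper-parameters are assumptions, and the exact API calls in the original notebooks may differ.

```python
from zoo.common.nncontext import init_nncontext
from zoo.pipeline.nnframes import NNEstimator
from zoo.pipeline.api.keras.objectives import CategoricalCrossEntropy
from bigdl.optim.optimizer import Adam

sc = init_nncontext("Distributed training of the HLF classifier")

# model: the Analytics Zoo Keras-style model defined earlier
# train_df: Spark DataFrame with a 14-float feature column and a one-hot label column
estimator = (NNEstimator(model, CategoricalCrossEntropy(), [14], [3])
             .setOptimMethod(Adam())
             .setBatchSize(4096)
             .setMaxEpoch(12)
             .setFeaturesCol("hlf_scaled")
             .setLabelCol("label_ohe"))

# fit() runs the distributed training as Spark jobs on the executors
nn_model = estimator.fit(train_df)

# the fitted NNModel is a Spark ML Transformer that can be reused for inference
predictions = nn_model.transform(test_df)
```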
Performance and Scalability of Analytics Zoo/BigDL
Analytics Zoo/BigDL on Spark scales up in the ranges tested.
[Throughput scaling plots: inclusive classifier model and DNN model with HLF features]
Workload Characterization
• Training with Analytics Zoo
• GRU-based model: distributed training on the YARN cluster
• Measured with the Spark Dashboard: it is CPU bound
Results – Model Performance
• Trained the models with Analytics Zoo and BigDL
• Met the expected results for model performance: ROC curve and AUC
Training with TensorFlow 2.0
• Training and test data
• Converted from Parquet to TFRecord format using Spark
• TensorFlow: data ingestion using tf.data and tf.io
• Distributed training with tf.distribute + a tool for Kubernetes: https://github.com/cerndb/tf-spawner
• Distributed training with TensorFlow 2.0 on Kubernetes (CERN cloud)
• Distributed training of the Keras model with tf.distribute.experimental.MultiWorkerMirroredStrategy (see the sketch below)
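A minimal sketch of this setup. The TFRecord feature names, file paths and model layout are assumptions (the Parquet conversion can be done with the spark-tensorflow-connector "tfrecords" data source), and TF_CONFIG with the worker addresses is expected to be set on each worker, e.g. by tf-spawner.

```python
import tensorflow as tf

# Multi-worker data-parallel training; worker topology is read from TF_CONFIG
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

def parse_example(serialized):
    # Hypothetical TFRecord schema: 14 HLF floats + a one-hot label with 3 classes
    features = {
        "HLF_input": tf.io.FixedLenFeature([14], tf.float32),
        "encoded_label": tf.io.FixedLenFeature([3], tf.float32),
    }
    parsed = tf.io.parse_single_example(serialized, features)
    return parsed["HLF_input"], parsed["encoded_label"]

files = tf.data.Dataset.list_files("train/part-r-*")
dataset = (tf.data.TFRecordDataset(files)
           .map(parse_example, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .shuffle(100000)
           .batch(128)
           .prefetch(tf.data.experimental.AUTOTUNE))

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(50, activation="relu", input_shape=(14,)),
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

model.fit(dataset, epochs=6)
```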
Performance and Lessons Learned
• Measured distributed training elapsed time: from a few hours to 11 hours, depending on the model, the number of epochs and the batch size. Hard to compare different methods and solutions (many parameters)
• Distributed training with BigDL and Analytics Zoo
• Integrates very well with Spark
• Need to cache data in memory
• Noisy clusters with stragglers can add latency to parameter synchronization
• TensorFlow 2.0
• It is straightforward to distribute training on CPUs and GPUs with tf.distribute
• Data flow: Use TFRecord format, read with TensorFlow’s tf.data and tf.io
• GRU training performance on GPU: 10x speedup in TF 2.0
• Training of the Inclusive Classifier on a single P100 in 5 hours
Recap: our Deep Learning Pipeline with Spark
• Data and models from research. Input: labeled data and DL models
• Feature engineering at scale
• Hyperparameter optimization (random/grid search)
• Distributed model training
• Output: particle selector model
Model Serving and Future Work
• Using Apache Kafka and Spark? Streaming platform → MODEL → output pipeline: to storage / further online analysis (a hedged sketch follows below)
• FPGA serving of DNN models: MODEL → RTL translation → FPGA → output pipeline
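For the Kafka + Spark option, a hedged sketch of what serving could look like with Spark Structured Streaming; this is future work, and the topic names, message schema and scoring step are hypothetical.

```python
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, ArrayType, FloatType

# Hypothetical message format: JSON carrying the 14 High Level Features per event
schema = StructType([StructField("hlf", ArrayType(FloatType()))])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "collision-events")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("event")))

# Score each micro-batch with the trained model, e.g. an Analytics Zoo NNModel
# transform() or a pandas UDF wrapping tf.keras inference (placeholder here)
scored = events  # placeholder for: nn_model.transform(events)

query = (scored.selectExpr("to_json(struct(*)) AS value")
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("topic", "selected-events")
         .option("checkpointLocation", "/tmp/serving-checkpoint")
         .start())
```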
Summary
• The use case developed addresses the needs for higher
efficiency in event filtering at LHC experiments
• Spark, Python notebooks
• Provide well-known APIs and productive environment for data preparation
• Data preparation performance, lessons learned:
• Use Spark SQL/DataFrame API, avoid Python UDF when possible
• Successfully scaled Deep Learning on Spark clusters
• Using Analytics Zoo and BigDL
• Deployed on existing Intel Xeon-based servers: Hadoop clusters and cloud
• Good results also with TensorFlow 2.0, running on Kubernetes
• Continuous evolution and improvements of DL at scale
• Data preparation and scalable distributed training are key
Acknowledgments
• Matteo Migliorini, Marco Zanetti, Riccardo Castellotti, Michał Bień, Viktor
Khristenko, CERN Spark and Hadoop service, CERN openlab
• Authors of “Topology classification with deep learning to improve real-time
event selection at the LHC”, notably Thong Nguyen, Maurizio Pierini
• Intel team for BigDL and Analytics Zoo: Jiao (Jennie) Wang, Sajan Govindan
– Analytics Zoo: https://github.com/intel-analytics/analytics-zoo
– BigDL: https://software.intel.com/bigdl
References:
– Data and code: https://github.com/cerndb/SparkDLTrigger
– Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics
http://arxiv.org/abs/1909.10389
#UnifiedDataAnalytics #SparkAISummit
DON’T FORGET TO RATE
AND REVIEW THE SESSIONS
SEARCH SPARK + AI SUMMIT