End-to-End Platform Support for
Distributed Deep Learning in Finance
Jim Dowling
CEO, Logical Clocks AB
Assoc Prof, KTH Stockholm
Senior Researcher, RISE SICS
jim_dowling
Deep Learning in Finance
•Financial modelling problems are typically complex and
non-linear.
•If you’re lucky, you have lots of labelled data
-Deep learning models can learn non-linear relationships and
recurrent structures that generalize beyond the training data.
•Potential areas in finance: pricing, portfolio construction,
risk management and HFT*
2/33
* https://towardsdatascience.com/deep-learning-in-finance-9e088cb17c03
More Data means Better Predictions
[Chart: Prediction Performance vs. Amount of Labelled Data. With little labelled data, hand-crafted Traditional ML can outperform; as labelled data grows, Deep Neural Nets pull ahead. Timeline: 1980s, 1990s, 2000s, 2010s, 2020s?]
3/33
Do we need more Compute?
“Methods that scale with computation
are the future of AI”*
- Rich Sutton (A Founding Father of Reinforcement Learning)
* https://www.youtube.com/watch?v=EeMCEQa85tw
4/33
Reduce DNN Training Time
In 2017, Facebook reduced ImageNet training time for a CNN from 2 weeks to 1 hour by scaling out to 256 GPUs using Ring-AllReduce on Caffe2.
https://arxiv.org/abs/1706.02677
5/33
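The key technique in that paper is the linear scaling rule: when the global minibatch grows by a factor k, scale the learning rate by k, and ramp up to that rate gradually over the first few epochs. A minimal sketch of that schedule (the constants here are illustrative assumptions, not values from this talk):

# Sketch of the linear scaling rule with gradual warmup from the paper above
# (arXiv:1706.02677). All constant values below are illustrative assumptions.
def scaled_learning_rate(epoch, base_lr=0.1, base_batch=256,
                         global_batch=8192, warmup_epochs=5):
    """Scale the LR linearly with the global batch size, warming up
    from base_lr to the target LR over the first warmup_epochs."""
    target_lr = base_lr * (global_batch / base_batch)   # linear scaling rule
    if epoch < warmup_epochs:
        # gradual warmup: interpolate between base_lr and target_lr
        return base_lr + (target_lr - base_lr) * (epoch + 1) / warmup_epochs
    return target_lr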
Reduce Experiment Time with Parallel Experiments
• Hyper-parameter optimization is parallelizable
• Neural Architecture Search (Google)
- 450 GPUs / 7 days
- 900 TPUs / 5 days
- New SOTA on CIFAR-10 (2.13% top-1 error)
- New SOTA on ImageNet (3.8% top-5 error)
https://arxiv.org/abs/1802.01548
6/33
Training Time and ML Practitioner Productivity
•Distributed Deep Learning
-Interactive analysis!
-Instant gratification!
“My Model’s Training.”
7/33
More Compute should mean Faster Training
[Chart: Training Performance vs. Available Compute. Single-host training plateaus, while distributed training keeps scaling. Timeline: 2015, 2016, 2017, 2018?]
8/33
Distributed Training: Theory and Practice
9/33
Image from @hardmaru on Twitter.
Distributed Training Algorithms not all Equal
[Chart: Training Performance vs. Available Compute. AllReduce scales better than Parameter Servers.]
10/33
Ring-AllReduce vs Parameter Server
[Diagram, left: Ring-AllReduce. GPUs 0-3 are arranged in a ring; each GPU sends gradient chunks to the next GPU and receives from the previous one. Diagram, right: Parameter Server. GPUs 1-4 all send gradients to, and receive updated parameters from, central Param Server(s).]
Network Bandwidth is the Bottleneck for Distributed Training
11/33
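To make the ring concrete, here is a toy single-process NumPy simulation of ring-allreduce (a reduce-scatter phase followed by an all-gather). It is an illustration of the algorithm only, not the NCCL/Horovod implementation; its point is that in every step each worker sends only one 1/N-sized chunk to its ring neighbour, so no single link or server becomes a hotspot.

import numpy as np

def ring_allreduce(worker_grads):
    """Toy simulation of ring-allreduce over a list of equal-length gradient
    vectors, one per worker. Returns the summed gradient that every worker
    ends up with. Real implementations do this with point-to-point sends
    between GPUs/hosts; here the 'sends' are just array assignments."""
    n = len(worker_grads)
    # Each worker splits its gradient into n chunks.
    state = [np.array_split(g.astype(float), n) for g in worker_grads]

    # Reduce-scatter: after n-1 steps, worker i owns the fully summed
    # chunk (i + 1) % n.
    for step in range(n - 1):
        for i in range(n):
            chunk = (i - step) % n          # chunk that worker i sends
            recver = (i + 1) % n            # its ring neighbour
            state[recver][chunk] = state[recver][chunk] + state[i][chunk]

    # All-gather: circulate the summed chunks so every worker has all of them.
    for step in range(n - 1):
        for i in range(n):
            chunk = (i + 1 - step) % n
            recver = (i + 1) % n
            state[recver][chunk] = state[i][chunk].copy()

    return [np.concatenate(chunks) for chunks in state]

# Usage: 4 workers, each with its own gradient vector.
grads = [np.arange(6.0) * (w + 1) for w in range(4)]
reduced = ring_allreduce(grads)
assert all(np.allclose(r, sum(grads)) for r in reduced)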
AllReduce outperforms Parameter Servers
12/33
*https://github.com/uber/horovod
16 servers with 4 P100 GPUs each (64 GPUs in total), connected by a RoCE-capable 25 Gbit/s network
(synthetic data). Speed shown is images processed per second.*
For Bigger Models, Parameter Servers don’t scale
Infiniband for Training to overcome the Network Bottleneck
[Diagram: gradients are aggregated over RDMA/Infiniband, while reading input files and writing model checkpoints goes to the network file system over a separate I/O path.]
Separate gradient sharing/aggregation network traffic from I/O traffic.
13/33
Horovod on Hops
import horovod.tensorflow as hvd

def conv_model(feature, target, mode):
    …..

def main(_):
    hvd.init()
    opt = hvd.DistributedOptimizer(opt)
    if hvd.local_rank()==0:
        hooks = [hvd.BroadcastGlobalVariablesHook(0), ..]
        …..
    else:
        hooks = [hvd.BroadcastGlobalVariablesHook(0), ..]
        …..

from hops import allreduce
allreduce.launch(spark, 'hdfs:///Projects/…/all_reduce.ipynb')
“Pure” TensorFlow code
14/33
Parallel Experiments
Parallel Experiments on Hops
def model_fn(learning_rate, dropout):
    import tensorflow as tf
    from hops import tensorboard, hdfs, devices
    [TensorFlow Code here]

from hops import experiment
args_dict = {'learning_rate': [0.001, 0.005, 0.01],
             'dropout': [0.5, 0.6]}
experiment.launch(spark, model_fn, args_dict)
Launch TF jobs in Spark Executors
17/33
Launches 6 Spark Executors, each with a different hyperparameter
combination. Each Executor can have 1-N GPUs.
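The 6 comes from the cartesian product of the two value lists (3 learning rates x 2 dropout rates). A minimal sketch of that expansion, for illustration only; the actual scheduling onto executors is handled by the hops experiment module:

from itertools import product

args_dict = {'learning_rate': [0.001, 0.005, 0.01],
             'dropout': [0.5, 0.6]}

# One dict per experiment run: 3 x 2 = 6 combinations, each of which
# would be handed to model_fn in its own Spark Executor.
combinations = [dict(zip(args_dict, values))
                for values in product(*args_dict.values())]
print(len(combinations))   # 6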
Parallel Experiments Visualization on TensorBoard
18/33
Parallel Experiment Results Visualization
Lots of good GPUs > A few great GPUs
100 x Nvidia 1080Ti (DeepLearning11)
VS
8 x Nvidia V100 (DGX-1)
Both setups (100 GPUs vs 8 GPUs) cost roughly the same: $150K (March 2018).
19/33
Share GPUs to Maximize Utilization
GPU Resource Management (Hops, Mesos)
- 4 GPUs on any host
- 10 GPUs on 1 host
- 100 GPUs on 10 hosts with ‘Infiniband’
- 20 GPUs on 2 hosts with ‘Infiniband_P100’
20/33
DeepLearning11 Server $15K (10 x 1080Ti)
21/33
Economics of GPUs and the Cloud
[Chart: GPU Utilization over Time, comparing an On-Premise GPU with the Cloud.]
DeepLearning11 (10x1080Tis) will pay for itself in 11 weeks,
compared to using a p3.8xlarge in AWS
22/33
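A back-of-the-envelope version of that payback calculation (the cloud price and utilization below are assumptions for illustration, not figures from the talk; the payback period moves with both):

# Hypothetical payback-period calculation for an on-premise GPU box vs
# renting in the cloud. All values except the server cost are assumptions.
server_cost_usd = 15_000           # DeepLearning11-class box (from the slide)
cloud_rate_usd_per_hour = 12.0     # assumed on-demand price for a multi-GPU instance
utilization = 0.65                 # assumed fraction of hours the GPUs are busy

cloud_cost_per_week = cloud_rate_usd_per_hour * 24 * 7 * utilization
payback_weeks = server_cost_usd / cloud_cost_per_week
print(f"Payback in about {payback_weeks:.1f} weeks")   # ~11 weeks with these assumptions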
Distributed Deep Learning for Finance
•Platform for Hyperscale Data Science
•Controlled* access to datasets
*GDPR-compliance, Sarbanes-Oxley, etc
23/33
Hopsworks
Hops: Next Generation Hadoop*
16x Faster (Throughput)
37x Bigger (Number of files)
*https://www.usenix.org/conference/fast17/technical-sessions/presentation/niazi
Scale Challenge Winner (2017)
GPUs in YARN
25/33
Hopsworks Data Platform
Develop, Train, Test, Serve
[Architecture diagram: Hopsworks exposes a REST API over Jupyter, Zeppelin, Jobs, Kibana and Grafana, running on Spark, Flink and TensorFlow on top of HopsFS / YARN, with MySQL Cluster, Hive, InfluxDB, ElasticSearch and Kafka as backing services; everything is organized into Projects, Datasets and Users.]
26/33
A Project is a Grouping of Users and Data
[Diagram: projects such as Proj-42, Proj-X and Proj-AllCompanyDB sandbox private data (e.g. /Projs/My/Data), while Kafka topics and datasets can be shared between projects.]
Ismail et al, Hopsworks: Improving User Experience and Development on Hadoop with Scalable, Strongly Consistent Metadata, ICDCS 2017
27/33
How are Projects used?
[Diagram: the Engineering project produces an FX Data Stream into a Kafka Topic; that topic is shared with the FX Project (FX Topic, FX DB), where the FX team runs shared interactive analytics.]
28/33
Per-Project Python Envs with Conda
Python libraries are usable by Spark/Tensorflow
29/33
TensorFlow in Hopsworks
[Architecture diagram: Experiments, TensorBoard and TensorFlow Serving alongside Kafka, Hive and a FeatureStore, all running on HopsFS and YARN, in the public cloud or on-premise.]
30/33
One Click Deployment of TensorFlow Models
31/33
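Once a model is deployed behind TensorFlow Serving, it can be queried over the standard TensorFlow Serving REST API. A minimal sketch; the host, port, model name and input values are placeholders for illustration, not Hopsworks-specific values:

import requests

# Hypothetical endpoint for a model served by TensorFlow Serving
# (8501 is TF Serving's default REST port; the rest is a placeholder).
url = "http://tf-serving-host:8501/v1/models/fx_model:predict"
payload = {"instances": [[0.12, 0.43, 0.88, 0.05]]}   # one input row

response = requests.post(url, json=payload)
print(response.json())   # {"predictions": [...]}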
Hops API
•Python/Java/Scala library
-Manage tensorboard, Load/save models in HDFS
-Horovod, TensorFlowOnSpark
-Parameter sweeps for parallel experiments
-Neural Architecture Search with Genetic Algorithms
-Secure Streaming Analytics with Kafka/Spark/Flink
• SSL/TLS certs, Avro Schema, Endpoints for Kafka/Hopsworks/etc
32/33
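As a rough illustration of the genetic-algorithm idea behind the Neural Architecture Search bullet above (a toy sketch, not the Hops implementation): candidate configurations are scored, the best survive, and new candidates are produced by mutating the survivors.

import random

# Toy genetic search over a tiny hyperparameter space. This only illustrates
# the evolutionary loop; it is not the Hops NAS API.
SPACE = {'learning_rate': [0.001, 0.005, 0.01, 0.05],
         'num_layers': [2, 4, 8],
         'dropout': [0.3, 0.5, 0.7]}

def random_candidate():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(cand):
    child = dict(cand)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])   # resample one "gene"
    return child

def evolve(fitness_fn, population=8, generations=5, survivors=3):
    pop = [random_candidate() for _ in range(population)]
    for _ in range(generations):
        best = sorted(pop, key=fitness_fn, reverse=True)[:survivors]
        # refill the population by mutating the survivors
        pop = best + [mutate(random.choice(best))
                      for _ in range(population - survivors)]
    return max(pop, key=fitness_fn)

# In practice fitness_fn would train and evaluate a model (in parallel on
# Spark Executors); a stand-in function keeps the sketch runnable here.
best = evolve(lambda c: -abs(c['learning_rate'] - 0.005) - c['dropout'])
print(best)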
Deep Learning Hierarchy of Scale
Single GPU
Many GPUs on a Single GPU Server
Parallel Experiments on GPU Servers
DDL with GPU Servers and Parameter Servers
DDL AllReduce on GPU Servers
33/33
Summary
•Distribution can make Deep Learning practitioners more
productive.
https://www.oreilly.com/ideas/distributed-tensorflow
•Hopsworks is a new Data Platform built on HopsFS with
first-class support for Python / Deep Learning / ML /
Strong Data Governance
The Team
Active:
Jim Dowling, Seif Haridi, Tor Björn Minde, Gautier Berthou, Salman Niazi, Mahmoud Ismail, Theofilos Kakantousis, Ermias Gebremeskel, Antonios Kouzoupis, Alex Ormenisan, Fabio Buso, Robin Andersson, August Bonds, Filotas Siskos, Mahmoud Hamed.
Alumni:
Vasileios Giannokostas, Johan Svedlund Nordström, Rizvi Hasan, Paul Mälzer, Bram
Leenders, Juan Roca, Misganu Dessalegn, K “Sri” Srijeyanthan, Jude D’Souza, Alberto
Lorente, Andre Moré, Ali Gholami, Davis Jaunzems, Stig Viaene, Hooman Peiro,
Evangelos Savvidis, Steffen Grohsschmiedt, Qi Qi, Gayana Chandrasekara, Nikolaos
Stanogias, Daniel Bali, Ioannis Kerkinos, Peter Buechler, Pushparaj Motamari, Hamid
Afzali, Wasif Malik, Lalith Suresh, Mariano Valles, Ying Lieu, Fanti Machmount Al
Samisti, Braulio Grana, Adam Alpire, Zahin Azher Rashid, ArunaKumari Yedurupaka,
Tobias Johansson, Roberto Bampi.
www.hops.io
@hopshadoop
