Scaling Machine Learning To
Billions of Parameters
Badri Bhaskar, Erik Ordentlich
(joint with Andy Feng, Lee Yang, Peter Cnudde)
Yahoo, Inc.
Outline
• Large scale machine learning (ML)
• Spark + Parameter Server
– Architecture
– Implementation
• Examples:
- Distributed L-BFGS (Batch)
- Distributed Word2vec (Sequential)
• Spark + Parameter Server on Hadoop Cluster
LARGE SCALE ML
Web Scale ML
[Figure: model parameter matrix with billions of features across and hundreds of billions of examples down; model partitioned across Stores, examples processed by Workers]
• Big Model: billions of features
• Big Data: hundreds of billions of examples
• Ex: Yahoo word2vec - 120 billion parameters and 500 billion samples
• Each example depends only on a tiny fraction of the model
Two Optimization Strategies
BATCH: multiple epochs over the examples…
• Example: gradient descent, L-BFGS
• Small number of model updates
• Accurate
• Each epoch may be expensive.
• Easy to parallelize.
SEQUENTIAL: multiple random samples…
• Example: (minibatch) stochastic gradient method, perceptron
• Requires lots of model updates.
• Not as accurate, but often good enough.
• A lot of progress in one pass* for big data.
• Not trivial to parallelize.
*also optimal in terms of generalization error (often with a lot of tuning)
The contrast is sketched in code below.
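A minimal, self-contained Scala sketch of the two update styles, using logistic regression on dense features purely for illustration (all names here are hypothetical, not the deck's actual code). Batch aggregates the gradient over every example before a single model update; sequential updates the model after each example:

  object UpdateStyles {
    type Example = (Array[Double], Double) // (features, label)

    private def predict(m: Array[Double], x: Array[Double]): Double =
      1.0 / (1.0 + math.exp(-m.zip(x).map { case (a, b) => a * b }.sum))

    // BATCH: one epoch touches ALL examples, then updates the model once.
    def batchEpoch(model: Array[Double], data: Seq[Example], lr: Double): Unit = {
      val grad = new Array[Double](model.length)
      for ((x, y) <- data) {
        val err = predict(model, x) - y
        for (i <- x.indices) grad(i) += err * x(i)
      }
      for (i <- model.indices) model(i) -= lr * grad(i) / data.size
    }

    // SEQUENTIAL: the model is updated after every example (minibatch of 1).
    def sgdPass(model: Array[Double], data: Seq[Example], lr: Double): Unit =
      for ((x, y) <- data) {
        val err = predict(model, x) - y
        for (i <- x.indices) model(i) -= lr * err * x(i)
      }
  }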
Requirements
✓ Support both batch and sequential optimization.
✓ Sequential training: handle frequent updates to the model.
✓ Batch training: 100+ passes, so each pass must be fast.
Parameter Server (PS)
[Figure: multiple clients, each with local data, exchanging model updates (ΔM) and model reads (M) with a set of PS shards]
• Training state stored in PS shards; asynchronous updates.
• Early work: Yahoo LDA by Smola and Narayanamurthy, based on memcached (2010).
• Introduced in Google's DistBelief (2012); refined in Petuum/Bösen (2013) and by Mu Li et al. (2014).
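A sketch of the client-side interaction pattern the figure implies; the trait and method names are assumptions for illustration, not the actual Yahoo PS API. Each client pulls only the model coordinates it needs, computes a local update, and pushes ΔM back without a global barrier:

  // Hypothetical PS client; shard routing and batching omitted.
  trait PSClient {
    def pull(keys: Seq[Long]): Map[Long, Double] // fetch model slice M
    def push(delta: Map[Long, Double]): Unit     // asynchronous update ΔM
  }

  def computeUpdate(m: Map[Long, Double], keys: Seq[Long]): Map[Long, Double] =
    keys.map(k => k -> 0.0).toMap // placeholder for a real gradient computation

  def trainLoop(ps: PSClient, shardData: Iterator[Seq[Long]]): Unit =
    for (exampleKeys <- shardData) {
      val m = ps.pull(exampleKeys)              // only the needed coordinates
      ps.push(computeUpdate(m, exampleKeys))    // async: no global barrier
    }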
SPARK + PARAMETER SERVER
ML in Spark alone
[Figure: driver holds the model; MLlib optimization runs across executor cores]
• Sequential:
– Driver-based communication limits the frequency of model updates.
– A large minibatch size limits model update frequency, and convergence suffers.
• Batch:
– Driver bandwidth can be a bottleneck.
– Synchronous stage-wise processing limits throughput.
The PS architecture circumvents both limitations…
Spark + Parameter Server
• Leverage Spark for HDFS I/O, distributed processing, fine-grained load balancing, failure recovery, and in-memory operations.
• Use the PS to sync models and apply incremental updates during training, or sometimes even to do some vector math.
[Figure: driver controls both the Spark executors and the PS shards; training state stored in PS shards; data read from HDFS]
Yahoo PS
Server (accessed through a Client API):
• Preallocated arrays, minimizing GC pressure
• K-V store: in-memory; lock per key / lock-free; sync / async
• Column-partitioned matrices with BLAS support
• HDFS integration: model export, checkpointing
• UDFs: client-supplied aggregation, custom shard operations
Map PS API
• Distributed key-value store abstraction
• Supports batched operations in addition to the usual get and put
• Many operations return a future – you can operate asynchronously or block
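A hedged sketch of how client code might drive such a Map-style API; the trait and the batchGet/batchPut signatures are assumptions based on the bullets above, not the published interface. The future-returning calls let a client overlap communication with local work:

  import scala.concurrent.{Await, Future}
  import scala.concurrent.duration._

  trait MapPS { // hypothetical client API
    def batchGet(keys: Seq[String]): Future[Map[String, Array[Double]]]
    def batchPut(kv: Map[String, Array[Double]]): Future[Unit]
  }

  def fetchOverlapped(ps: MapPS, keys: Seq[String]): Map[String, Array[Double]] = {
    val f = ps.batchGet(keys)   // returns immediately
    // ... do local computation here while the fetch is in flight ...
    Await.result(f, 30.seconds) // block only when the values are needed
  }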
Matrix PS API
• Vector math (BLAS style operations), in addition to everything Map API provides
• Increment and fetch sparse vectors (e.g., for gradient aggregation)
• We also use other custom operations on the shards (API not shown)
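A corresponding sketch for the matrix-style API; again the signatures are assumed for illustration. Sparse gradient aggregation maps onto an increment call, and the BLAS-style operations run shard-side:

  trait MatrixPS { // hypothetical client API
    def incr(row: Int, idx: Array[Int], vals: Array[Float]): Unit // sparse +=
    def fetch(row: Int, idx: Array[Int]): Array[Float]            // sparse get
    def axpy(a: Float, xRow: Int, yRow: Int): Unit                // y <- a*x + y, on shards
    def dot(xRow: Int, yRow: Int): Double                         // distributed dotprod
  }

  // Gradient aggregation: each executor increments; shards hold the running sum.
  def pushSparseGradient(ps: MatrixPS, row: Int,
                         idx: Array[Int], g: Array[Float]): Unit =
    ps.incr(row, idx, g)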
EXAMPLES
Sponsored Search Advertising
• Features drawn from the query, user, ad, and context
• Example click model: logistic regression
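For concreteness, a minimal Scala sketch of the per-example quantities such a logistic-regression click model needs. Features are assumed to be sparse binary indicators (e.g., crosses of query/user/ad/context), so an example is just the array of active indices:

  // P(click) = sigmoid(w . x) with x a set of active binary feature indices.
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  def logLossAndGrad(w: Array[Double], x: Array[Int], clicked: Boolean)
      : (Double, Array[(Int, Double)]) = {
    val p = sigmoid(x.map(i => w(i)).sum)       // w . x for binary features
    val y = if (clicked) 1.0 else 0.0
    val loss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
    val grad = x.map(i => (i, p - y))           // sparse gradient entries
    (loss, grad)
  }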
L-BFGS Background
• Gradient descent vs. Newton's method: using curvature information, you can converge faster…
• Newton's method is exact but impractical at scale; L-BFGS is an approximate, practical alternative:
– Inverse Hessian approximation, based on the history of the L previous gradients and model deltas.
– Step size computation: must satisfy some technical (Wolfe) conditions; adaptively determined from data.
• The update reduces entirely to BLAS vector math: dotprod, axpy (y ← ax + y), copy, scal.
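The inverse-Hessian approximation above is the standard L-BFGS two-loop recursion, built from exactly the dotprod/axpy/copy/scal primitives listed. A compact local reference implementation (plain arrays standing in for the PS-resident state vectors; assumes the s/y histories are non-empty and ordered oldest to newest):

  // Given gradient g and histories s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i,
  // return the search direction -H*g.
  def twoLoop(g: Array[Double],
              s: Seq[Array[Double]], y: Seq[Array[Double]]): Array[Double] = {
    def dot(a: Array[Double], b: Array[Double]): Double =
      a.zip(b).map { case (p, q) => p * q }.sum
    val q = g.clone()                                           // copy
    val rho = s.zip(y).map { case (si, yi) => 1.0 / dot(yi, si) }
    val alpha = new Array[Double](s.length)
    for (i <- s.indices.reverse) {                              // newest to oldest
      alpha(i) = rho(i) * dot(s(i), q)
      for (j <- q.indices) q(j) -= alpha(i) * y(i)(j)           // axpy
    }
    val gamma = dot(s.last, y.last) / dot(y.last, y.last)       // initial H0 scale
    for (j <- q.indices) q(j) *= gamma                          // scal
    for (i <- s.indices) {                                      // oldest to newest
      val beta = rho(i) * dot(y(i), q)
      for (j <- q.indices) q(j) += (alpha(i) - beta) * s(i)(j)  // axpy
    }
    q.map(-_)                                                   // direction = -H*g
  }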
Distributed L-BFGS*
Step 1: Compute and update gradient
• Driver coordinates the executors.
• Executors read training data from HDFS, compute gradients and losses, fetch sparse portions of the model, and push incremental sparse gradient updates to the PS shards (which also hold the L-BFGS state vectors).
*Our design is very similar to Sandblaster L-BFGS; Jeff Dean et al., Large Scale Distributed Deep Networks (2012).
Step 2: Build the inverse Hessian approximation
• Driver coordinates the PS shards in performing the L-BFGS updates.
• The actual L-BFGS updates run on the shards (BLAS vector math).
Step 3: Compute losses and directional derivatives
• Driver coordinates the executors.
• Executors fetch sparse portions of the model and compute directional derivatives for a parallel line search.
Step 4: Line search and model update
• Driver performs the line search and triggers the model update.
• The model update runs on the PS shards (BLAS vector math).
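Putting the four steps together, the driver's outer loop looks roughly like the following sketch of the control flow just described. All trait and method names are hypothetical; the point is that the heavy work stays on the executors and PS shards:

  trait ExecutorPool {
    def computeAndPushGradients(ps: PSPool): Unit              // Step 1
    def directionalDerivatives(ps: PSPool): Seq[(Double, Double)] // (step, loss)
  }
  trait PSPool {
    def updateInverseHessianApprox(): Unit                     // Step 2, BLAS on shards
    def applyModelUpdate(step: Double): Unit                   // Step 4, BLAS on shards
  }

  def chooseWolfeStep(losses: Seq[(Double, Double)]): Double =
    losses.minBy(_._2)._1 // simplest stand-in for a Wolfe-condition check

  def lbfgsIteration(executors: ExecutorPool, ps: PSPool): Unit = {
    executors.computeAndPushGradients(ps)                 // Step 1
    ps.updateInverseHessianApprox()                       // Step 2
    val losses = executors.directionalDerivatives(ps)     // Step 3
    ps.applyModelUpdate(chooseWolfeStep(losses))          // Step 4
  }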
Speedup tricks
• Intersperse communication and computation.
• Quicker convergence:
– Parallel line search for step size
– Curvature for initial Hessian approximation*
• Network bandwidth reduction:
– Compressed integer arrays (sketched below)
– Only store indices for binary data
• Matrix math on minibatches.
*borrowed from Vowpal Wabbit
[Bar chart: time (in seconds) per epoch vs. feature size (10, 20, and 100 million features) for MLlib vs. PS + Spark; 1.6 × 10⁸ examples, 100 executors, 10 cores]
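The bandwidth tricks are easy to make concrete: with binary features the sorted indices carry all the information, and delta-plus-varint encoding compresses them further. A sketch using a generic varint scheme (this is an illustration, not Yahoo's actual wire format):

  import java.io.ByteArrayOutputStream

  // Delta + varint encode a sorted array of binary-feature indices.
  def encode(sortedIdx: Array[Int]): Array[Byte] = {
    val out = new ByteArrayOutputStream()
    var prev = 0
    for (i <- sortedIdx) {
      var d = i - prev; prev = i // small gaps instead of raw indices
      while ((d & ~0x7f) != 0) { out.write((d & 0x7f) | 0x80); d >>>= 7 }
      out.write(d)               // final byte has the high bit clear
    }
    out.toByteArray
  }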
Word Embeddings
v(paris) = [0.13, -0.4, 0.22, …, -0.45]
v(lion) = [-0.23, -0.1, 0.98, …, 0.65]
v(quark) = [1.4, 0.32, -0.01, …, 0.023]
...
Word2vec
• New techniques to compute vector representations of words from a corpus.
• The geometry of the vectors captures word semantics.
Word2vec
• Skip-gram with negative sampling:
– The training set includes pairs of words and their neighbors in the corpus, along with randomly selected words for each neighbor.
– Determine w → u(w), v(w) so that sigmoid(u(w)•v(w′)) is close to (minimizes the log loss against) the probability that w′ is a neighbor of w, as opposed to a randomly selected word.
– SGD involves computing many vector dot products, e.g., u(w)•v(w′), and vector linear combinations, e.g., u(w) += α v(w′).
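A single negative-sampling SGD step written out in Scala, to show where the dot products and linear combinations named above appear (plain local arrays here; the distributed version pushes exactly these operations to the PS shards):

  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // One step for target word w and word w': label = 1.0 for a true neighbor,
  // 0.0 for a negative sample. u(w) and v(w') are updated in place.
  def sgnsStep(u: Array[Double], v: Array[Double],
               label: Double, lr: Double): Unit = {
    val score = u.zip(v).map { case (a, b) => a * b }.sum // u(w) . v(w')
    val g = lr * (label - sigmoid(score))                 // log-loss gradient coeff
    for (i <- u.indices) {
      val ui = u(i)
      u(i) += g * v(i)                                    // u += alpha * v
      v(i) += g * ui                                      // v += alpha * u
    }
  }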
Word2vec Application at Yahoo
• Example training data:
gas_cap_replacement_for_car
slc_679f037df54f5d9c41cab05bfae0926
gas_door_replacement_for_car
slc_466145af16a40717c84683db3f899d0a fuel_door_covers
adid_c_28540527225_285898621262
slc_348709d73214fdeb9782f8b71aff7b6e autozone_auto_parts
adid_b_3318310706_280452370893 auoto_zone
slc_8dcdab5d20a2caa02b8b1d1c8ccbd36b
slc_58f979b6deb6f40c640f7ca8a177af2d
[ Grbovic et al., SIGIR 2015 and SIGIR 2016 (to appear) ]
Distributed Word2vec
• Needed a system to train a 200-million-vocabulary, 300-dimensional word2vec model using minibatch SGD.
• Achieved in a high-throughput, network-efficient way using our matrix-based PS server:
– Vectors don't go over the network.
– Most compute happens on the PS servers, with clients aggregating partial results from the shards.
Distributed Word2vec
[Figure: word2vec learners reading from HDFS, with PS shards holding rows V1, V2, …, Vn; each shard stores a part of every vector]
1. Learners send word indices and seeds to the shards.
2. Shards do negative sampling and compute partial dot products u•v.
3. Learners aggregate the partial results and compute linear-combination coefficients (e.g., α…).
4. Shards update their slices of the vectors (v += αu, …).
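The network saving comes from each shard computing partial dot products over its own slice of the dimensions, so only scalars cross the network instead of 300-dimensional vectors. A sketch of the aggregation, under the assumed layout that each of S shards holds a contiguous or strided slice of every vector:

  // Each shard holds a column slice of every word's vector.
  case class Shard(slices: Map[String, Array[Float]]) {
    def partialDot(w1: String, w2: String): Double =
      slices(w1).zip(slices(w2)).map { case (a, b) => a.toDouble * b }.sum
  }

  // Client side: combine one scalar per shard rather than fetching full vectors.
  def dot(shards: Seq[Shard], w1: String, w2: String): Double =
    shards.map(_.partialDot(w1, w2)).sum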
Distributed Word2vec
• Network traffic is lower by a factor of #shards/dimension compared to a conventional PS-based system (1/20 to 1/100 for useful scenarios; e.g., 10 shards and 300 dimensions give roughly 1/30).
• Trains a 200-million-word vocabulary on 55 billion words of search sessions in 2.5 days.
• In production for regular training in the Yahoo search ad serving system.
Other Projects using Spark + PS
• Online learning on PS
– Personalization as a Service
– Sponsored Search
• Factorization Machines
– Large scale user profiling
SPARK + PS ON HADOOP CLUSTER
Training Data on HDFS
[Figure: training data resident on HDFS]
Launch PS Using Apache Slider on YARN
[Figure: parameter servers launched on YARN via Apache Slider, alongside HDFS]
Launch Clients Using Spark or the Hadoop Streaming API
[Figure: Apache Spark clients launched next to the parameter servers]
Training
[Figure: clients and parameter servers exchanging training state]
Model Export
[Figure: trained model exported from the parameter servers to HDFS]
Summary
• A parameter server is indispensable for big models.
• Spark + Parameter Server has proved to be a very flexible platform for our large-scale computing needs.
• Direct computation on the parameter servers accelerates training for our use cases.
Thank you!
For more, contact bigdata@yahoo-inc.com.