Training Large-scale Ad Ranking Models in Spark
PRESENTED BY Patrick Pletscher October 19, 2015
About Us
Michal Aharon Oren Somekh Yaacov Fernandess Yair Koren
Amit Kagian Shahar Golan Raz Nissim Patrick Pletscher
Amir Ingber
Haifa
Collaborator
What We Do
Research focused on ad ranking algorithms for Yahoo Gemini Native Ads
Ad Ranking Overview
• Advertisers run several campaigns, each with several ads
• Each ad has a bid set by the advertiser; different ad price types
- pay per view
- pay per click
- various conversion price types
• Auction for each impression on a Gemini Native enabled property
- auction between all eligible ads (filter by targeting/budget)
- ad with the highest expected revenue is determined
• Need to know the (personalized!) probability of a click
- we mostly get money for clicks / conversions!
[Diagram: a user sees an auction between Ad 1 (bid $1, 5% click probability, 5c expected revenue) and Ad 2 (bid $2, 1% click probability, 2c expected revenue)]
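To make the numbers concrete: a $1 bid at a 5% click probability yields 5c of expected revenue, versus 2c for a $2 bid at 1%, so the first ad wins. A minimal sketch of that logic (hypothetical names, not the production auction code):

case class Ad(id: String, bidPerClick: Double)

// Expected revenue per impression for a pay-per-click ad.
def expectedRevenue(ad: Ad, pClick: Double): Double =
  ad.bidPerClick * pClick

// The eligible ad maximizing expected revenue wins the auction.
def runAuction(eligible: Seq[(Ad, Double)]): Ad =
  eligible.maxBy { case (ad, pClick) => expectedRevenue(ad, pClick) }._1

// runAuction(Seq((Ad("ad1", 1.0), 0.05), (Ad("ad2", 2.0), 0.01))) returns ad1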
Click-Through Rate (CTR) Prediction
• Given a user and context, predict probability of a click for an ad.
• Probably the most “profitable” machine learning problem in industry
- simple binary problem; but want probabilities, not just the label
- very skewed label distribution: clicks << skips
- tons of data (every impression generates a training example)
- limitations at serving: need to predict quickly
• Basic setting quite well-studied; scale makes it challenging
- Google (McMahan et al. 2013)
- Facebook (He et al. 2014)
- Yahoo (Aharon et al. 2013)
- others (Chapelle et al. 2014)
• Some more involved research topics
- Exploration/Exploitation tradeoff
- Learning from logged feedback
Overview - CTR Prediction for Gemini Native Ads
• Collaborative Filtering approach (Aharon et al. 2013)
- Current production system
- Implemented in Hadoop MapReduce
- Used in Gemini Native ad ranking
• Large-scale Logistic Regression
- A research prototype
- Implemented in Spark
- The combination of Spark & Scala allows us to iterate quickly
- Takes several concepts from the CF approach
Large-scale Logistic Regression in Spark
Apache Spark
• “Apache Spark is a fast and general engine for large-scale data processing”
• Similar to Hadoop
• Advantages over Hadoop MapReduce
- Option to cache data in memory, great for iterative computations
- A lot of syntactic sugar
‣ filter, reduceByKey, distinct, sortByKey, join
‣ in general, Spark/Scala code is very concise
- Spark Shell, great for interactive/ETL* workflows
- DataFrames are interesting for data scientists coming from R / Python
• Includes modules for
- machine learning
- streaming
- graph computations
- SQL / Dataframes
*ETL: Extract, transform, load
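As a toy illustration of that conciseness (not code from the deck), counting events per ad id takes a handful of chained operations:

// Toy example: count log lines per ad id and sort by id.
val countsPerAd = sc.textFile("hdfs:///events")      // RDD[String]
  .map(line => (line.split("\t")(0), 1))             // key by the first field
  .reduceByKey(_ + _)                                // sum counts per key
  .sortByKey()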
Spark at Yahoo
• Spark 1.5.1, the latest version of Spark
• Runs on top of Hadoop YARN 2.6
- integrates nicely with existing Hadoop tools and infrastructure at Yahoo
- data is generally stored in HDFS
• Clusters are centrally managed
• Large Hadoop deployment at Yahoo
- A few different clusters
- Each has at least a few thousand nodes
[Diagram: stack with HDFS (storage) at the bottom, YARN (resource management) above it, and Spark, MapReduce, and Hive on top]
Dataset for CTR Prediction
• Billions of ad impressions daily
- Need for Streaming / Batched Streaming
- Each impression has a unique id
• Need click information for every impression for learning
- Join impressions with a click stream every x minutes
- Need to wait for the click; introduces some delay
[Diagram: timeline with 15-minute marks at 18:30, 18:45, 19:00, 19:15; each window's impressions are joined with subsequent clicks to produce labeled events]
in Spark: union & reduceByKey
Example - Joining Impression & Click RDDs
val keyAndImpressions = impressions
  .map(e => (e.joinKey, ("i", e)))
val keyAndClicks = clicks
  .map(e => (e.joinKey, ("c", e)))

keyAndImpressions.union(keyAndClicks)
  .reduceByKey(smartCombine)
  .flatMap { case (k, (t, event)) => t match {
    case "ci" => Some(LabeledEvent(event, clicked = 1))  // impression with a click
    case "i"  => Some(LabeledEvent(event, clicked = 0))  // impression without a click
    case "c"  => None                                    // click without an impression (yet)
  }}

def smartCombine(event1: (String, Event), event2: (String, Event)): (String, Event) = {
  (event1._1, event2._1) match {
    case ("c", "c") => event1              // de-dupe
    case ("i", "i") => event1              // de-dupe
    case ("c", "i") => ("ci", event2._2)   // combine click and impression
    case ("i", "c") => ("ci", event1._2)   // combine click and impression
    case ("ci", _)  => event1              // de-dupe
    case (_, "ci")  => event2              // de-dupe
  }
}
Incremental Learning Architecture
[Diagram: the same 15-minute timeline; each window's labeled events pass through feature extraction to become learning examples, and the learner incrementally updates the model from window to window]
Large-scale Logistic Regression
• Industry standard for CTR prediction (McMahan et al. 2013, He et al. 2014)
• Models the probability of a click as p(click | x) = 1 / (1 + exp(−wᵀx))
- feature vector x
‣ high-dimensional vector but sparse (few non-zero values)
‣ model expressivity controlled by the features
‣ a lot of hand-tuning and playing around
- model parameters w
‣ need to be learned
‣ generally rather non-sparse
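For reference, a minimal sketch of the prediction (illustrative only; production code would use its own sparse vector types):

// p(click | x) = 1 / (1 + exp(-w . x)), with x given as sparse (index, value) pairs.
def predictCtr(w: Array[Double], x: Seq[(Int, Double)]): Double = {
  val z = x.map { case (i, v) => w(i) * v }.sum
  1.0 / (1.0 + math.exp(-z))
}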
Features for Logistic Regression
• Basic features
- age, gender
- browser, device
• Feature crosses
- E.g. age x gender x state (30-year-old male from Boston)
- mostly indicator features
- Examples:
‣ gender^age m^30
‣ gender^device m^Windows_NT
‣ gender^section m^5417810
‣ gender^state m^2347579
‣ age^device 30^Windows_NT
• Feature hashing to get a vector of fixed length
- hash all the index tuples, e.g. (gender^age, m^30), to get a numeric index
- will introduce collisions! Choose dimensionality large enough
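A small sketch of the hashing step (an assumed scheme; the production hash function may differ):

import scala.util.hashing.MurmurHash3

// Hash a "name^value" feature string into a fixed-size index space [0, dim).
def hashedIndex(feature: String, dim: Int): Int = {
  val h = MurmurHash3.stringHash(feature)
  ((h % dim) + dim) % dim   // also maps negative hashes into [0, dim)
}

val dim = 1 << 22                              // large enough to keep collisions rare
val idx = hashedIndex("gender^age=m^30", dim)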
Parameter Estimation
• Basic Problem: Regularized Maximum Likelihood
- Often: L1 regularization instead of L2
‣ promotes sparsity in the weight vector
‣ more efficient predictions in serving (also requires less memory!)
- Batch vs. streaming
‣ in our case: batched streaming, every x min perform an incremental model update
• Follow-The-Regularized-Leader (FTRL) (McMahan et al. 2013)
- sequential online algorithm: only use a data point once
- similar to stochastic gradient descent
- per coordinate learning rates
- encourages sparseness
- FTRL stores weight and accumulated gradient per coordinate
minimize over w: −Σᵢ log p(yᵢ | xᵢ, w) (fit training data) + λ‖w‖₁ (prevent overfitting)
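The per-coordinate update itself is elided in the deck; a sketch following the FTRL-Proximal algorithm of McMahan et al. 2013 (the hyper-parameter names alpha, beta, l1, l2 are ours) looks roughly like this:

// State per coordinate: z (adjusted gradient sum) and n (squared gradient sum).
class FtrlCoordinate(alpha: Double, beta: Double, l1: Double, l2: Double) {
  var z = 0.0
  var n = 0.0

  // Lazy weight: L1 keeps small coordinates at exactly zero (sparse model).
  def weight: Double =
    if (math.abs(z) <= l1) 0.0
    else -(z - math.signum(z) * l1) / ((beta + math.sqrt(n)) / alpha + l2)

  // Update with the loss gradient g for this coordinate; n gives the
  // per-coordinate learning rate.
  def update(g: Double): Unit = {
    val sigma = (math.sqrt(n + g * g) - math.sqrt(n)) / alpha
    z += g - sigma * weight   // weight here is the one used for the prediction
    n += g * g
  }
}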
Basic Parallelized FTRL in Spark
def train(examples: RDD[LearningExample]): Unit = {
  val delta = examples
    .repartition(numWorkers)
    .mapPartitions(xs => updatePartition(xs, weights, counts))
    .treeReduce { case (a, b) => (a._1 + b._1, a._2 + b._2) }

  weights += delta._1 / numWorkers.toDouble
  counts += delta._2 / numWorkers.toDouble
}

def updatePartition(examples: Iterator[LearningExample],
                    weights: DenseVector[Double],
                    counts: DenseVector[Double]): Iterator[(DenseVector[Double], DenseVector[Double])] = {
  // standard FTRL code for examples
  Iterator((deltaWeights, deltaCounts))
}

Hack: updatePartition actually computes a single result, but mapPartitions expects an iterator!
Summary: LR with Spark
• Efficient: Can learn on all the data
- before: somewhat aggressive subsampling of the skips
• Possible to do feature pre-processing
- in Hadoop MapReduce much harder: only one pass over data
- drop infrequent features, TF-IDF, …
• Spark-shell as a life-saver
- helps to debug problems as one can inspect intermediate results at scale
- have yet to try Zeppelin notebooks
• Easy to unit test complex workflows
Spark: Lessons Learned
Upgrade!
• Spark has a pretty regular 3-month release schedule
• Always run with the latest version
- Lots of bugs get fixed
- Difficult to keep up with new functionality (see DataFrame vs. RDD)
• Speed improvements over the past year
Configurations
• Our solution
- config directory containing
‣ Logging: log4j.properties
‣ Spark itself: spark-defaults.conf
‣ our code: application.conf
- two versions of configs: local & cluster
- in YARN: specify them using --files argument & SPARK_CONF_DIR variable
• Use Typesafe’s config library for all application related configs
- provide sensible defaults for everything
- override using application.conf
• Do not hard-code any configurations in code
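A sketch of the pattern (the key names are invented for illustration): defaults live in a defaults file, and application.conf overrides them at load time:

import com.typesafe.config.ConfigFactory

// Loads the shipped defaults, then applies overrides from application.conf.
val config = ConfigFactory.load()
val numWorkers = config.getInt("training.num-workers")
val inputPath = config.getString("training.input-path")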
Accumulators
• Use accumulators for ensuring correctness!
• Example:
- parse data, ignore event if there is a problem with the data
- use accumulator to count these failed lines
class Parser(failedLinesAccumulator: Accumulator[Int]) extends Serializable {
  def parse(s: String): Option[Event] = {
    try {
      // parsing logic goes here
      Some(...)
    } catch {
      case e: Exception =>
        failedLinesAccumulator += 1
        None
    }
  }
}

val accumulator = sc.accumulator(0, "failed lines")
val parser = new Parser(accumulator)
val events = sc.textFile("hdfs:///myfile")
  .flatMap(s => parser.parse(s))
RDD vs. DataFrame in Spark
• Initially Spark advocated the Resilient Distributed Dataset (RDD) as its data set abstraction
- type-safe
- usually stores some Scala case class
- code relatively easy to understand
• More recently, Spark has been pushing towards the DataFrame
- similar to R and Python’s Pandas data frames
- some advantages
‣ less rigid types: can append columns
‣ speed
- disadvantage: code readability suffers for non-basic types
‣ user defined types
‣ user defined functions
• Have not fully migrated to it yet
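A toy contrast of the two abstractions (illustrative schema, not our production types): the RDD keeps compile-time types, the DataFrame trades them for named columns:

import org.apache.spark.sql.SQLContext

case class Event(userId: String, adId: String, clicked: Int)

val events = sc.parallelize(Seq(Event("u1", "a1", 1), Event("u2", "a2", 0)))
val clicks = events.filter(_.clicked == 1)      // field checked at compile time

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val df = events.toDF()                          // columns: userId, adId, clicked
val clicksDf = df.filter(df("clicked") === 1)   // column name checked at runtime only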
Every Day I’m Shuffling…
• Careful with operations which send a lot of data over the network
- reduceByKey
- repartition / shuffle
• Careful with sending too much data to the driver
- collect
- reduce
• We found mapPartitions & treeReduce useful in some cases (see FTRL example)
• Play with Spark configurations: frameSize, maxResultSize, timeouts…
[Diagram: example job DAG: textFile → flatMap → map → reduceByKey; the reduceByKey step triggers a shuffle]
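For instance (illustrative values; the property names are from the Spark 1.x docs), the relevant knobs can be set on the SparkConf or in spark-defaults.conf:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.akka.frameSize", "128")          // MB; size limit for task results/messages
  .set("spark.driver.maxResultSize", "2g")     // cap on data collected back to the driver
  .set("spark.network.timeout", "300s")        // raise timeouts for long shuffles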
Machine Learning in Spark
• Relatively basic
- some algorithms don’t scale so well
- not customizable enough for experts:
‣ optimizers that assume a regularizer
‣ built our own DSL for feature extraction & combination
‣ a lot of the APIs are not exposed, i.e. private to Spark
- will hopefully get there eventually
• Nice: new Transformer / Estimator / Pipeline approach
- Inspired by scikit-learn, makes it easy to combine different algorithms
- Requires DataFrame
- Example (from Spark docs)
val tokenizer = new Tokenizer()
.setInputCol("text")
.setOutputCol("words")
val hashingTF = new HashingTF()
.setNumFeatures(1000)
.setInputCol(tokenizer.getOutputCol)
.setOutputCol("features")
val lr = new LogisticRegression()
.setMaxIter(10)
.setRegParam(0.01)
val pipeline = new Pipeline()
.setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)
Thank you!
  • 24. Machine Learning in Spark 24 • Relatively basic - some algorithms don’t scale so well - not customizable enough for experts: ‣ optimizers that assume a regularizer ‣ built our own DSL for feature extraction & combination ‣ a lot of the APIs are not exposed, i.e. private to Spark - will hopefully get there eventually • Nice: new Transformer / Estimator / Pipeline approach - Inspired by scikit-learn, makes it easy to combine different algorithms - Requires DataFrame - Example (from Spark docs) val tokenizer = new Tokenizer() .setInputCol("text") .setOutputCol("words") val hashingTF = new HashingTF() .setNumFeatures(1000) .setInputCol(tokenizer.getOutputCol) .setOutputCol("features") val lr = new LogisticRegression() .setMaxIter(10) .setRegParam(0.01) val pipeline = new Pipeline() .setStages(Array(tokenizer, hashingTF, lr)) val model = pipeline.fit(training)