Runtime Internal
Nan Zhu (McGill University & Faimdata)
WHO AM I
• Nan Zhu, PhD Candidate in School of Computer Science of
McGill University
• Work on computer networks (Software Defined Networks)
and large-scale data processing
• Work with Prof. Wenbo He and Prof. Xue Liu
• PhD is an awesome experience in my life
• Tackle real-world problems
• Keep thinking! Get insights!
When will I graduate?
WHO AM I
• Do-it-all Engineer in Faimdata (http://www.faimdata.com)
• Faimdata is a new startup located in Montreal
• Build a customer-centric analysis solution based on
Spark for retailers
• My responsibility
• Participate in everything related to data
• Akka, HBase, Hive, Kafka, Spark, etc.
WHO AM I
• My Contribution to Spark
• 0.8.1, 0.9.0, 0.9.1, 1.0.0
• 1000+ lines of code, 30 patches
• Two examples:
• YARN-like architecture in Spark
• Introduce Actor Supervisor mechanism to
DAGScheduler
I’m CodingCat@GitHub !!!!
What is Spark?
What is Spark?
• A distributed computing framework
• Organize computation as concurrent tasks
• Schedule tasks to multiple servers
• Handle fault tolerance, load balancing, etc.,
automatically and transparently
Advantages of Spark
• More Descriptive Computing Model
• Faster Processing Speed
• Unified Pipeline
More Descriptive Computing Model (1)
• WordCount in Hadoop (Map & Reduce)
• Map function: read each line of the input file and transform each word into a <word, 1> pair
• Reduce function: collect the <word, 1> pairs generated by the Map function and merge them by accumulation
• Configure the program (a reconstruction of the slide's code follows)
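The code screenshots from this slide are not preserved in the transcript. Below is a minimal reconstruction of a Hadoop WordCount of the shape described above, written in Scala against the org.apache.hadoop.mapreduce API; the whitespace tokenization and the class names are assumptions, not the slide's exact code.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import scala.collection.JavaConverters._

// Map function: read each line of the input file and emit a <word, 1> pair per word.
class TokenizerMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").foreach { w =>
      word.set(w)
      context.write(word, one)
    }
}

// Reduce function: collect the <word, 1> pairs and merge them by accumulation.
class IntSumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
    context.write(key, new IntWritable(values.asScala.map(_.get).sum))
}

// Configure the program: wire the mapper, reducer, and I/O paths into a Job.
object WordCount {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration(), "word count")
    job.setJarByClass(classOf[TokenizerMapper])
    job.setMapperClass(classOf[TokenizerMapper])
    job.setReducerClass(classOf[IntSumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))
    FileOutputFormat.setOutputPath(job, new Path(args(1)))
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}

Even for a trivial job, the logic is spread over a mapper class, a reducer class, and job configuration; that verbosity is what the next slide contrasts with Spark.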
DESCRIPTIVE COMPUTING MODEL (2)
• WordCount in Spark (Scala and Java versions shown on the slide)
• Organize computation into multiple stages in a processing pipeline:
• transformations to get intermediate results with the expected schema
• an action to get the final output
• Computation is expressed with higher-level APIs, which simplify the logic of the original Map & Reduce and define the computation as a processing pipeline (Scala sketch below)
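The Scala screenshot is likewise missing from the transcript; a minimal sketch of what a Spark WordCount of this shape looks like (the HDFS paths are placeholders):

val counts = sc.textFile("hdfs://...")   // load the input file as an RDD of lines
  .flatMap(_.split(" "))                 // transformation: lines -> words
  .map(word => (word, 1))                // transformation: word -> <word, 1> pair
  .reduceByKey(_ + _)                    // transformation: merge the pairs by accumulation
counts.saveAsTextFile("hdfs://...")      // action: triggers the whole pipeline and writes the output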
MUCH BETTER PERFORMANCE
• PageRank Algorithm Performance Comparison
[Chart: time per iteration (s), scale 0–180, comparing Hadoop, basic Spark, and Spark with controlled partitioning]
Matei Zaharia, et al., "Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing", NSDI 2012
Unified pipeline
Diverse APIs, Operational Cost, etc.
Unified pipeline
• With a Single Spark Cluster (illustration below)
• Batch Processing: Spark Core
• Query: Shark, Spark SQL & BlinkDB
• Streaming: Spark Streaming
• Machine Learning: MLlib
• Graph: GraphX
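As a hedged illustration of the unified-pipeline point: the same SparkContext, and even the same RDDs, can feed batch, machine-learning, and query workloads. The 1.0-era API names below are real, but the pipeline itself is illustrative, not from the slide:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.SQLContext

val vectors = sc.textFile("hdfs://...")                          // batch processing: Spark Core
  .map(line => Vectors.dense(line.split(" ").map(_.toDouble)))
val model = KMeans.train(vectors, 2, 20)                         // machine learning: MLlib, same RDD
val sqlContext = new SQLContext(sc)                              // query: Spark SQL, same cluster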
Understanding Distributed Computing Framework
Understand a distributed computing framework
• DataFlow
• e.g. the Hadoop family utilizes HDFS to transfer data within
a job and to share data across jobs/applications
[Diagram: three nodes, each running a MapTask alongside a co-located HDFS daemon]
Understanding a distributed computing engine
• Task Management
• How the computation is executed across multiple
servers
• How the tasks are scheduled
• How the resources are allocated
Spark Data Abstraction Model
Basic Structure of Spark program
• A Spark Program

val sc = new SparkContext(…)

val points = sc.textFile("hdfs://...")
  .map(_.split(" ").map(_.toDouble).splitAt(1))
  .map { case (Array(label), features) =>
    LabeledPoint(label, features)
  }

val model = Model.train(points)

• new SparkContext(…) includes the components driving the running of computing tasks (introduced later)
• sc.textFile(…) loads data from HDFS, forming an RDD (Resilient Distributed Datasets) object
• The map calls are transformations that generate RDDs with the expected element format
• All computations are organized around RDDs
Resilient Distributed Dataset
• An RDD is a distributed memory abstraction which is
• a data collection
• immutable
• created either by loading from a stable storage system (e.g. HDFS) or through transformations on other RDD(s)
• partitioned and distributed
[Diagram: sc.textFile(…) → filter() → map()]
From data to computation
• Lineage
• Where do I come from? (dependency)
• How am I computed? (save the functions that calculate the partitions)
• Computation is organized as a DAG (the lineage)
• Lost data can be recovered in parallel with the help of the lineage DAG (sketch below)
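A small sketch of the lineage bookkeeping described above; toDebugString is a real RDD method that prints the recorded dependency chain (the path is a placeholder):

val counts = sc.textFile("hdfs://...")
  .flatMap(_.split(" "))        // each RDD remembers its parent (dependency) ...
  .map((_, 1))                  // ... and the function that computes its partitions
  .reduceByKey(_ + _)
println(counts.toDebugString)   // prints the lineage DAG Spark has recorded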
Cache
• Frequently accessed RDDs can be materialized and
cached in memory
• Cached RDDs can also be replicated for fault tolerance
(the Spark scheduler takes cached-data locality into account)
• Manage the cache space with an LRU algorithm
Benefits Brought by the Cache
• Example (Log Mining)
• count is an action; the first time it runs, it has to calculate from the start of the DAG (textFile)
• Because the data is cached, the second count does not trigger a “start-from-zero” computation; instead, it works on “cachedMsgs” directly (sketch below)
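A reconstruction of the log-mining example the callouts refer to; the “cachedMsgs” name comes from the slide, while the path and the tab-separated field layout are assumptions:

val lines = sc.textFile("hdfs://...")
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")(1))    // keep the message field of each error line
val cachedMsgs = messages.cache()

cachedMsgs.filter(_.contains("foo")).count()   // first action: computed from textFile onward, then cached
cachedMsgs.filter(_.contains("bar")).count()   // second action: starts from the in-memory cachedMsgs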
Summary
• Resilient Distributed Datasets (RDD)
• Distributed memory abstraction in Spark
• Keep computation running in memory on a best-effort basis
• Keep track of the “lineage” of data
• Organize computation
• Support fault tolerance
• Cache
RDD brings much better performance by
simplifying the data flow
• Share Data among Applications
[Diagram: a typical data processing pipeline, with overhead at each handoff between applications]
RDD brings much better performance by
simplifying the data flow
• Share data in Iterative Algorithms
• A fair number of predictive/machine learning algorithms are iterative
• e.g. K-Means
Step 1: Randomly place the initial group centroids into the space.
Step 2: Assign each object to the group that has the closest centroid.
Step 3: Recalculate the positions of the centroids.
Step 4: If the positions of the centroids didn't change, go to the next step; else go to Step 2.
Step 5: End.
[Diagram: each iteration's group assignment (Step 2) and centroid recalculation (Steps 3 & 4) read from and write to HDFS, plus a final write for the output (Step 5); a cached-RDD sketch follows]
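A hand-rolled, hedged K-Means sketch of the flow above: with an RDD the points are read from HDFS once and then reused in memory on every iteration, with no intermediate HDFS writes. The value of k, the iteration count, and the path are assumptions, and MLlib's KMeans would be used in practice.

val k = 3
val points = sc.textFile("hdfs://...")
  .map(_.split(" ").map(_.toDouble))
  .cache()                                      // loaded once, reused in memory each iteration

def add(a: Array[Double], b: Array[Double]) = a.zip(b).map { case (x, y) => x + y }
def dist(a: Array[Double], b: Array[Double]) =
  math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

var centroids = points.takeSample(false, k, 42)                  // Step 1: initial centroids
for (_ <- 1 to 10) {
  centroids = points
    .map(p => (centroids.indices.minBy(i => dist(p, centroids(i))), (p, 1)))  // Step 2: assign
    .reduceByKey { case ((s1, n1), (s2, n2)) => (add(s1, s2), n1 + n2) }
    .map { case (_, (sum, n)) => sum.map(_ / n) }                // Steps 3 & 4: recalculate
    .collect()                                                   // only the centroids leave the cluster
}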
Spark Scheduler
The Structure of Spark Cluster
[Diagram: a Driver Program hosting a Spark Context (with DAGScheduler, TaskScheduler, and ClusterScheduler) talks to a Cluster Manager, which manages Workers; each Worker runs an Executor with a Cache, and the Executors run Tasks]
• Each SparkContext creates a Spark application
• The application is submitted to the Cluster Manager
• The Cluster Manager can be the master of Spark's standalone mode, Mesos, or YARN (see the sketch after this list)
• Executors for the application are started on the Workers; the Executors register with the ClusterScheduler
• The driver program schedules tasks for the application
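A brief sketch of how an application's SparkContext attaches to one of these cluster managers in a 1.0-era program (the host name and the memory setting are assumptions):

import org.apache.spark.{SparkConf, SparkContext}

// The master URL selects the cluster manager: "spark://host:7077" for the
// standalone master, "mesos://..." for Mesos, "yarn-client" for YARN,
// or "local[*]" to run everything in one JVM for testing.
val conf = new SparkConf()
  .setAppName("MyApp")
  .setMaster("spark://master:7077")
  .set("spark.executor.memory", "2g")   // resources requested for each Executor
val sc = new SparkContext(conf)         // registers the application with the cluster manager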
Scheduling Process
[Figure: a lineage DAG of RDDs A–G connected by map, union, groupBy, and join, split into Stage 1, Stage 2, and Stage 3]
• RDD objects are connected together with a DAG
• DAGScheduler: splits the DAG into stages and submits each stage as a TaskSet
• TaskScheduler: TaskSetManagers monitor the progress of the tasks and handle failed stages
• ClusterScheduler: submits the tasks to Executors
Scheduling Optimization
[Figure: the same lineage DAG (RDDs A–G, Stages 1–3) annotated with the optimizations below]
• Within-stage optimization: pipeline the generation of RDD partitions when they are in a narrow dependency
• Partitioning-based join optimization: avoid a whole shuffle on a best-effort basis (sketch below)
• Cache-awareness to avoid duplicate computation
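A hedged sketch of the partitioning-based join optimization named above: when both pair RDDs are pre-partitioned with the same partitioner, Spark can join them without shuffling the whole dataset. The paths and the comma-separated key layout are assumptions.

import org.apache.spark.HashPartitioner
import org.apache.spark.SparkContext._   // pair-RDD operations (needed in 1.0-era Spark)

val part = new HashPartitioner(8)
val users = sc.textFile("hdfs://...")
  .map(line => (line.split(",")(0), line))   // key each record by its first field
  .partitionBy(part)
  .cache()                                   // keep the partitioned copy in memory
val events = sc.textFile("hdfs://...")
  .map(line => (line.split(",")(0), line))
  .partitionBy(part)                         // same partitioner as users
val joined = users.join(events)              // co-partitioned: no whole-RDD shuffle needed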
Summary
• No centralized application scheduler
• Maximizes throughput
• Application-specific schedulers (DAGScheduler, TaskScheduler,
ClusterScheduler) are initialized within the SparkContext
• Scheduling abstractions (DAG, TaskSet, Task)
• Support fault tolerance, pipelining, auto-recovery, etc.
• Scheduling optimizations
• Pipelining, join, caching
We are hiring!
http://www.faimdata.com
nanzhu@faimdata.com
jobs@faimdata.com
Thank you!
Q & A
Credits to my friend LianCheng@Databricks; his slides inspired me a lot.
