Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spark Tutorial | Simplilearn
What's in it for you?
1. What is Spark?
2. Components of Spark
   • Spark Core
   • Spark SQL
   • Spark Streaming
   • Spark MLlib
   • GraphX
3. Apache Spark Architecture
4. Running a Spark Application
What is Apache Spark?
Apache Spark is a top-level open-source cluster computing framework used for real-time processing and analysis of large amounts of data. Its key characteristics are:

• Fast processing: Spark processes data faster since it saves time in reading and writing operations.
• Real-time streaming: Spark allows real-time streaming and processing of data.
• In-memory computation: Spark has a DAG execution engine that provides in-memory computation.
• Fault tolerance: Spark is fault tolerant through RDDs, which are designed to handle the failure of any worker node in the cluster.
Apache Spark Components
Spark has five main components: Spark Core, Spark SQL, Spark Streaming, Spark MLlib, and GraphX.
Spark Core
Spark Core is the core engine for large-scale parallel and distributed data processing. It performs the following:
• Memory management and fault recovery
• Scheduling, distributing, and monitoring jobs on a cluster
• Interacting with storage systems
Spark RDD
Resilient Distributed Datasets (RDDs) are the building blocks of any Spark application. The typical flow is: create an RDD, apply transformations to produce new RDDs, then invoke actions to obtain results.
• Transformations are operations (such as map, filter, join, union) performed on an RDD that yield a new RDD containing the result.
• Actions are operations (such as reduce, first, count) that return a value after running a computation on an RDD.
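The transformation/action split can be sketched in plain Python (an illustrative model, not the Spark API): transformations only record what to do, while actions trigger the actual computation, mirroring how Spark evaluates an RDD lineage lazily.

```python
# TinyRDD is a hypothetical stand-in for a Spark RDD: transformations
# are lazy (they record an operation), actions run the pipeline.
class TinyRDD:
    def __init__(self, data, ops=None):
        self.data = data              # the source data
        self.ops = ops or []          # recorded (lazy) transformations

    # --- transformations: return a new TinyRDD, compute nothing yet ---
    def map(self, f):
        return TinyRDD(self.data, self.ops + [("map", f)])

    def filter(self, p):
        return TinyRDD(self.data, self.ops + [("filter", p)])

    # --- actions: run the recorded pipeline and return a value ---
    def collect(self):
        out = self.data
        for kind, f in self.ops:
            if kind == "map":
                out = [f(x) for x in out]
            else:  # filter
                out = [x for x in out if f(x)]
        return out

    def count(self):
        return len(self.collect())

rdd = TinyRDD([1, 2, 3, 4, 5])
evens_doubled = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * 2)
print(evens_doubled.collect())  # [4, 8]
print(evens_doubled.count())    # 2
```

Nothing is computed when filter and map are called; only collect and count walk the recorded operations, just as Spark defers work until an action runs.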
Spark SQL
Spark SQL is Apache Spark's module for working with structured data.

Spark SQL features:
• Integrated: You can integrate Spark SQL with Spark programs and query structured data inside Spark programs.
• High compatibility: You can run unmodified Hive queries on existing warehouses in Spark SQL. Spark SQL offers full compatibility with existing Hive data, queries, and UDFs.
• Scalability: Spark SQL leverages the RDD model, so it supports large jobs and mid-query fault tolerance. Moreover, it uses the same engine for both interactive and long queries.
• Standard connectivity: You can easily connect to Spark SQL through JDBC or ODBC, both of which are industry norms for business intelligence tool connectivity.
The Spark SQL architecture consists of the following layers, from top to bottom: Spark SQL and HQL / DataFrame DSL, the DataFrame API, and the Data Source API, which connects to sources such as CSV, JSON, and JDBC.
Spark SQL has three main layers:
• Language API: Spark is compatible with and supported by languages such as Python, HiveQL, Scala, and Java.
• SchemaRDD: Since Spark SQL works on schemas, tables, and records, you can use a SchemaRDD (DataFrame) as a temporary table.
• Data Sources: Spark SQL supports different data sources, such as JSON documents, Hive tables, and Cassandra databases.
Spark SQL
Spark allows you to define custom SQL functions called User Defined Functions (UDFs). The following UDF lowercases a string and removes all its whitespace (spark.createDF is a helper from the spark-daria library):

def lowerRemoveAllWhiteSpaces(s: String): String = {
  s.toLowerCase().replaceAll("\\s", "")
}

val lowerRemoveAllWhiteSpacesUDF = udf[String, String](lowerRemoveAllWhiteSpaces)

val sourceDF = spark.createDF(
  List(
    ("  WELCOME  "),
    ("  SpaRk SqL  ")
  ), List(
    ("text", StringType, true)
  )
)

sourceDF.select(
  lowerRemoveAllWhiteSpacesUDF(col("text")).as("clean_text")
).show()

Output:
clean_text
welcome
sparksql
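The cleaning logic of the UDF can be checked in plain Python (an illustrative equivalent, not the Spark API):

```python
import re

def lower_remove_all_whitespace(s: str) -> str:
    """Lowercase a string and strip every whitespace character,
    mirroring the Scala UDF above."""
    return re.sub(r"\s", "", s.lower())

print(lower_remove_all_whitespace("  WELCOME  "))    # welcome
print(lower_remove_all_whitespace("  SpaRk SqL  "))  # sparksql
```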
Spark Streaming
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many streaming and static data sources, and the processed data can be pushed out to different filesystems and data storage.
Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of results in batches:

input data stream → streaming engine → batches of input data → batches of processed data
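The micro-batching idea can be sketched in plain Python (an illustrative model, not the Spark Streaming API): a live stream is cut into fixed-size batches and each batch is processed as one unit, much as Spark Streaming batches records by time interval.

```python
def micro_batches(stream, batch_size):
    """Yield successive batches taken from a (possibly unbounded) stream."""
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                 # flush the final partial batch
        yield batch

live_stream = iter(range(7))                   # stand-in for live data
processed = [sum(b) for b in micro_batches(live_stream, 3)]
print(processed)  # [3, 12, 6]  -> sums of [0,1,2], [3,4,5], [6]
```

Real Spark Streaming batches by time (the batch interval) rather than by count, but the shape of the pipeline is the same: batches in, processed batches out.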
Here is an example of a basic RDD operation that extracts individual words from lines of text in an input data stream: a flatMap operation is applied to each RDD in the lines DStream (lines from time 0-1, 1-2, 2-3, 3-4) to generate the corresponding RDDs of the words DStream (words from time 0-1, 1-2, 2-3, 3-4).
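The flatMap step above can be sketched in plain Python (an illustrative model, not the DStream API): each batch of lines is flat-mapped into a batch of words, so the lines DStream becomes a words DStream batch by batch.

```python
def flat_map(f, batch):
    """Apply f to every element and flatten the resulting lists."""
    return [item for element in batch for item in f(element)]

# One batch of the lines DStream (e.g. lines from time 0 and 1).
lines_batch = ["to be or", "not to be"]
words_batch = flat_map(lambda line: line.split(), lines_batch)
print(words_batch)  # ['to', 'be', 'or', 'not', 'to', 'be']
```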
Spark MLlib
MLlib is Spark's machine learning library. Its goal is to make practical machine learning scalable and easy. At a high level, it provides the following:
• ML algorithms: classification, regression, clustering, and collaborative filtering
• Featurization: feature extraction, transformation, dimensionality reduction, and selection
• Pipelines: tools for constructing, evaluating, and tuning ML pipelines
• Utilities: linear algebra, statistics, and data handling
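The pipeline idea can be sketched in plain Python (an illustrative model, not the MLlib API): a pipeline chains featurization and a model into one object, so the same sequence of steps is applied consistently to any dataset. The stages below are hypothetical placeholders.

```python
class Pipeline:
    def __init__(self, stages):
        self.stages = stages          # ordered list of callables

    def run(self, data):
        for stage in self.stages:     # each stage feeds the next
            data = stage(data)
        return data

# Hypothetical stages: scale features, then threshold into class labels.
scale = lambda xs: [x / 10 for x in xs]
classify = lambda xs: [1 if x > 0.5 else 0 for x in xs]

pipeline = Pipeline([scale, classify])
print(pipeline.run([2, 7, 9]))  # [0, 1, 1]
```

MLlib's real Pipeline works the same way conceptually: an ordered list of transformers and an estimator, fitted and applied as a unit.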
GraphX
GraphX is a component in Spark for graphs and graph-parallel computation. GraphX is used to model relations between objects: a graph has vertices (objects) and edges (relationships). For example, the vertices Mathew and Justin can be joined by an edge with the relationship "friends". GraphX provides a uniform tool for ETL, exploratory data analysis, and interactive graph computations.

Applications of GraphX include page rank, fraud detection, geographic information systems, and disaster management.
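The vertex/edge model from the Mathew–Justin example can be sketched in plain Python (an illustrative model, not the GraphX API):

```python
# Vertices are objects, edges are labeled relationships between them.
vertices = {1: "Mathew", 2: "Justin"}
edges = [(1, 2, "friends")]           # (src vertex id, dst vertex id, relation)

# A triplet joins each edge with its endpoint vertices, similar to how
# GraphX exposes (source attributes, edge attribute, destination attributes).
triplets = [(vertices[s], rel, vertices[d]) for s, d, rel in edges]
print(triplets)  # [('Mathew', 'friends', 'Justin')]
```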
Spark Architecture
The Spark architecture is based on two important abstractions:
• Resilient Distributed Dataset (RDD): RDDs are the fundamental units of data in Apache Spark. They are split into partitions and can be executed on different nodes of a cluster.
• Directed Acyclic Graph (DAG): The DAG is the scheduling layer of the Spark architecture. It implements stage-oriented scheduling and eliminates the Hadoop MapReduce multistage execution model. For example, a job may be split into Stage 1 (parallelize → filter → map) and Stage 2 (map → reduceByKey).
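The two stages above can be sketched in plain Python (an illustrative model, not the Spark API): Stage 1 filters and maps each word into (word, 1) pairs; Stage 2 groups by key and reduces, which is what reduceByKey does after the shuffle boundary that separates the stages.

```python
# Stage 1: parallelize -> filter -> map
words = ["spark", "dag", "spark", "rdd", "dag", "spark"]
pairs = [(w, 1) for w in words if w != "rdd"]   # drop "rdd", map to pairs

# Stage 2: reduceByKey (in Spark, a shuffle moves equal keys together first)
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n
print(counts)  # {'spark': 3, 'dag': 2}
```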
Spark Architecture
Apache Spark uses a master-slave architecture that consists of a driver, which runs on a master node, and multiple executors, which run across the worker nodes in the cluster.
• The master node hosts the driver program. The Spark code behaves as a driver program and creates a SparkContext, which is a gateway to all the Spark functionalities.
• Spark applications run as independent sets of processes on a cluster. The driver program and SparkContext take care of job execution within the cluster, with resources allocated by the cluster manager.
• A job is split into multiple tasks that are distributed over the worker nodes. When an RDD is created in the SparkContext, it can be distributed across various nodes.
• Worker nodes are slaves that execute the tasks assigned by the cluster manager. Each worker node runs an executor, which is responsible for executing these tasks (using a local cache) and returning the results to the SparkContext.
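The driver/executor split can be sketched in plain Python (an illustrative model, not the Spark API): the "driver" breaks a job into one task per data partition, a pool of "executors" runs them in parallel, and the results come back to the driver.

```python
from concurrent.futures import ThreadPoolExecutor

def task(partition):
    """One task: process a single partition of the data."""
    return sum(x * x for x in partition)

partitions = [[1, 2], [3, 4], [5]]            # an RDD split into partitions

# The thread pool plays the role of executors on worker nodes.
with ThreadPoolExecutor(max_workers=2) as executors:
    partial_results = list(executors.map(task, partitions))

print(sum(partial_results))  # 55 = 1 + 4 + 9 + 16 + 25
```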
Running a Spark Application

How does a Spark application run on a cluster?
• Spark applications run as independent processes, coordinated by the SparkSession object in the driver program.
• The resource or cluster manager assigns tasks to workers, one task per partition.
• A task applies its unit of work to the dataset in its partition and outputs a new partition dataset. Because iterative algorithms apply operations repeatedly to data, they benefit from caching datasets across iterations.
• Each worker node runs an executor that holds its tasks, a cache, and the partitions of data it processes on disk.
• Results are sent back to the driver application or can be saved to disk.
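Why caching helps iterative algorithms can be sketched in plain Python (an illustrative model, not the Spark API): without a cache the dataset would be recomputed on every iteration; with a cache it is computed once and reused.

```python
compute_calls = 0

def load_dataset():
    """Stand-in for an expensive computation (e.g. reading and parsing)."""
    global compute_calls
    compute_calls += 1
    return [1, 2, 3, 4]

cache = None
total = 0
for _ in range(3):                    # three iterations over the same data
    if cache is None:
        cache = load_dataset()        # computed once, then reused
    total += sum(cache)

print(total, compute_calls)  # 30 1  -> 3 iterations, only 1 computation
```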
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spark Tutorial |Simplilearn

More Related Content

PPTX
What Is Apache Spark? | Introduction To Apache Spark | Apache Spark Tutorial ...
PDF
AWS glue technical enablement training
PDF
Introduction to apache spark
PPT
Dimensional Modeling
PDF
Introduction to Data Stream Processing
PPTX
Hypermedia messageing (UNIT 5)
PDF
Introduction to Apache Spark
PDF
Week 5: Elastic Compute Service (ECS) with Alibaba Cloud- DSA 441 Cloud Compu...
What Is Apache Spark? | Introduction To Apache Spark | Apache Spark Tutorial ...
AWS glue technical enablement training
Introduction to apache spark
Dimensional Modeling
Introduction to Data Stream Processing
Hypermedia messageing (UNIT 5)
Introduction to Apache Spark
Week 5: Elastic Compute Service (ECS) with Alibaba Cloud- DSA 441 Cloud Compu...

What's hot (20)

PPTX
Introduction to Apache Spark
PPTX
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
PDF
Apache Spark Introduction
PPTX
Apache Spark overview
PDF
Apache Spark Overview
PPTX
Spark architecture
PDF
Pyspark Tutorial | Introduction to Apache Spark with Python | PySpark Trainin...
PDF
What is Apache Spark | Apache Spark Tutorial For Beginners | Apache Spark Tra...
PPTX
Apache Spark Architecture
PDF
Apache spark
PPTX
Druid deep dive
PPTX
PDF
Everyday I'm Shuffling - Tips for Writing Better Spark Programs, Strata San J...
PDF
Intro to Delta Lake
PPTX
Intro to Apache Spark
PDF
Apache Spark - Basics of RDD | Big Data Hadoop Spark Tutorial | CloudxLab
PPTX
Apache Tez: Accelerating Hadoop Query Processing
PDF
Making Apache Spark Better with Delta Lake
PDF
Apache spark - Architecture , Overview & libraries
PPTX
Apache Spark Core
Introduction to Apache Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Apache Spark Introduction
Apache Spark overview
Apache Spark Overview
Spark architecture
Pyspark Tutorial | Introduction to Apache Spark with Python | PySpark Trainin...
What is Apache Spark | Apache Spark Tutorial For Beginners | Apache Spark Tra...
Apache Spark Architecture
Apache spark
Druid deep dive
Everyday I'm Shuffling - Tips for Writing Better Spark Programs, Strata San J...
Intro to Delta Lake
Intro to Apache Spark
Apache Spark - Basics of RDD | Big Data Hadoop Spark Tutorial | CloudxLab
Apache Tez: Accelerating Hadoop Query Processing
Making Apache Spark Better with Delta Lake
Apache spark - Architecture , Overview & libraries
Apache Spark Core
Ad

Similar to Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spark Tutorial |Simplilearn (20)

PPTX
Apache Spark Overview
PPTX
Getting Started with Apache Spark (Scala)
PPT
An Introduction to Apache spark with scala
PPTX
big data analytics (BAD601) Module-5.pptx
PPTX
Spark SQL Tutorial | Spark SQL Using Scala | Apache Spark Tutorial For Beginn...
PPTX
Machine Learning with SparkR
PPTX
Spark Workshop
PPTX
Spark from the Surface
PDF
Apache Spark PDF
PPTX
Lighting up Big Data Analytics with Apache Spark in Azure
PPTX
Learn Apache Spark: A Comprehensive Guide
PPTX
Pyspark presentationsfspfsjfspfjsfpsjfspfjsfpsjfsfsf
PPTX
sparkbigdataanlyticspoweerpointpptt.pptx
PDF
Started with-apache-spark
PPTX
PPTX
Apachespark 160612140708
PPTX
Apache spark
PPTX
CLOUD_COMPUTING_MODULE5_RK_BIG_DATA.pptx
PPTX
Apache spark
PDF
Hands-on Guide to Apache Spark 3: Build Scalable Computing Engines for Batch ...
Apache Spark Overview
Getting Started with Apache Spark (Scala)
An Introduction to Apache spark with scala
big data analytics (BAD601) Module-5.pptx
Spark SQL Tutorial | Spark SQL Using Scala | Apache Spark Tutorial For Beginn...
Machine Learning with SparkR
Spark Workshop
Spark from the Surface
Apache Spark PDF
Lighting up Big Data Analytics with Apache Spark in Azure
Learn Apache Spark: A Comprehensive Guide
Pyspark presentationsfspfsjfspfjsfpsjfspfjsfpsjfsfsf
sparkbigdataanlyticspoweerpointpptt.pptx
Started with-apache-spark
Apachespark 160612140708
Apache spark
CLOUD_COMPUTING_MODULE5_RK_BIG_DATA.pptx
Apache spark
Hands-on Guide to Apache Spark 3: Build Scalable Computing Engines for Batch ...
Ad

More from Simplilearn (20)

PPTX
Top 50 Scrum Master Interview Questions | Scrum Master Interview Questions & ...
PPTX
Bagging Vs Boosting In Machine Learning | Ensemble Learning In Machine Learni...
PPTX
Future Of Social Media | Social Media Trends and Strategies 2025 | Instagram ...
PPTX
SQL Query Optimization | SQL Query Optimization Techniques | SQL Basics | SQL...
PPTX
SQL INterview Questions .pTop 45 SQL Interview Questions And Answers In 2025 ...
PPTX
How To Start Influencer Marketing Business | Influencer Marketing For Beginne...
PPTX
Cyber Security Roadmap 2025 | How To Become Cyber Security Engineer In 2025 |...
PPTX
How To Become An AI And ML Engineer In 2025 | AI Engineer Roadmap | AI ML Car...
PPTX
What Is GitHub Copilot? | How To Use GitHub Copilot? | How does GitHub Copilo...
PPTX
Top 10 Data Analyst Certification For 2025 | Best Data Analyst Certification ...
PPTX
Complete Data Science Roadmap For 2025 | Data Scientist Roadmap For Beginners...
PPTX
Top 7 High Paying AI Certifications Courses For 2025 | Best AI Certifications...
PPTX
Data Cleaning In Data Mining | Step by Step Data Cleaning Process | Data Clea...
PPTX
Top 10 Data Analyst Projects For 2025 | Data Analyst Projects | Data Analysis...
PPTX
AI Engineer Roadmap 2025 | AI Engineer Roadmap For Beginners | AI Engineer Ca...
PPTX
Machine Learning Roadmap 2025 | Machine Learning Engineer Roadmap For Beginne...
PPTX
Kotter's 8-Step Change Model Explained | Kotter's Change Management Model | S...
PPTX
Gen AI Engineer Roadmap For 2025 | How To Become Gen AI Engineer In 2025 | Si...
PPTX
Top 10 Data Analyst Certification For 2025 | Best Data Analyst Certification ...
PPTX
Complete Data Science Roadmap For 2025 | Data Scientist Roadmap For Beginners...
Top 50 Scrum Master Interview Questions | Scrum Master Interview Questions & ...
Bagging Vs Boosting In Machine Learning | Ensemble Learning In Machine Learni...
Future Of Social Media | Social Media Trends and Strategies 2025 | Instagram ...
SQL Query Optimization | SQL Query Optimization Techniques | SQL Basics | SQL...
SQL INterview Questions .pTop 45 SQL Interview Questions And Answers In 2025 ...
How To Start Influencer Marketing Business | Influencer Marketing For Beginne...
Cyber Security Roadmap 2025 | How To Become Cyber Security Engineer In 2025 |...
How To Become An AI And ML Engineer In 2025 | AI Engineer Roadmap | AI ML Car...
What Is GitHub Copilot? | How To Use GitHub Copilot? | How does GitHub Copilo...
Top 10 Data Analyst Certification For 2025 | Best Data Analyst Certification ...
Complete Data Science Roadmap For 2025 | Data Scientist Roadmap For Beginners...
Top 7 High Paying AI Certifications Courses For 2025 | Best AI Certifications...
Data Cleaning In Data Mining | Step by Step Data Cleaning Process | Data Clea...
Top 10 Data Analyst Projects For 2025 | Data Analyst Projects | Data Analysis...
AI Engineer Roadmap 2025 | AI Engineer Roadmap For Beginners | AI Engineer Ca...
Machine Learning Roadmap 2025 | Machine Learning Engineer Roadmap For Beginne...
Kotter's 8-Step Change Model Explained | Kotter's Change Management Model | S...
Gen AI Engineer Roadmap For 2025 | How To Become Gen AI Engineer In 2025 | Si...
Top 10 Data Analyst Certification For 2025 | Best Data Analyst Certification ...
Complete Data Science Roadmap For 2025 | Data Scientist Roadmap For Beginners...

Recently uploaded (20)

PDF
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 2).pdf
DOC
Soft-furnishing-By-Architect-A.F.M.Mohiuddin-Akhand.doc
PDF
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
PPTX
202450812 BayCHI UCSC-SV 20250812 v17.pptx
PDF
FORM 1 BIOLOGY MIND MAPS and their schemes
PDF
Weekly quiz Compilation Jan -July 25.pdf
PPTX
Unit 4 Computer Architecture Multicore Processor.pptx
PDF
احياء السادس العلمي - الفصل الثالث (التكاثر) منهج متميزين/كلية بغداد/موهوبين
PDF
advance database management system book.pdf
PPTX
TNA_Presentation-1-Final(SAVE)) (1).pptx
DOCX
Cambridge-Practice-Tests-for-IELTS-12.docx
PDF
International_Financial_Reporting_Standa.pdf
PDF
Uderstanding digital marketing and marketing stratergie for engaging the digi...
PDF
AI-driven educational solutions for real-life interventions in the Philippine...
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PPTX
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
PDF
My India Quiz Book_20210205121199924.pdf
PDF
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
PDF
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
PDF
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 2).pdf
Soft-furnishing-By-Architect-A.F.M.Mohiuddin-Akhand.doc
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
202450812 BayCHI UCSC-SV 20250812 v17.pptx
FORM 1 BIOLOGY MIND MAPS and their schemes
Weekly quiz Compilation Jan -July 25.pdf
Unit 4 Computer Architecture Multicore Processor.pptx
احياء السادس العلمي - الفصل الثالث (التكاثر) منهج متميزين/كلية بغداد/موهوبين
advance database management system book.pdf
TNA_Presentation-1-Final(SAVE)) (1).pptx
Cambridge-Practice-Tests-for-IELTS-12.docx
International_Financial_Reporting_Standa.pdf
Uderstanding digital marketing and marketing stratergie for engaging the digi...
AI-driven educational solutions for real-life interventions in the Philippine...
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
My India Quiz Book_20210205121199924.pdf
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
ChatGPT for Dummies - Pam Baker Ccesa007.pdf

Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spark Tutorial |Simplilearn

  • 2. 1. What is Spark? 2. Components of Spark Spark Core Spark SQL Spark Streaming Spark MLlib GraphX 3. Apache Spark Architecture 4. Running a Spark Application What’s in it for you?
  • 3. What is Apache Spark? Apache Spark is a top-level open-source cluster computing framework used for real-time processing and analysis of a large amount of data
  • 4. What is Apache Spark? Apache Spark is a top-level open-source cluster computing framework used for real-time processing and analysis of a large amount of data Fast processing Spark processes data faster since it saves time in reading and writing operations
  • 5. What is Apache Spark? Apache Spark is a top-level open-source cluster computing framework used for real-time processing and analysis of a large amount of data Fast processing Real-time streaming Spark processes data faster since it saves time in reading and writing operations Spark allows real-time streaming and processing of data
  • 6. What is Apache Spark? Apache Spark is a top-level open-source cluster computing framework used for real-time processing and analysis of a large amount of data Fast processing Real-time streaming In-memory computation Spark processes data faster since it saves time in reading and writing operations Spark allows real-time streaming and processing of data Spark has DAG execution engine that provides in-memory computation
  • 7. What is Apache Spark? Apache Spark is a top-level open-source cluster computing framework used for real-time processing and analysis of a large amount of data Fast processing Real-time streaming In-memory computation Fault tolerant Spark processes data faster since it saves time in reading and writing operations Spark allows real-time streaming and processing of data Spark has DAG execution engine that provides in-memory computation Spark is fault tolerant through RDDs which are designed to handle the failure of any worker node in the cluster
  • 10. Spark Core Spark SQL SQL Apache Spark Components
  • 11. Spark Streaming Spark Core Spark SQL SQL Streaming Apache Spark Components
  • 12. MLlib Spark Streaming Spark Core Spark SQL SQL Streaming MLlib Apache Spark Components
  • 13. MLlib Spark Streaming Spark Core Spark SQL GraphX SQL Streaming MLlib Apache Spark Components
  • 14. Spark Core Spark is the core engine for large-scale parallel and distributed data processing
  • 15. Spark Core Spark is the core engine for large-scale parallel and distributed data processing Memory management and fault recovery Scheduling, distributing and monitoring jobs on a cluster Interacting with storage system Performs the following:
  • 16. Spark RDD Resilient Distributed Datasets (RDDs) are the building blocks of any Spark application Create RDD Transformations RDD Actions Results Transformations are Operations (such as map, filter, join, union) that are performed on an RDD that yields a new RDD containing the result Actions are operations (such as reduce, first, count) that return a value after running a computation on an RDD
  • 17. Spark SQL Spark SQL is Apache Spark’s module for working with structured data SQL
  • 18. Spark SQL Spark SQL is Apache Spark’s module for working with structured data SQL Integrated You can integrate Spark SQL with Spark programs and query structured data inside Spark programs Spark SQL features
  • 19. Spark SQL Spark SQL is Apache Spark’s module for working with structured data SQL Integrated High Compatibility You can integrate Spark SQL with Spark programs and query structured data inside Spark programs You can run unmodified Hive queries on existing warehouses in Spark SQL. With existing Hive data, queries and UDFs, Spark SQL offers full compatibility Spark SQL features
  • 20. Spark SQL Spark SQL is Apache Spark’s module for working with structured data SQL Integrated High Compatibility Scalability You can integrate Spark SQL with Spark programs and query structured data inside Spark programs You can run unmodified Hive queries on existing warehouses in Spark SQL. With existing Hive data, queries and UDFs, Spark SQL offers full compatibility Spark SQL leverages RDD model as it supports large jobs and mid- query fault tolerance. Moreover, for both interactive and long queries, it uses the same engine Spark SQL features
  • 21. Spark SQL Spark SQL is Apache Spark’s module for working with structured data SQL Integrated Spark SQL features High Compatibility Scalability Standard Connectivity You can integrate Spark SQL with Spark programs and query structured data inside Spark programs You can run unmodified Hive queries on existing warehouses in Spark SQL. With existing Hive data, queries and UDFs, Spark SQL offers full compatibility Spark SQL leverages RDD model as it supports large jobs and mid- query fault tolerance. Moreover, for both interactive and long queries, it uses the same engine You can easily connect Spark SQL with JDBC or ODBC. For connectivity for business intelligence tools, both turned as industry norms
  • 22. Spark SQL Spark SQL is Apache Spark’s module for working with structured data DataFrame DSLSpark SQL and HQL DataFrame API Data Source API CSV JSON JDBC SQL Architecture SQL
  • 23. Spark SQL Spark SQL has three main layers Spark SQL is Apache Spark’s module for working with structured data Language API SchemaRDD Data Sources Spark is compatible and even supported by the languages like Python, HiveQL, Scala, and Java As Spark SQL works on schema, tables, and records, you can use SchemaRDD or data frame as a temporary table Data sources for Spark SQL are different like JSON document, HIVE tables, and Cassandra database SQL
  • 24. Spark SQL Spark allows you to define custom SQL functions called User Defined Functions (UDFs) SQL def lowerRemoveAllWhiteSpaces(s: String): String = { s.tolowerCase().replace(“S”, ‘’”) } val lowerRemoveAllWhiteSpacesUDF = udf[String, String] (lowerRemoveAllWhiteSpaces) val sourceDF = spark.createDF( List( (“ WELCOME “) (“ SpaRk SqL “) ), List( (“text”, StringType, true) ) ) sourceDF.select( lowerRemoveAllWhiteSpacesUDF(col(“text”)).as(“clean_text”) ).show() UDF that removes all the whitespace and lowercases all the characters in a string clean_text welcome sparksql Output
  • 25. Spark Streaming Spark Streaming an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams Streaming
  • 26. Spark Streaming Spark Streaming an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams Data can be ingested from many sources and the processed data can be pushed out to different filesystems Streaming
  • 27. Spark Streaming Spark Streaming an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams Data can be ingested from many sources and the processed data can be pushed out to different filesystems Streaming Streaming data sources Static data sources
  • 28. Spark Streaming Spark Streaming an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams Data can be ingested from many sources and the processed data can be pushed out to different filesystems Streaming Streaming Streaming data sources Static data sources
  • 29. Spark Streaming Spark Streaming an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams Data can be ingested from many sources and the processed data can be pushed out to different filesystems Streaming Streaming Streaming data sources Static data sources Data storage
  • 30. Spark Streaming Spark Streaming an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of results in batches Streaming Engine Input data stream Batches of input data Batches of processed data Streaming
  • 31. Spark Streaming Here is an example of a basic RDD operation to extract individual words from lines of text in an input data stream: a flatMap operation transforms each RDD of the Lines DStream (lines from time 0–1, 1–2, 2–3, 3–4) into the corresponding RDD of the Words DStream (words from time 0–1, 1–2, 2–3, 3–4)
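The batching semantics of that flatMap can be sketched in plain Python (a simulation of the DStream model, not actual Spark Streaming code):

```python
# Plain-Python sketch of the DStream flatMap semantics: each time
# interval yields one batch of lines, and flatMap turns every batch of
# lines into a batch of words, one output batch per input batch.

def flat_map(f, batch):
    """Apply f to each element and flatten the results into one list."""
    return [item for element in batch for item in f(element)]

# One batch of lines per time interval (time 0-1, time 1-2, ...).
lines_dstream = [
    ["spark streaming divides data"],
    ["into small batches", "of input data"],
]

words_dstream = [flat_map(str.split, batch) for batch in lines_dstream]
print(words_dstream)
# [['spark', 'streaming', 'divides', 'data'],
#  ['into', 'small', 'batches', 'of', 'input', 'data']]
```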
  • 32. Spark MLlib MLlib is Spark’s machine learning library. Its goal is to make practical machine learning scalable and easy. At a high level, it provides the following:
    - ML Algorithms: classification, regression, clustering, and collaborative filtering
    - Featurization: feature extraction, transformation, dimensionality reduction, and selection
    - Pipelines: tools for constructing, evaluating, and tuning ML pipelines
    - Utilities: linear algebra, statistics, data handling
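The pipeline idea above can be sketched in plain Python: stages with fit/transform chained into one object. The stage names here are hypothetical illustrations, not MLlib classes:

```python
# A minimal plain-Python sketch of the ML-pipeline idea: a Pipeline
# chains featurization and feature-computation stages, each exposing
# fit/transform, so the whole workflow runs as one object.
# (Tokenizer and CountFeatures are hypothetical, not MLlib classes.)

class Tokenizer:
    def fit(self, data): return self
    def transform(self, data):
        return [text.lower().split() for text in data]

class CountFeatures:
    def fit(self, data): return self
    def transform(self, data):
        return [len(tokens) for tokens in data]

class Pipeline:
    def __init__(self, stages): self.stages = stages
    def fit_transform(self, data):
        # Run each stage in order, feeding its output to the next stage.
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data

pipe = Pipeline([Tokenizer(), CountFeatures()])
print(pipe.fit_transform(["Spark makes ML scalable", "and easy"]))  # [4, 2]
```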
  • 37. GraphX GraphX is a component in Spark for graphs and graph-parallel computation. GraphX is used to model relations between objects: a graph has vertices (objects) and edges (relationships). For example, the vertices Mathew and Justin can be connected by an edge representing the relationship “Friends”
  • 38. GraphX GraphX is a component in Spark for graphs and graph-parallel computation. It provides a uniform tool for ETL, exploratory data analysis, and interactive graph computations
  • 39. GraphX GraphX is a component in Spark for graphs and graph-parallel computation. Applications of GraphX include PageRank, fraud detection, geographic information systems, and disaster management
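PageRank, the first application listed, can be sketched with a simple power iteration on a tiny directed graph (plain Python, not GraphX code):

```python
# Minimal PageRank power-iteration sketch on a tiny directed graph.

def pagerank(graph, damping=0.85, iters=50):
    """graph maps each vertex to the list of vertices it links to."""
    n = len(graph)
    ranks = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        # Every vertex keeps a base share, plus damped shares from in-links.
        new_ranks = {v: (1 - damping) / n for v in graph}
        for v, out_links in graph.items():
            share = ranks[v] / len(out_links)
            for target in out_links:
                new_ranks[target] += damping * share
        ranks = new_ranks
    return ranks

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" collects links from both "a" and "b", so it ranks highest.
print(max(ranks, key=ranks.get))  # c
```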
  • 41. Spark Architecture Spark Architecture is based on 2 important abstractions:
    - Resilient Distributed Dataset (RDD): RDDs are the fundamental units of data in Apache Spark; they are split into partitions that can be processed on different nodes of a cluster
    - Directed Acyclic Graph (DAG): the DAG is the scheduling layer of the Spark Architecture; it implements stage-oriented scheduling and avoids the Hadoop MapReduce multistage execution model (e.g., Stage 1: parallelize → filter → map; Stage 2: map → reduceByKey)
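The lazy, plan-then-execute behavior behind the DAG can be sketched in plain Python (an analogy, not Spark's implementation): transformations only record lineage, and nothing runs until an action walks the recorded plan.

```python
# Plain-Python sketch of lazy DAG evaluation: filter and map only
# record steps in a lineage; collect (the action) executes the plan.

class LazyDataset:
    def __init__(self, data, plan=()):
        self._data = data
        self._plan = plan              # the recorded chain of transformations

    def map(self, f):
        return LazyDataset(self._data, self._plan + (("map", f),))

    def filter(self, f):
        return LazyDataset(self._data, self._plan + (("filter", f),))

    def collect(self):                 # the action: run the whole plan
        data = list(self._data)
        for op, f in self._plan:
            data = [f(x) for x in data] if op == "map" else [x for x in data if f(x)]
        return data

rdd = LazyDataset(range(6)).filter(lambda x: x % 2 == 0).map(lambda x: x * 10)
print(rdd.collect())  # [0, 20, 40]
```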
  • 44. Spark Architecture Apache Spark uses a master-slave architecture that consists of a driver, which runs on a master node, and multiple executors, which run across the worker nodes in the cluster. The master node hosts the Driver Program: the Spark code behaves as a driver program and creates a SparkContext, which is a gateway to all the Spark functionalities
  • 45. Spark Architecture Spark applications run as independent sets of processes on a cluster. The driver program and SparkContext take care of the job execution within the cluster, with the help of the Cluster Manager
  • 46. Spark Architecture A job is split into multiple tasks that are distributed over the worker nodes. When an RDD is created in the SparkContext, it can be distributed across various nodes. Worker nodes are slaves that execute the different tasks; each worker node runs an executor with a cache and its assigned tasks
  • 47. Spark Architecture The executor is responsible for the execution of these tasks. Worker nodes execute the tasks assigned by the Cluster Manager and return the results back to the SparkContext
  • 50. How does a Spark application run on a cluster? Spark applications run as independent processes, coordinated by the SparkSession object in the driver program
  • 51. The resource or cluster manager assigns tasks to workers, one task per partition
  • 52. A task applies its unit of work to the dataset in its partition and outputs a new partition dataset. Because iterative algorithms apply operations repeatedly to data, they benefit from caching datasets across iterations (each worker node runs an executor holding tasks, a cache of partitions, and disk-backed data)
  • 53. Results are sent back to the driver application or can be saved to disk
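The "one task per partition" flow above can be simulated in plain Python with a thread pool standing in for the workers (a sketch of the idea, not how Spark schedules tasks):

```python
# Sketch of "one task per partition": the driver splits a dataset into
# partitions, each worker runs one task on its partition, and the
# per-partition results come back to the driver.
from concurrent.futures import ThreadPoolExecutor

def task(partition):
    """The unit of work applied to one partition: square every record."""
    return [x * x for x in partition]

data = list(range(8))
num_partitions = 4
# Round-robin split of the records into partitions.
partitions = [data[i::num_partitions] for i in range(num_partitions)]

with ThreadPoolExecutor(max_workers=num_partitions) as workers:
    results = list(workers.map(task, partitions))   # one task per partition

# The driver gathers the per-partition results.
print(sorted(x for part in results for x in part))
# [0, 1, 4, 9, 16, 25, 36, 49]
```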