Landon Robinson & Jack Chapa
Spark Streaming
Headaches and Breakthroughs in Building
Continuous Applications
Who We Are
Landon Robinson
Data Engineer
Jack Chapa
Data Engineer
Big Data Team @ SpotX
But first… why are we here?
Because Spark Streaming...
• is very powerful
• can supercharge your infrastructure
• … and can be very complex!
Lots of headaches and breakthroughs!
Takeaways
Leave with a few actionable items that we wish we knew
when we started with Spark Streaming.
Focus Areas
● Streaming Basics
● Testing
● Monitoring & Alerts
● Batch Intervals & Resources
● Helpful Configurations
● Backpressure
● Data Enrichment & Transformations
Our Company
The Trusted Platform For
Premium Publishers and
Broadcasters
We Process a Lot of Data
Data:
- 220 MM+ Total Files/Blocks
- 8 PB+ HDFS Space
- 20 TB+ new data daily
- 100MM+ records/minute
- 300+ Data Nodes
Apps:
- Thousands of daily Spark apps
- Hundreds of daily user queries
- Multiple 24/7 Streaming apps
Spark Streaming is Key for Us
Our uses include:
- Rapid ingestion of data into the warehouse for querying
- Machine learning on near-live data streams
- Ability to react to and impact live situations
- Accelerated processing / updating of metadata
- Real-time visualization of data streams and processing
Spark Streaming Basics
a brief overview
Spark Streaming Basics
Spark Streaming is an extension of Spark that
enables scalable, high-throughput, fault-tolerant
processing of live data streams.
• Stream == live data stream
– Topic == Kafka’s name for a stream
• DStream == sequence of RDDs formed from reading a data stream
• Batch == a self-contained job within your Streaming app that processes a segment of the stream
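
A minimal sketch of these pieces together (a hypothetical app assuming Spark 2.x and the spark-streaming artifact; the socket source is just for illustration):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// local[2]: one core for the receiver, one for processing (local testing)
val conf = new SparkConf().setMaster("local[2]").setAppName("streaming-basics")
val ssc = new StreamingContext(conf, Seconds(10)) // 10-second batch interval

// Every 10-second segment of the stream becomes one batch (a self-contained job)
val lines = ssc.socketTextStream("localhost", 9999)
lines.count().print() // prints the record count of each batch

ssc.start()
ssc.awaitTermination()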
Testing
Rapid development and
testing of Spark apps
Use Spark in Local Mode
You can start building Spark Streaming apps
in minutes, using Spark locally!
On your local machine
• No cluster needed!
• Great for rough testing
We Recommend:
IntelliJ Community Edition
• with SBT: For dependency management
Use Spark in Local Mode
In your build.sbt:
• src/test/scala => “provided”
• src/main/scala => “compiled”
The Scala Build Tool is your friend!
Simply:
• Import Spark libraries
• Invoke a Context and/or Session
• Set master to local[*] or local[n]
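
A minimal build.sbt sketch along those lines (versions are illustrative, not prescriptive):

// build.sbt
name := "streaming-app"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // "provided": on the compile and test classpaths (so local runs and
  // unit tests work), but excluded from the assembly jar you ship
  "org.apache.spark" %% "spark-core" % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.4.0" % "provided"
)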
Example Unit Test using just a SparkContext
Invoke a local session:
• In your unit test classes
• Test logic on small datasets
Add to your deployment pipeline
for a nice pre-release gut check!
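
A minimal sketch of such a test (assuming scalatest on the test classpath; FunSuite moved packages in newer scalatest versions):

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.FunSuite

class WordCountSuite extends FunSuite {
  test("counts values in a small dataset") {
    // local[*] runs Spark inside the test JVM - no cluster needed
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("unit-test"))
    try {
      val counts = sc.parallelize(Seq("a", "b", "a")).countByValue()
      assert(counts("a") == 2)
    } finally {
      sc.stop() // release the context so other tests can create one
    }
  }
}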
Unit Testing
Spark Streaming Apps can easily be unit tested
- Using .queueStream()
- Using a spark testing library
Libraries
- spark-testing-base
- sscheck
- spark-tests
Use Cases
- DStream actions
- Business Logic
- Integration
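
For example, .queueStream() feeds hand-built RDDs through a DStream with no external source - a rough sketch (names and logic illustrative):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.collection.mutable

val conf = new SparkConf().setMaster("local[2]").setAppName("queue-test")
val ssc = new StreamingContext(conf, Seconds(1))

// Each queued RDD is served as one batch of the DStream
val queue = mutable.Queue(ssc.sparkContext.parallelize(Seq(1, 2, 3)))
val counts = mutable.ListBuffer[Long]()

ssc.queueStream(queue).foreachRDD(rdd => counts += rdd.count())

ssc.start()
ssc.awaitTerminationOrTimeout(3000) // let a few batches run
ssc.stop()
// a real test would now assert on `counts`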
Example Library: spark-testing-base
- Easy to Use
- Helpful wrappers
- Integrates w/ scalatest
- Minimal code required
- Clock management
- Runs alongside other tests
GitHub: https://github.com/holdenk/spark-testing-base
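
A rough sketch using its StreamingSuiteBase trait (the testOperation helper shown here exists in the library, though exact signatures may vary by version):

import com.holdenkarau.spark.testing.StreamingSuiteBase
import org.apache.spark.streaming.dstream.DStream
import org.scalatest.FunSuite

class FilterSuite extends FunSuite with StreamingSuiteBase {
  test("filters out odd numbers") {
    val input = List(List(1, 2, 3, 4)) // one inner List == one batch
    val expected = List(List(2, 4))
    def evens(ds: DStream[Int]): DStream[Int] = ds.filter(_ % 2 == 0)
    // Runs the operation batch by batch with a managed test clock
    testOperation(input, evens _, expected, ordered = true)
  }
}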
Monitoring
Tracking and visualizing
performance of your
app
Monitoring is Awesome
It can reveal:
• How your app is performing
• Problems + Bugs!
And provide opportunities to:
• See and address issues
• Observe behavior visually
But monitoring can be tough to implement!
Monitoring (a less than ideal approach)
You could do it all in the app...
Example: Looping over RDDs to:
• Count records
• Track Kafka offsets
• Measure processing time / delays
But it’s less than ideal...
• Calculating performance significantly impacts performance… not great.
• All of these metrics are already calculated by Spark anyway!
Monitoring and Visualization (using Listeners)
Use Spark Listeners to access
metrics in the background!
Let Spark do the hard work:
• Batch duration, delays
• Record throughput
• Stream position recovery
Come to our talk: Spark Listeners:
A Crash Course in Fast, Easy
Monitoring!
• Room 3016 | Today @ 5:30 PM
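
A minimal sketch of the listener approach (the BatchInfo fields below are part of the DStream API; how you ship the metrics out is up to you):

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

class MetricsListener extends StreamingListener {
  override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
    val info = batch.batchInfo
    // Spark has already computed these - no extra work inside your job
    println(s"records=${info.numRecords} " +
      s"processingMs=${info.processingDelay.getOrElse(-1L)} " +
      s"schedulingMs=${info.schedulingDelay.getOrElse(-1L)}")
  }
}

// Register once, before ssc.start():
// ssc.addStreamingListener(new MetricsListener())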
Kafka Offset Recovery
Saving your place
elsewhere
Writing Offsets to MySQL
Inside the Spark Listener class, after a batch completes, you can access an object generated by Spark containing the offsets you processed. Take those offsets and back them up to a DB...
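
A rough sketch of that flow (assumptions: the direct Kafka stream reports its OffsetRanges in the batch's input-info metadata under the key "offsets"; the table and column names are illustrative):

import java.sql.DriverManager
import org.apache.spark.streaming.kafka010.OffsetRange
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

class OffsetBackupListener(jdbcUrl: String) extends StreamingListener {
  override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
    val conn = DriverManager.getConnection(jdbcUrl)
    try {
      batch.batchInfo.streamIdToInputInfo.values.foreach { inputInfo =>
        // Assumption: direct Kafka streams expose their ranges as "offsets"
        inputInfo.metadata.get("offsets").toSeq
          .flatMap(_.asInstanceOf[Seq[OffsetRange]])
          .foreach { range =>
            val stmt = conn.prepareStatement(
              "REPLACE INTO kafka_offsets (topic, part, until_offset) VALUES (?, ?, ?)")
            stmt.setString(1, range.topic)
            stmt.setInt(2, range.partition)
            stmt.setLong(3, range.untilOffset)
            stmt.executeUpdate()
            stmt.close()
          }
      }
    } finally conn.close()
  }
}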
Reading Offsets from MySQL
Your offsets are now stored in a DB after each batch completes. Whenever your app restarts, it reads those offsets from the DB... and starts processing where it last left off!
Example: Reading Offsets from MySQL
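
A rough sketch of the restart path (assuming the spark-streaming-kafka-0-10 direct stream; the schema matches the illustrative one above):

import java.sql.DriverManager
import org.apache.kafka.common.TopicPartition

def loadOffsets(jdbcUrl: String): Map[TopicPartition, Long] = {
  val conn = DriverManager.getConnection(jdbcUrl)
  try {
    val rs = conn.createStatement()
      .executeQuery("SELECT topic, part, until_offset FROM kafka_offsets")
    val offsets = scala.collection.mutable.Map[TopicPartition, Long]()
    while (rs.next()) {
      // Resume from just after the last fully processed offset
      offsets += new TopicPartition(rs.getString("topic"), rs.getInt("part")) ->
        rs.getLong("until_offset")
    }
    offsets.toMap
  } finally conn.close()
}

// Pass the stored offsets as the stream's starting position:
// val stream = KafkaUtils.createDirectStream[String, String](ssc,
//   LocationStrategies.PreferConsistent,
//   ConsumerStrategies.Subscribe[String, String](topics, kafkaParams, loadOffsets(url)))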
Timing Logging (around actions)
- Record timing info for fast troubleshooting
- Escalate alarms to the appropriate team
- Quickly resolve issues while the app continues running
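
A tiny sketch of timing around an action (names illustrative):

// Wrap any Spark action to log how long it took
def timed[T](label: String)(action: => T): T = {
  val start = System.currentTimeMillis()
  val result = action
  println(s"$label took ${System.currentTimeMillis() - start} ms")
  result
}

// Usage inside foreachRDD:
// stream.foreachRDD { rdd =>
//   val n = timed("count") { rdd.count() }
// }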
React
How do I react to this monitoring?
● Heartbeats
● Scheduled Monitor Jobs
○ Version Updates
○ Ensure Running
○ Act on failure/problem
● Monitoring Alarms
○ Look at them!
Batch Intervals
Optimizing for speed
and resource efficiency
Setting Appropriate Batch Intervals
An appropriate batch interval is key to
an app that is quick and efficient.
You want batches that process faster than
the interval, but not so fast that resources
are idling and therefore wasted!
The effectiveness of an interval is affected by:
• Resource allocation (CPU + RAM)
• Quantity of work
• Quantity of data
Setting Appropriate Batch Intervals
Consider these questions:
How quickly do I need to process data?
• Can I slow it down to save resources?
What is my resource budget / allocation?
• Can I increase? Can I cut back?
• Bigger interval = more time to process
• … but also more data to process
• Smaller interval = the opposite
Setting Appropriate Batch Intervals
Tips for finding an optimal combination:
Start small!
a. Short batch interval (seconds)
b. Modest resources
Whichever you have in more flexible
supply (a or b), increase accordingly.
Again: processing time < interval = good.
Comfortably less, not significantly less.
Additional Resource Notes
- Scale down when possible
- Free up resources or save on cloud utilization spend
- Avoid preemption
- Use resource pools with prioritization
- With preemption disabled if you can
- Set appropriate # of partitions for Kafka topics
- Higher volume == higher partition count
- Higher partition count == greater parallelization
Helpful Configuration Settings
Configuring your app to
be performant and
efficient
Helpful Configuration Settings
Spark
• spark.memory.useLegacyMode = true
– spark.storage.memoryFraction=0.03
• spark.submit.deployMode = cluster
• spark.serializer = org.apache.spark.serializer.KryoSerializer
• spark.rdd.compress = true
– spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
• spark.shuffle.service.enabled = true
• spark.streaming.blockInterval = 300
Kafka
• enable.auto.commit = "false"
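
As a sketch, the Spark settings land on the SparkConf (or as --conf flags to spark-submit), while the Kafka setting belongs in the consumer params you pass to the stream:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.rdd.compress", "true")
  .set("spark.io.compression.codec", "org.apache.spark.io.SnappyCompressionCodec")
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.streaming.blockInterval", "300") // interpreted as milliseconds

// Kafka consumer params (not SparkConf):
val kafkaParams = Map[String, Object]("enable.auto.commit" -> (false: java.lang.Boolean))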
Backpressure
Use Case:
You have irregular spikes in message throughput from Kafka topics.
• Backpressure dynamically alters the rate at which data is received per batch from Kafka.
• Prevents overwhelming the app at startup and under peak load.
Settings:
• spark.streaming.backpressure.enabled = true
• spark.streaming.kafka.maxRatePerPartition = 20000
– max rate (messages/second) at which each Kafka partition will be read
• PID Rate Estimator: can be used to tweak the rate based on batch performance
– spark.streaming.backpressure.pid.*
Source: https://www.linkedin.com/pulse/enable-back-pressure-make-your-spark-streaming-production-lan-jiang/
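
The same settings as a SparkConf sketch:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.streaming.backpressure.enabled", "true")
  // Hard ceiling per partition, even while backpressure adjusts below it
  .set("spark.streaming.kafka.maxRatePerPartition", "20000")
  // Optional PID tuning (the defaults are usually reasonable)
  .set("spark.streaming.backpressure.pid.proportional", "1.0")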
Transformations
Bringing streaming and
static data together
Transformations (Streaming + Static)
transform()
● Allows RDD-level access to data.
● Use case: joining with another RDD
updateStateByKey() / mapWithState()
● Apply function to each key - useful for keeping
track of state
● Use case: maintaining state between batches
(e.g. rolling join w/ two streams)
reduceByKey()
● Reduce a keyed RDD with appropriate
function.
● Use case: deduping, aggregations
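
A short sketch of the stateful pattern, using mapWithState to keep a running count per key (key/value types and names are illustrative; mapWithState also requires ssc.checkpoint(...) to be set):

import org.apache.spark.streaming.{State, StateSpec}

// Carry a running total per key across batches
val spec = StateSpec.function((key: String, value: Option[Int], state: State[Long]) => {
  val total = state.getOption.getOrElse(0L) + value.getOrElse(0)
  state.update(total)
  (key, total) // emitted downstream every batch
})

// keyedStream: DStream[(String, Int)]
// val runningTotals = keyedStream.mapWithState(spec)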
Joining Streaming and Static Data
Using the transform() method on a DStream:
Apply an RDD-to-RDD function to every RDD of the DStream.
• Useful for applying arbitrary RDD operations on a DStream
• Great for enriching streaming data with supplemental static data
Source: https://hadoopsters.net/2017/11/26/how-to-join-static-data-with-streaming-data-dstream-in-spark/
transactions = … // streaming dataset (dstream)
transaction_details = … // static dataset (rdd)
val complete_transaction_data = transactions.transform(live_transaction =>
  live_transaction.join(transaction_details))
Effective Static Joining
How do we handle static and persistent data?
Driver:
● Broadcast if small enough
● Read on the driver every batch, then join
Worker:
● Connect on the worker - lazy val connection object
● Useful for persisting data
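
A common sketch of the worker-side pattern (assumption: a singleton object so each executor JVM opens one connection, lazily, on first use):

import java.sql.{Connection, DriverManager}

object DbConnection {
  // Initialized once per executor JVM; never serialized from the driver
  lazy val connection: Connection =
    DriverManager.getConnection("jdbc:mysql://dbhost/mydb", "user", "pass")
}

// stream.foreachRDD { rdd =>
//   rdd.foreachPartition { records =>
//     val conn = DbConnection.connection // reused across batches
//     records.foreach { r => /* enrich or persist with conn */ }
//   }
// }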
Review
Streaming isn’t always easy… but here are some great takeaways!
• Testing: Use Spark locally w/ unit tests
• Monitoring: Use listeners & react
• Batch Intervals & Resources: Be thoughtful!
• Configuration: Lots of awesome ones!
• Transformations: Do more with your streaming data!
• Offset Recovery: Stop worrying and love the offset management!
Contact Us
Landon Robinson
• lrobinson@spotx.tv
Jack Chapa
• jchapa@spotx.tv
hadoopsters.dev
https://gist.github.com/hadoopsters
Q & A