Arbitrary Stateful Aggregations
using Structured Streaming
in Apache Spark™
Burak Yavuz
5/16/2017
Outline
• Structured Streaming Concepts
• Stateful Processing in Structured Streaming
• Use Cases
• Demos
The simplest way to perform streaming analytics
is not having to reason about streaming at all
New Model

Input: data from the source as an append-only table
Trigger: how frequently to check the input for new data
Query: operations on the input; the usual map/filter/reduce, plus new window and session ops

[Diagram: with a trigger every 1 sec, the input table grows at times 1, 2, 3 (data up to 1, 2, 3), and the query runs over the whole input at each trigger.]
New Model

Result: the final operated table, updated every trigger interval
Output: what part of the result to write to the data sink after every trigger
Complete output: write the full result table every time

[Diagram: with a trigger every 1 sec, each batch of input (data up to 1, 2, 3) produces a result table (result for data up to 1, 2, 3); in complete mode, the output is all the rows in the result table.]
New Model

Result: the final operated table, updated every trigger interval
Output: what part of the result to write to the data sink after every trigger
Complete output: write the full result table every time
Append output: write only the new rows added to the result table since the previous batch
*Not all output modes are feasible with all queries

[Diagram: the same pipeline, but in append mode the output is only the rows that are new since the last trigger.]
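A minimal sketch of this model in code, assuming the built-in rate source from Spark 2.2 as the input (the source choice and query are illustrative, not from the talk):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

val spark = SparkSession.builder.appName("NewModelSketch").getOrCreate()
import spark.implicits._

// Input: an unbounded, append-only table; the rate source emits
// (timestamp, value) rows continuously for testing.
val input = spark.readStream.format("rate").option("rowsPerSecond", "10").load()

// Query: a windowed count, written exactly as it would be in batch.
val result = input.groupBy(window($"timestamp", "1 minute")).count()

// Output: complete mode rewrites the full result table at every trigger.
val query = result.writeStream
  .outputMode("complete")
  .format("console")
  .start()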
Output Modes

• Append mode (default): new rows added to the Result Table since the last trigger are written to the sink. Rows are output only once and cannot be rescinded.
Example use cases: ETL
Output Modes

• Complete mode: the whole Result Table is written to the sink after every trigger. Supported for aggregation queries.
Example use cases: Monitoring
Output Modes

• Update mode (available since Spark 2.1.1): only the rows in the Result Table that were updated since the last trigger are written to the sink.
Example use cases: Alerting, Sessionization
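As a hedged sketch, the output mode is chosen on the writer and validated against the query when the stream starts (parsedData here stands for any streaming DataFrame, as on the later slides):

import org.apache.spark.sql.streaming.OutputMode

val query = parsedData
  .groupBy("device")
  .count()
  .writeStream
  .outputMode(OutputMode.Update())   // or OutputMode.Append() / OutputMode.Complete()
  .format("console")
  .start()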
Outline
• Structured Streaming Concepts
• Stateful Processing in Structured Streaming
• Use Cases
• Demos
Event-time Aggregations

Many use cases require aggregate statistics by event time
E.g. what is the number of errors in each system in 1-hour windows?

Many challenges: extracting event time from data; handling late, out-of-order data
DStream APIs were insufficient for event-time operations
Event-time Aggregations

Windowing is just another type of grouping in Structured Streaming

Number of records every hour:

parsedData
  .groupBy(window($"timestamp", "1 hour"))
  .count()

Average signal strength of each device every 10 minutes:

parsedData
  .groupBy($"device", window($"timestamp", "10 minutes"))
  .avg("signal")

Use built-in functions to extract event time; no need for separate extractors
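A sketch of that extraction, assuming the raw stream carries JSON in a binary value column (as a Kafka source would) with a string field ts; the schema and field names are illustrative. Built-in functions parse event time inline, so no separate extractor class is needed:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val schema = new StructType()
  .add("device", StringType).add("signal", IntegerType).add("ts", StringType)

val parsedData = rawData
  .select(from_json($"value".cast("string"), schema).as("record"))
  .select($"record.*")
  .withColumn("timestamp", $"ts".cast("timestamp"))   // event time as a proper column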
Advanced Aggregations

Powerful built-in aggregations:
variance, stddev, kurtosis, stddev_samp, collect_list, collect_set, corr, approx_count_distinct, ...

Multiple simultaneous aggregations:

parsedData
  .groupBy(window($"timestamp", "1 hour"))
  .agg(avg("signal"), stddev("signal"), max("signal"))

Custom aggregations using reduceGroups, UDAFs, or mapGroups:

// Compute a histogram of signal strength by device type.
val hist = ds.groupByKey(_.deviceType).mapGroups {
  case (deviceType, data: Iterator[DeviceData]) =>
    val buckets = new Array[Int](10)
    data.map(_.signal).foreach { a => buckets(a / 10) += 1 }
    (deviceType, buckets)
}
Stateful Processing for Aggregations

In-memory streaming state is maintained for aggregations

[Diagram: hourly window counts evolving over triggers from 13:00 to 17:00; at each trigger new windows appear and existing counts grow, with red entries marking state updated by late data (e.g. the 12:00 - 13:00 count rising at later triggers).]

Keeping state allows late data to update counts of old windows
But the size of the state increases indefinitely if old windows are not dropped
Watermarking and Late Data

Watermark [Spark 2.1]: a moving threshold that trails behind the max event time seen
The trailing gap defines how late data is expected to be

[Diagram: on an event-time axis, the watermark (12:20 PM) trails the max event time (12:30 PM) by a trailing gap of 10 mins; data older than the watermark is not expected.]
Watermarking and Late Data

Data newer than the watermark may be late, but is allowed to aggregate
Data older than the watermark is "too late" and dropped
State older than the watermark is automatically deleted to limit the amount of intermediate state

[Diagram: on an event-time axis, late data above the watermark is allowed to aggregate; data below the watermark is too late and dropped.]
Watermarking and Late Data

parsedData
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "5 minutes"))
  .count()

[Diagram: an allowed lateness of 10 mins between the max event time and the watermark; late data above the watermark is allowed to aggregate, data below it is too late and dropped.]

Control the tradeoff between state size and lateness requirements:
Handle more lateness → keep more state
Reduce state → handle less lateness
Watermarking to Limit State [Spark 2.1]

parsedData
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "5 minutes"))
  .count()

[Diagram: processing time (12:00 - 12:15) plotted against event time (12:04 - 12:20). The system tracks the max observed event time (12:14); for the next trigger the watermark is updated to 12:14 - 10 min = 12:04, and state older than 12:04 is deleted. An event with time 12:08 arriving after 12:04 has passed is late but still considered in counts; an event with time 12:04 arriving even later is too late, ignored in counts, and its state dropped.]

More details in the blog post!
Working With Time

df.withWatermark("timestampColumn", "5 hours")
  .groupBy(window($"timestampColumn", "1 minute"))
  .count()
  .writeStream
  .trigger(Trigger.ProcessingTime("10 seconds"))

Separate processing details (output rate, late-data tolerance) from query semantics.
Working With Time

df.withWatermark("timestampColumn", "5 hours")
  .groupBy(window($"timestampColumn", "1 minute"))   // how to group data by time
  .count()
  .writeStream
  .trigger(Trigger.ProcessingTime("10 seconds"))

The window clause controls how to group data by time; it is the same in streaming & batch.
Working With Time

df.withWatermark("timestampColumn", "5 hours")      // how late data can be
  .groupBy(window($"timestampColumn", "1 minute"))
  .count()
  .writeStream
  .trigger(Trigger.ProcessingTime("10 seconds"))

The withWatermark clause controls how late data can be.
Working With Time

df.withWatermark("timestampColumn", "5 hours")
  .groupBy(window($"timestampColumn", "1 minute"))
  .count()
  .writeStream
  .trigger(Trigger.ProcessingTime("10 seconds"))    // how often to emit updates

The trigger clause controls how often to emit updates.
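Put together, a compilable version of that query might look like this (a sketch; the console sink, update mode, and checkpoint path are assumptions for illustration):

import org.apache.spark.sql.functions.window
import org.apache.spark.sql.streaming.Trigger

val query = df
  .withWatermark("timestampColumn", "5 hours")       // how late data can be
  .groupBy(window($"timestampColumn", "1 minute"))   // how to group data by time
  .count()
  .writeStream
  .outputMode("update")                              // emit only changed counts
  .option("checkpointLocation", "/tmp/checkpoints/working-with-time")
  .format("console")
  .trigger(Trigger.ProcessingTime("10 seconds"))     // how often to emit updates
  .start()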
Arbitrary Stateful Operations [Spark 2.2]

mapGroupsWithState applies any user-defined stateful op to a user-defined state

Direct support for per-key timeouts in event time or processing time

Supports Scala and Java

ds.groupByKey(groupingFunc)
  .mapGroupsWithState(timeoutConf)(mappingWithStateFunc)

def mappingWithStateFunc(
    key: K,
    values: Iterator[V],
    state: GroupState[S]): U = {
  // update or remove state
  // set timeouts
  // return mapped value
}
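A hedged sketch of a concrete mapping function (Event, RunningCount, and the 30-minute timeout are illustrative, not from the talk; spark.implicits._ is assumed in scope for the encoders): keep a running count per key and drop the key's state after 30 minutes without data.

import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

case class Event(id: Int, value: Long)
case class RunningCount(count: Long)

def countWithTimeout(
    id: Int,
    events: Iterator[Event],
    state: GroupState[RunningCount]): (Int, Long) = {
  if (state.hasTimedOut) {
    // Invoked with no data once the timeout fires: emit the final count, drop state.
    val last = state.get.count
    state.remove()
    (id, last)
  } else {
    val updated = state.getOption.map(_.count).getOrElse(0L) + events.size
    state.update(RunningCount(updated))
    state.setTimeoutDuration("30 minutes")   // reset the inactivity timer
    (id, updated)
  }
}

val counts = ds.groupByKey(_.id)
  .mapGroupsWithState(GroupStateTimeout.ProcessingTimeTimeout)(countWithTimeout)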
flatMapGroupsWithState

• Applies the given function to each group of data, while maintaining a user-defined per-group state
• Invoked once per group in batch
• Invoked at each trigger, for groups that have data, in streaming
• Requires the user to provide an output mode for the function
flatMapGroupsWithState

• mapGroupsWithState is a special case with
  • Output mode: Update
  • Output size: 1 row per group
• Supports both processing-time and event-time timeouts
Outline
• Structured Streaming Concepts
• Stateful Processing in Structured Streaming
• Use Cases
• Demos
Alerting

val monitoring = stream
  .as[Event]
  .groupByKey(_.id)
  .flatMapGroupsWithState(OutputMode.Append(), GroupStateTimeout.ProcessingTimeTimeout) {
    (id: Int, events: Iterator[Event], state: GroupState[…]) =>
      ...
  }
  .writeStream
  .queryName("alerts")
  .foreach(new PagerdutySink(credentials))
  .start()

Monitor a stream using custom stateful logic with timeouts.
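A hedged sketch of the elided state function (Alert, the Long last-seen state, and the 10-minute threshold are illustrative): remember the last activity per id, and emit an alert row only when a key goes silent past its timeout.

import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

case class Alert(id: Int, message: String)

def alertFunc(
    id: Int,
    events: Iterator[Event],
    state: GroupState[Long]): Iterator[Alert] = {
  if (state.hasTimedOut) {
    state.remove()
    Iterator(Alert(id, s"no events from $id in the last 10 minutes"))
  } else {
    state.update(System.currentTimeMillis)   // remember the last activity time
    state.setTimeoutDuration("10 minutes")
    Iterator.empty                           // nothing to page about yet
  }
}

val alerts = stream.as[Event]
  .groupByKey(_.id)
  .flatMapGroupsWithState(OutputMode.Append(), GroupStateTimeout.ProcessingTimeTimeout)(alertFunc)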
Sessionization

val sessions = stream
  .as[Event]
  .groupByKey(_.session_id)
  .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout) {
    (id: Int, events: Iterator[Event], state: GroupState[…]) =>
      ...
  }
  .writeStream
  .format("parquet")
  .start("/user/sessions")

Analyze sessions of user/system behavior
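A hedged sketch of the elided session logic (SessionEvent, SessionSummary, and the 30-minute gap are illustrative): fold each trigger's events into a per-session summary, and close the session once the event-time watermark passes 30 minutes after its last event. Note that EventTimeTimeout requires a watermark on the input stream.

import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

case class SessionEvent(session_id: Int, eventTimeMs: Long)
case class SessionSummary(session_id: Int, numEvents: Int, startMs: Long, endMs: Long)

def sessionFunc(
    id: Int,
    events: Iterator[SessionEvent],
    state: GroupState[SessionSummary]): SessionSummary = {
  if (state.hasTimedOut) {
    val closed = state.get
    state.remove()   // watermark passed the timeout: the session is final
    closed
  } else {
    val times = events.map(_.eventTimeMs).toSeq   // non-empty when data arrived
    val prev = state.getOption.getOrElse(SessionSummary(id, 0, times.min, times.max))
    val updated = SessionSummary(id, prev.numEvents + times.size,
      math.min(prev.startMs, times.min), math.max(prev.endMs, times.max))
    state.update(updated)
    // Close the session 30 minutes of event time after its last event.
    state.setTimeoutTimestamp(updated.endMs + 30 * 60 * 1000)
    updated
  }
}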
Demo
SPARK SUMMIT 2017
DATA SCIENCE AND ENGINEERING AT SCALE
JUNE 5 – 7 | MOSCONE CENTER | SAN FRANCISCO
ORGANIZED BY spark-summit.org/2017
Discount Code: Databricks
We are hiring!
https://databricks.com/company/careers
Thank You
“Does anyone have any questions for my answers?” - Henry Kissinger