DESIGNING ETL PIPELINES WITH
How to architect things right
Spark Summit Europe
16 October 2019
Tathagata “TD” Das
@tathadas
STRUCTURED
STREAMING
About Me
Started Streaming project in
AMPLab, UC Berkeley
Currently focused on Structured Streaming
and Delta Lake
Staff Engineer on the StreamTeam @
Team Motto: "We make all your streams come true"
Structured Streaming
Distributed stream processing built on SQL engine
High throughput, second-scale latencies
Fault-tolerant, exactly-once
Great set of connectors
Philosophy: Treat data streams like unbounded tables
Users write batch-like queries on tables
Spark will continuously execute the queries incrementally on streams
3
Structured Streaming
4
Example
Read JSON data from Kafka
Parse nested JSON
Store in structured Parquet table
Get end-to-end failure guarantees
ETL
spark.readStream.format("kafka")
.option("kafka.bootstrap.servers", ...)
.option("subscribe", "topic")
.load()
.selectExpr("cast(value as string) as json")
.select(from_json($"json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
.trigger(Trigger.ProcessingTime("1 minute"))
.option("checkpointLocation", "…")
.start()
Anatomy of a Streaming Query
5
spark.readStream.format("kafka")
.option("kafka.bootstrap.servers", ...)
.option("subscribe", "topic")
.load()
.selectExpr("cast(value as string) as json")
.select(from_json($"json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
.trigger(Trigger.ProcessingTime("1 minute"))
.option("checkpointLocation", "…")
.start()
Specify where to read data from
Specify data transformations
Specify where to write data to
Specify how to process data
DataFrames,
Datasets, SQL
Logical
Plan
Read from
Kafka
Project
cast(value) as json
Project
from_json(json)
Write to
Parquet
Spark automatically streamifies!
Spark SQL converts batch-like query to a series of incremental
execution plans operating on new batches of data
Kafka
Source
Optimized
Operator
codegen, off-heap, etc.
Parquet
Sink
Optimized
Plan
Series of Incremental
Execution Plans
t = 1: process new data
t = 2: process new data
t = 3: process new data
spark.readStream.format("kafka")
.option("kafka.bootstrap.servers", ...)
.option("subscribe", "topic")
.load()
.selectExpr("cast(value as string) as json")
.select(from_json($"json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
.trigger(Trigger.ProcessingTime("1 minute"))
.option("checkpointLocation", "…")
.start()
• ACID transactions
• Schema Enforcement and Evolution
• Data versioning and Audit History
• Time travel to old versions
• Open formats
• Scalable Metadata Handling
• Great with batch + streaming
• Upserts and Deletes
Open-source storage layer that brings ACID transactions to
Apache Spark™ and big data workloads.
https://delta.io/
THE GOOD OF DATA LAKES
• Massive scale out
• Open Formats
• Mixed workloads
THE GOOD OF DATA WAREHOUSES
• Pristine Data
• Transactional Reliability
• Fast SQL Queries
Open-source storage layer that brings ACID transactions to
Apache Spark™ and big data workloads.
https://delta.io/
STRUCTURED
STREAMING
How to build streaming data
pipelines with them?
STRUCTURED
STREAMING
What are the design patterns
to correctly architect
streaming data pipelines?
Another streaming design pattern talk????
Most talks
Focus on a pure
streaming engine
Explain one way of
achieving the end goal
11
This talk
Spark is more than a
streaming engine
Spark has multiple ways of
achieving the end goal with
tunable perf, cost and quality
This talk
How to think about design
Common design patterns
How we are making this easier
12
Streaming Pipeline Design
13
Data streams Insights
????
14
????
What?
Why?
How?
What?
15
What is your input?
What is your data?
What format and system is
your data in?
What is your output?
What results do you need?
What throughput and
latency do you need?
Why?
16
Why do you want this output in this way?
Who is going to take actions based on it?
When and how are they going to consume it?
humans?
computers?
Why? Common mistakes!
#1
"I want my dashboard with counts to
be updated every second"
#2
"I want to generate automatic alerts
with up-to-the-last second counts"
(but my input data is often delayed)
17
No point updating every second if
humans are going to take action in
minutes or hours
No point taking fast actions on
low-quality data and results
Why? Common mistakes!
#3
"I want to train machine learning
models on the results"
(but my results are in a key-value store)
18
Key-value stores are not great
for large, repeated data scans
which machine learning
workloads perform
How?
19
????
How to process
the data?
????
How to store
the results?
Streaming Design Patterns
20
What?
Why?
How?
Complex ETL
Pattern 1: ETL
21
Input: unstructured input stream
from files, Kafka, etc.
What?
Why? Query latest structured data interactively or with periodic jobs
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
Output: structured
tabular data
P1: ETL
22
Convert unstructured input to
structured tabular data
Latency: few minutes
What? How?
Why?
Query latest structured data
interactively or with periodic jobs
Process: Use Structured Streaming query to transform
unstructured, dirty data
Run 24/7 on a cluster with default trigger
Store: Save to structured scalable storage that supports
data skipping, etc.
E.g.: Parquet, ORC, or even better, Delta Lake
STRUCTURED
STREAMING
ETL QUERY
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
P1: ETL with Delta Lake
23
How?
Store: Save to
STRUCTURED
STREAMING
Read with snapshot guarantees while writes are in progress
Concurrently reprocess data with full ACID guarantees
Coalesce small files into larger files
Update table to fix mistakes in data
Delete data for GDPR
REPROCESS
ETL QUERY
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
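A minimal sketch of these maintenance operations with the Delta Lake Scala API; the table path, column names, and predicates below are placeholders:

import io.delta.tables.DeltaTable

// Table written by the ETL query (placeholder path)
val table = DeltaTable.forPath(spark, "/parquetTable/")

// Update table to fix mistakes in data (hypothetical column and values)
table.updateExpr(
  "level = 'WARNING'",          // rows to fix
  Map("level" -> "'WARN'"))     // new value, as a SQL expression string

// Delete data for GDPR (hypothetical user id column)
table.delete("user_id = 'user-to-forget'")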
P1.1: Cheaper ETL
24
Convert unstructured input to
structured tabular data
Latency: few minutes → hours
Don't keep clusters up 24/7
What? How?
Why?
Query latest data interactively
or with periodic jobs
Cheaper solution
Process: Still use Structured Streaming query!
Run streaming query with "trigger.once" for
processing all available data since last batch
Set up external schedule (every few hours?) to
periodically start a cluster and run one batch
STRUCTURED
STREAMING
RESTART ON
SCHEDULE
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
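A minimal sketch of the trigger-once variant of the same ETL query; servers, paths, checkpoint location, and schema are placeholders. The query processes everything that arrived since the last checkpoint, then stops so the cluster can be shut down:

import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.streaming.Trigger
import spark.implicits._

val query = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "host1:9092")
  .option("subscribe", "topic")
  .load()
  .selectExpr("cast(value as string) as json")
  .select(from_json($"json", schema).as("data"))      // schema defined elsewhere
  .writeStream
  .format("parquet")
  .option("path", "/parquetTable/")
  .option("checkpointLocation", "/checkpoints/etl/")
  .trigger(Trigger.Once())                            // one batch, then stop
  .start()

query.awaitTermination()   // returns when the single batch completes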
P1.2: Query faster than ETL!
25
Latency: hours → seconds
What? How?
Why?
Query latest up-to-the last
second data interactively
Query data in Kafka directly using Spark SQL
Can process all records received by Kafka
up to when the query was started
SQL
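A sketch of querying the same Kafka topic as a batch DataFrame with Spark SQL; servers, topic, and schema are placeholders:

import org.apache.spark.sql.functions.from_json
import spark.implicits._

val kafkaDF = spark.read.format("kafka")             // batch read, not readStream
  .option("kafka.bootstrap.servers", "host1:9092")
  .option("subscribe", "topic")
  .option("startingOffsets", "earliest")             // whole topic, up to the latest
  .load()                                            // offsets at query start
  .selectExpr("cast(value as string) as json")
  .select(from_json($"json", schema).as("data"))
  .select("data.*")

kafkaDF.createOrReplaceTempView("events")
spark.sql("SELECT count(*) FROM events").show()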
Pattern 2: Key-value output
26
Input: new data
for each key
Lookup latest value for key (dashboards, websites, etc.)
OR
Summary tables for querying interactively or with periodic jobs
KEY LATEST VALUE
key1 value2
key2 value3
{ "key1": "value1" }
{ "key1": "value2" }
{ "key2": "value3" }
Output: updated
values for each key
Aggregations (sum, count, …)
Sessionizations
What?
Why?
P2.1: Key-value output for lookup
27
Generate updated values for keys
Latency: seconds/minutes
What? How?
Why?
Lookup latest value for key
STRUCTURED
STREAMING
Process: Use Structured Streaming with
stateful operations for aggregation
Store: Save in key-value stores optimized
for single-key lookups
STATEFUL AGGREGATION LOOKUP
{ "key1": "value1" }
{ "key1": "value2" }
{ "key2": "value3" }
P2.2: Key-value output for analytics
28
Generate updated values for keys
Latency: seconds/minutes
What? How?
Why?
Lookup latest value for key
Summary tables for analytics
Process: Use Structured Streaming with
Store: Save in Delta Lake!
STRUCTURED
STREAMING
STATEFUL AGGREGATION SUMMARY
stateful operations for aggregation
Delta Lake supports upserts using MERGE
{ "key1": "value1" }
{ "key1": "value2" }
{ "key2": "value3" }
P2.2: Key-value output for analytics
29
How?
STRUCTURED
STREAMING
STATEFUL AGGREGATION
streamingDataFrame.writeStream.foreachBatch { (batchOutputDF, batchId) =>
  DeltaTable.forPath(spark, "/aggs/").as("t")
    .merge(
      batchOutputDF.as("s"),
      "t.key = s.key")
    .whenMatched().update(...)
    .whenNotMatched().insert(...)
    .execute()
}.start()
SUMMARY
Stateful operations for aggregation
{ "key1": "value1" }
{ "key1": "value2" }
{ "key2": "value3" }
Delta Lake supports upserts using
Merge SQL operation
Scala/Java/Python APIs with
same semantics as SQL Merge
SQL Merge supported in
Databricks Delta, not in OSS yet
P2.2: Key-value output for analytics
30
How?
Stateful aggregation requires setting a
watermark to drop very late data
Dropping some data leads to some
inaccuracies
Stateful operations for aggregation
Delta Lake supports upserts
using Merge
STRUCTURED
STREAMING
STATEFUL AGGREGATION SUMMARY
{ "key1": "value1" }
{ "key1": "value2" }
{ "key2": "value3" }
P2.3: Key-value aggregations for analytics
31
Generate aggregated values
for keys
Latency: hours/days
Do not drop any late data
What? How?
Why?
Summary tables for analytics
Correct
Process: ETL to structured table (no stateful aggregation)
Store: Save in Delta Lake
Post-process: Aggregate after all delayed data received
STRUCTURED
STREAMING
STATEFUL AGGREGATION
ETL AGGREGATION SUMMARY
{ "key1": "value1" }
{ "key1": "value2" }
{ "key2": "value3" }
Pattern 3: Joining multiple inputs
32
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
id update value
Input: Multiple data streams
based on common key
Output:
Combined
information
What?
P3.1: Joining fast and slow streams
33
Input: One fast stream of facts
and one slow stream of
dimension changes
Output: Fast stream enriched by
data from slow stream
Example:
product sales info ("facts") enriched
by more product info ("dimensions")
What + Why?
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
id update value
How?
ETL slow stream to a dimension table
Join fast stream with snapshots of the dimension table
FAST ETL
DIMENSION
TABLE
COMBINED
TABLE
JOIN
SLOW ETL
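A sketch of the join step, assuming the slow ETL maintains a Delta dimension table and the fast stream (fastStream) carries a product_id column; paths and column names are placeholders:

// Snapshot of the dimension table, taken when the streaming query starts
val dimensions = spark.read.format("delta").load("/dimensionTable/")

// fastStream is the streaming DataFrame of facts from the fast ETL
val enriched = fastStream.join(dimensions, Seq("product_id"), "left_outer")

enriched.writeStream
  .format("delta")
  .option("path", "/combinedTable/")
  .option("checkpointLocation", "/checkpoints/join/")
  .start()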
P3.1: Joining fast and slow streams
34
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
id update value
SLOW ETL
How? - Caveats
FAST ETL
JOIN COMBINED
TABLE
DIMENSION
TABLE
Store dimension table in Delta Lake
Delta Lake's versioning allows changes
to be detected and the snapshot
automatically reloaded without restart**
Better Solution
** available only in Databricks Delta Lake
Structured Streaming does not reload the
dimension table snapshot
Changes made by the slow ETL won't be seen until restart
P3.1: Joining fast and slow stream
35
How? - Caveats
Treat it as a "joining fast and fast data"
Better Solution
Delays in updates to dimension table can
cause joining with stale dimension data
E.g. sale of a product received even before product
table has any info on the product
P3.2: Joining fast and fast data
36
Input: Two fast streams where
either stream may be delayed
Output: Combined info even if one
is delayed relative to the other
Example:
ad impressions and ad clicks
Use stream-stream joins in Structured Streaming
Data will be buffered as state
Watermarks define how long to buffer before
giving up on matching
What + Why?
id update value
STREAM-STREAM JOIN
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
01:06:45 WARN id = 1 , update failed
01:06:45 INFO id=23, update success
01:06:57 INFO id=87: update postpo
…
See my past deep dive talk for more info
How?
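A sketch of the impressions/clicks example with a stream-stream join; the input streams, event-time columns, watermark bounds, and join window are placeholders:

import org.apache.spark.sql.functions.expr

// Both sides are streaming DataFrames; watermarks bound how long state is buffered
val impressions = impressionsStream.withWatermark("impressionTime", "2 hours")
val clicks      = clicksStream.withWatermark("clickTime", "3 hours")

// A click matches an impression with the same ad id
// that happened at most one hour earlier
val joined = impressions.join(
  clicks,
  expr("""
    clickAdId = impressionAdId AND
    clickTime >= impressionTime AND
    clickTime <= impressionTime + interval 1 hour
  """))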
Pattern 4: Change data capture
37
INSERT a, 1
INSERT b, 2
UPDATE a, 3
DELETE b
INSERT b, 4
key value
a 3
b 4
Input: Change data based
on a primary key
Output: Final table
after changes
End-to-end replication of transactional tables into analytical tables
What?
Why?
P4: Change data capture
38
How?
Use foreachBatch and Merge
In each batch, apply changes to the
Delta table using Merge
Delta Lake's Merge supports an extended
syntax that includes multiple match
clauses, clause conditions, and
deletes
INSERT a, 1
INSERT b, 2
UPDATE a, 3
DELETE b
INSERT b, 4
STRUCTURED
STREAMING
streamingDataFrame.writeStream.foreachBatch { (batchOutputDF, batchId) =>
  DeltaTable.forPath(spark, "/deltaTable/").as("t")
    .merge(batchOutputDF.as("updates"),
      "t.key = updates.key")
    .whenMatched("updates.delete = false").update(...)
    .whenMatched("updates.delete = true").delete()
    .whenNotMatched().insert(...)
    .execute()
}.start()
See merge examples in docs for more info
Pattern 5: Writing to multiple outputs
39
id val1 val2
id sum count
RAW LOGS
+
SUMMARIES
TABLE 2
TABLE 1
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
SUMMARY FOR
ANALYTICS
+
SUMMARY FOR
LOOKUP
TABLE CHANGE LOG
+
UPDATED TABLE
What?
P5: Serial or Parallel?
40
How?
Serial
Writes table 1 and reads it again
Cheap or expensive depending on the size and format of table 1
Higher latency
Parallel
Reads the input twice, may have to parse the data twice
Cheap or expensive depending on the size of the raw input + parsing cost
P5: Combination!
41
Combo 1: Multiple streaming queries
id val1 val2
TABLE 1
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
id val1 val2
TABLE 2
Do expensive parsing once, write to table 1
Do cheaper follow up processing from table 1
Good for
ETL + multiple levels of summaries
Change log + updated table
Still writing + reading table 1
Compute once, write multiple times
Cheaper, but loses the exactly-once guarantee
id val1 val2
TABLE 3
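A sketch of Combo 1 under assumed names and paths (rawStream, schema, table locations): the first query does the expensive parsing once into table 1, and a second, cheaper query streams from table 1 into table 2:

import org.apache.spark.sql.functions.from_json
import spark.implicits._

// Query 1: expensive parsing, done once
rawStream
  .selectExpr("cast(value as string) as json")
  .select(from_json($"json", schema).as("data"))
  .select("data.*")
  .writeStream.format("delta")
  .option("path", "/table1/")
  .option("checkpointLocation", "/checkpoints/q1/")
  .start()

// Query 2: cheaper follow-up processing, reading table 1 as a stream
spark.readStream.format("delta").load("/table1/")
  .where("value > 0")                      // placeholder transformation
  .writeStream.format("delta")
  .option("path", "/table2/")
  .option("checkpointLocation", "/checkpoints/q2/")
  .start()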
Combo 2: Single query + foreachBatch
EXPENSIVE
CHEAP
{ "id": 14, "name": "td", "v": 100..
{ "id": 23, "name": "by", "v": -10..
{ "id": 57, "name": "sz", "v": 34..
…
id val1 val2
LOCATION 1
id val1 val2
LOCATION 2
streamingDataFrame.writeStream.foreachBatch { (batchSummaryData, batchId) =>
  batchSummaryData.persist()    // compute once, reuse for both writes
  // write summary to Delta Lake
  // write summary to key value store
  batchSummaryData.unpersist()
}.start()
How?
Production Pipelines @
Bronze Tables Silver Tables Gold Tables
REPORTING
STREAMING ANALYTICS
(Structured Streaming jobs move data from Bronze to Silver to Gold tables, feeding reporting and streaming analytics)
In Progress: Declarative Pipelines
43
Allow users to declaratively express the
entire DAG of pipelines as a dataflow graph
REPORTING
STREAMING ANALYTICS
(a DAG of Structured Streaming jobs feeding reporting and streaming analytics)
In Progress: Declarative Pipelines
Enforce metadata, storage, and quality declaratively
dataset("warehouse")
.query(input("kafka").select(…).join(…)) // Query to materialize
.location(…) // Storage Location
.schema(…) // Optional strict schema checking
.metastoreName(…) // Hive Metastore
.description(…) // Human readable description
.expect("validTimestamp", // Expectations on data quality
"timestamp > 2012-01-01 AND …",
"fail / alert / quarantine")
*Coming Soon
In Progress: Declarative Pipelines
47
Unit test part or whole DAG with
sample input data
Integration test DAG with
production data
Deploy/upgrade new DAG code
Monitor whole DAG in one place
Rollback code / data when needed
REPORTING
STREAMING ANALYTICS
(a DAG of Structured Streaming jobs feeding reporting and streaming analytics)
*Coming Soon
Build your own Delta Lake
at https://delta.io