Changelog Stream Processing with Apache Flink
Timo Walther – @twalthr
Flink Forward 2022, 2022-08-03
About me
Open source
● Long-term committer since 2014 (before ASF)
● Member of the project management committee (PMC)
● Top 5 contributor (commits), top 1 contributor (additions)
● Among core architects of Flink SQL
Career
● Early software engineer @ DataArtisans
● SDK team @ DataArtisans/Ververica (acquisition by Alibaba)
● SQL team lead @ Ververica
● Co-founder @ Immerok
What is Apache Flink?
Building Blocks for Stream Processing
Time
● Synchronize
● Progress
● Wait
● Timeout
● Fast-forward
● Replay
State
● Store
● Buffer
● Cache
● Model
● Grow
● Expire
Streams
● Pipeline
● Distribute
● Join
● Enrich
● Control
● Replay
Snapshots
● Backup
● Version
● Fork
● A/B test
● Time-travel
● Restore
What is Apache Flink used for?
(Diagram) Streams of transactions, logs, IoT data, interactions, events, … flow in from messaging systems, files, databases, and key/value stores. Flink powers analytics, event-driven applications, and data integration / ETL on top of them, and writes results back out to applications, messaging systems, files, databases, and key/value stores.
Apache Flink’s APIs
API Stack
(Diagram, bottom to top) Dataflow Runtime; Low-Level Stream Operator API; on top of these, the DataStream API (with Stateful Functions above it) and the Optimizer / Planner with the Table / SQL API on top.
DataStream API
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(STREAMING);
DataStream<Integer> stream = env.fromElements(1, 2, 3);
stream.executeAndCollect().forEachRemaining(System.out::println);
Properties
● Exposes the building blocks for stream processing
● Arbitrary operator topologies using map(), process(), connect(), ...
● Business logic is written in user-defined functions
● Arbitrary user-defined record types flow in-between
● Conceptually always an append-only / insert-only log!
Output:
1
2
3
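For reference, a self-contained version of the snippet above, with the imports the slide omits (a minimal sketch, assuming a Flink 1.15-era setup; the class name is chosen for illustration):

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DataStreamExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // STREAMING is the default mode; set explicitly as on the slide
        env.setRuntimeMode(RuntimeExecutionMode.STREAMING);

        // A bounded, insert-only stream of three elements
        DataStream<Integer> stream = env.fromElements(1, 2, 3);

        // Collect the results back to the client and print them
        stream.executeAndCollect().forEachRemaining(System.out::println);
    }
}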
Table / SQL API
TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
// Programmatic
Table table = env.fromValues(row(1), row(2), row(3));
// SQL
Table table = env.sqlQuery("SELECT * FROM (VALUES (1), (2), (3))");
table.execute().print();
Properties
● Abstracts the building blocks for stream processing
● Operator topology is determined by planner
● Business logic is declared in SQL and/or Table API
● Internal record types flow, Flink’s Row type is exposed in Table API
● Conceptually a table, but a changelog under the hood!
Output:
+----+-------------+
| op |          f0 |
+----+-------------+
| +I |           1 |
| +I |           2 |
| +I |           3 |
+----+-------------+
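The op column becomes more interesting once a query updates earlier results. A minimal sketch (class name and values chosen for illustration): aggregating an insert-only VALUES source makes print() show -U/+U pairs for every refined result:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class UpdatingOutput {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // The aggregation turns the insert-only input into an updating table
        Table revenue = env.sqlQuery(
            "SELECT name, SUM(amount) AS total " +
            "FROM (VALUES ('Alice', 56), ('Bob', 10), ('Alice', 89)) AS T(name, amount) " +
            "GROUP BY name");
        // print() shows +I for first results and -U/+U pairs for updates
        revenue.execute().print();
    }
}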
DataStream API ↔ Table / SQL API
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
// Stream -> Table
DataStream<?> inStream1 = ...
Table appendOnlyTable = tableEnv.fromDataStream(inStream1);
DataStream<Row> inStream2 = ...
Table anyTable = tableEnv.fromChangelogStream(inStream2);
// Table -> Stream
DataStream<T> appendOnlyStream = tableEnv.toDataStream(appendOnlyTable, T.class);
DataStream<Row> changelogStream = tableEnv.toChangelogStream(anyTable);
Mix and match APIs!
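A runnable sketch of the round trip, following the pattern from the Flink documentation (class name and data chosen for illustration): hand-crafted RowKind changes enter as a table and are emitted again as a changelog:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

public class ChangelogRoundTrip {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // A hand-crafted changelog: two insertions, then an update for Alice
        DataStream<Row> changelog = env.fromElements(
            Row.ofKind(RowKind.INSERT, "Alice", 56),
            Row.ofKind(RowKind.INSERT, "Bob", 10),
            Row.ofKind(RowKind.UPDATE_BEFORE, "Alice", 56),
            Row.ofKind(RowKind.UPDATE_AFTER, "Alice", 145));

        // Stream -> Table: interpret the records as retracting changes
        Table table = tableEnv.fromChangelogStream(changelog);

        // Table -> Stream: emit the table's changelog again and print it
        tableEnv.toChangelogStream(table).print();
        env.execute();
    }
}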
Changelog Stream Processing
Change is the law of life and those who look only to
the past or present are certain to miss the future.
– John F. Kennedy
Data Processing is a Stream of Changes
● Business data is always a stream: bounded or unbounded
● Every record is a changelog entry: insertion as the default
● Batch processing is just a special case in the runtime
(Timeline: past → now → future. A bounded stream runs from its start to an end of stream; unbounded streams have a start but no end and keep extending into the future.)
How do I Work with Streams in Flink SQL?
● You don’t. You work with dynamic tables!
● A concept similar to materialized views
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)

CREATE TABLE Revenue
(name STRING, total INT)
WITH (…)

INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name

Transactions:
name  | amount
Alice | 56
Bob   | 10
Alice | 89

Revenue:
name  | total
Alice | 145
Bob   | 10
So, is Flink SQL a database? No, bring your own data and systems!
Stream-Table Duality - Basics
● A stream is the changelog of a dynamic table
● Sources, operators, and sinks work on changelogs under the hood
● Each component declares the kind of changes it consumes/produces
Changelog modes:
● Appending/Insert-only – contains only +I
● Updating – contains changes (-…)
● Retracting – contains -U
● Upserting – never -U, but +U

Change kinds:
Short name | Long name     | Description
+I         | Insertion     | Default for scans + output of bounded results.
-U         | Update Before | Retracts a previously emitted result.
+U         | Update After  | Updates a previously emitted result. Requires a primary key if -U is omitted for idempotent updates.
-D         | Delete        | Removes the last result.
Stream-Table Duality - Example
An applied changelog becomes a real (materialized) table.
Transactions (name, amount):
Alice | 56
Bob   | 10
Alice | 89

changelog: +I[Alice, 56]  +I[Bob, 10]  -U[Alice, 56]  +U[Alice, 145]

Revenue (name, total), materialized:
Alice | 56 → 145
Bob   | 10

CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)

CREATE TABLE Revenue
(name STRING, total INT)
WITH (…)

INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name
Stream-Table Duality - Example
An applied changelog becomes a real (materialized) table.

Transactions (name, amount):
Alice | 56
Bob   | 10
Alice | 89

changelog: +I[Alice, 56]  +I[Bob, 10]  +U[Alice, 145]

Revenue (name, total), updated in place via the primary key:
Alice | 56 → 145
Bob   | 10

CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)

CREATE TABLE Revenue
(PRIMARY KEY(name) …)
WITH (…)

INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name
Saves ~50% of the traffic if the downstream system supports upserting!
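Declaring such an upsert sink could look as follows (a sketch; the upsert-kafka connector requires a primary key, and the topic and server address here are hypothetical):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertSink {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // The primary key tells Flink that -U can be dropped: +U alone
        // overwrites the previous value for the same key in place.
        env.executeSql(
            "CREATE TABLE Revenue (" +
            "  name STRING," +
            "  total INT," +
            "  PRIMARY KEY (name) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'revenue'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'json'," +
            "  'value.format' = 'json')");
    }
}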
Stream-Table Duality - Propagation
● Sources declare the set of changes they emit, i.e. their changelog mode
● The optimizer tracks changelog mode and primary key through the pipeline
● Sinks declare the changes they can digest
Changelog modes for sources:
CREATE TABLE … WITH ('connector'='filesystem')                       → +I
CREATE TABLE … WITH ('connector'='kafka')                            → +I
CREATE TABLE … WITH ('connector'='upsert-kafka')                     → +I -D
CREATE TABLE … WITH ('connector'='jdbc')                             → +I
CREATE TABLE … WITH ('connector'='kafka', 'format'='debezium-json')  → +I -U +U -D
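For example, a Debezium-encoded Kafka topic yields a full-changelog source (a sketch; the topic and server address are hypothetical):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CdcSource {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Emits +I, -U, +U, -D, driven by the changes captured in the database
        env.executeSql(
            "CREATE TABLE Transactions (name STRING, amount INT) WITH (" +
            " 'connector' = 'kafka'," +
            " 'topic' = 'transactions'," +
            " 'properties.bootstrap.servers' = 'localhost:9092'," +
            " 'scan.startup.mode' = 'earliest-offset'," +
            " 'format' = 'debezium-json')");
        // Continuously print the interpreted changelog
        env.executeSql("SELECT * FROM Transactions").print();
    }
}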
Retract vs. Upsert
Retract
● No primary key requirements
● Works for almost every external system
● Supports duplicate rows
● Often unavoidable in distributed systems
→ most flexible changelog mode
→ default mode
Upsert
● Traffic + computation optimization
● In-place updates (idempotency)
SELECT c, COUNT(*) FROM (
  SELECT COUNT(*) AS c
  FROM T
  GROUP BY user
)
GROUP BY c

(Diagram: the inner count emits +U[1], then +U[2] for the same user. Keyed by c, these upserts land on different subtasks of the outer aggregation (1=>1 on subtask 1, 2=>1 on subtask 2), so without a retraction -U[1] the outer counts would be wrong.)
Changelog Insights – Append-only
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …); -- accepts all changes
INSERT INTO Result SELECT * FROM Transaction T JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
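Plans like the one above can be reproduced with explainSql. A sketch (assuming the datagen and blackhole connectors so the program is self-contained); passing ExplainDetail.CHANGELOG_MODE adds the changelogMode=[…] annotations:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.ExplainDetail;
import org.apache.flink.table.api.TableEnvironment;

public class ExplainChangelog {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        env.executeSql("CREATE TABLE Transaction (tid BIGINT, amount INT) " +
            "WITH ('connector'='datagen')");
        env.executeSql("CREATE TABLE Payment (tid BIGINT, method STRING) " +
            "WITH ('connector'='datagen')");
        env.executeSql("CREATE TABLE Result (tid BIGINT, amount INT, method STRING) " +
            "WITH ('connector'='blackhole')");
        // Print the optimized plan including each operator's changelog mode
        System.out.println(env.explainSql(
            "INSERT INTO Result SELECT T.tid, amount, method " +
            "FROM Transaction T JOIN Payment P ON T.tid = P.tid",
            ExplainDetail.CHANGELOG_MODE));
    }
}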
Changelog Insights – Updating
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I,UB,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Changelog Insights – Updating with PK
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE], upsertMaterialize=[true])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I,UB,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Changelog Insights – Updating with PK
CREATE TABLE Transaction (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
CREATE TABLE Payment (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
CREATE TABLE Result (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[UniqueKey], rightInputSpec=[UniqueKey], changelogMode=[I,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Mode Transitions
● Append-only → Retracting: through an operation (e.g. an aggregation or join)
● Updating → Retracting: if an operator/sink requires it (via ChangelogNormalize)
● Retracting → Updating: if the sink requires it (via UpsertMaterialize)
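The UpsertMaterialize step can additionally be controlled via configuration. A sketch (the option table.exec.sink.upsert-materialize accepts NONE, AUTO, and FORCE; AUTO is the default):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SinkMaterializeConfig {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // AUTO inserts the materialization step only when the changelog could
        // arrive disordered with respect to the sink's primary key
        env.getConfig().set("table.exec.sink.upsert-materialize", "AUTO");
    }
}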
Mode Transitions – Characteristics
Append-only – aka "STREAM"
● Event-time column backed by watermarks
● Highly state efficient due to notion of completeness

Retracting – aka ?
Updating – aka "TABLE"
● Usually no event-time column
● State usage needs to be kept in mind
● Pure materialized view maintenance
Demo
Summary
TL;DR
● Flink's SQL engine is a powerful changelog processor
● Flexible tool for integrating systems with different semantics
There is more…
● CDC connector ecosystem
→ 2.6k GitHub stars
https://flink-packages.org/packages/cdc-connectors
● Table Store – a unified storage engine for dynamic tables
→ native changelog support
→ deep integration into Flink SQL, "like a DB"
https://flink.apache.org/news/2022/05/11/release-table-store-0.1.0.html
Thanks
Timo Walther
@twalthr
mrsql@immerok.com