OpenLineage For Stream Processing
Paweł Leszczyński (github pawel-big-lebowski)
Maciej Obuchowski (github mobuchowski)
Kafka Summit 2024
2
Agenda
● OpenLineage intro & demo
○ Why do we need lineage?
○ Why an open lineage standard?
○ Marquez and Flink demo
● Flink integration deep dive
○ Lineage for batch & streaming
○ Review of OpenLineage-Flink integration, FLIP-314
○ What does the future hold?
OpenLineage
1
Autumn Rhythm - Jackson Pollock
https://www.flickr.com/photos/thoth188/276162883
https://flic.kr/p/hjxW62
OpenLineage for Stream Processing | Kafka Summit London
7
To define an open standard
for the collection of lineage
metadata from pipelines
as they are running.
OpenLineage
Mission
Data model
8
● A Run is a particular instance
of a streaming job
● A Job is a data pipeline that
processes data
● Datasets are Kafka topics,
Iceberg tables, Object Storage
destinations, and so on
[Diagram: a Run State Update records a transition and transition time for a Run (run uuid); a Run belongs to a Job (name-based job id) and has input/output Datasets (name-based dataset id); Runs, Jobs, and Datasets each carry Facets; Producers emit these events and Consumers read them]
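The model maps to JSON events emitted by producers and read by consumers. A minimal sketch of a run event, assuming illustrative namespace and facet values (the real schema is defined by the OpenLineage spec):

```python
import uuid
from datetime import datetime, timezone

def run_event(event_type, job_name, run_id, inputs, outputs):
    """Build a minimal OpenLineage-style run event. Illustrative shape
    only: facets are left empty and namespaces are made up."""
    return {
        "eventType": event_type,  # START, RUNNING, COMPLETE, FAIL...
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": run_id, "facets": {}},
        "job": {"namespace": "flink", "name": job_name, "facets": {}},
        "inputs": [{"namespace": "kafka://cluster", "name": t} for t in inputs],
        "outputs": [{"namespace": "postgres://db", "name": t} for t in outputs],
    }

event = run_event("START", "kafka-to-postgres", str(uuid.uuid4()),
                  inputs=["orders"], outputs=["public.orders"])
```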
Marquez and Flink
Integration Demo
2
Demo
● Available under
https://github.com/OpenLineage/workshops/tree/main/flink-streaming
● Contains
○ Two Flink jobs
■ Kafka to Postgres
■ Postgres to Kafka
○ Airflow to run some Postgres queries
○ Marquez to present lineage graph
Flink applications - read & write to Kafka
Airflow DAG
OpenLineage for
Streaming
3
What is different for Streaming jobs?
15
Batch and streaming differ in many
aspects, but for lineage there are
a few questions that matter:
● When does the unbounded
job end?
● When and how datasets get
updated?
● Does the transformation
change during execution?
When does a job end?
16
● It might seem that streaming
jobs never end naturally
● Schema changes, new job
versions, new engine versions -
these are points where it's worth
starting another run
When does a dataset get updated?
17
● Dataset versioning is pretty
important - bug analysis, data
freshness
● Implicit - “last update
timestamp”, Airflow’s data
interval - OL default
● Explicit - Iceberg, Delta Lake
dataset version
When does a dataset get updated?
18
● In streaming, it's not as
obvious as in batch
● Update on each row write
would produce more
metadata than actual data…
● Update only on potential job
end would not give us any
meaningful information in the
meantime
When does a dataset get updated?
19
● Flink: maybe on checkpoint?
● Checkpointing is finicky -
intervals range from 100 ms to
10 minutes in practice
● Configure minimum event
emission interval separately
● OpenLineage’s additive
model fits that really well
● Spark: microbatch?
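Decoupling emission from checkpoint frequency boils down to a throttle: emit on a checkpoint only if a minimum interval has elapsed since the last event. A sketch of the idea (the actual integration's configuration names may differ):

```python
import time

class ThrottledEmitter:
    """Emit at most one lineage event per min_interval_s seconds,
    regardless of how often checkpoints fire."""
    def __init__(self, min_interval_s, clock=time.monotonic):
        self.min_interval_s = min_interval_s
        self.clock = clock
        self._last = float("-inf")

    def on_checkpoint(self, emit):
        now = self.clock()
        if now - self._last >= self.min_interval_s:
            self._last = now
            emit()
            return True
        return False  # checkpoint skipped: too soon since last event

# Simulate checkpoints at t = 0, 10, 70, 80, 140 with a 60 s minimum:
emitted = []
fake_time = [0.0]
e = ThrottledEmitter(60, clock=lambda: fake_time[0])
for t in (0, 10, 70, 80, 140):
    fake_time[0] = t
    e.on_checkpoint(lambda: emitted.append(t))
# emitted == [0, 70, 140]
```

This fits OpenLineage's additive model: skipped checkpoints lose nothing, because each emitted event only adds metadata.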
Dynamic transformation modification
20
● KafkaSource can discover new
topics during execution when
passed a wildcard pattern
● We can catch this and emit an
event containing this
information when it happens
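Conceptually, detecting new topics is a diff between pattern matches across metadata refreshes. A hypothetical sketch of that check (KafkaSource does this internally; the function here is illustrative, not its real API):

```python
import re

def new_topics(pattern, previously_seen, current_topics):
    """Return topics matching the wildcard pattern that were not seen
    before -- the trigger for emitting an updated lineage event."""
    rx = re.compile(pattern)
    matched = {t for t in current_topics if rx.fullmatch(t)}
    return sorted(matched - previously_seen)

seen = {"orders.eu", "orders.us"}
assert new_topics(r"orders\..*", seen,
                  ["orders.eu", "orders.us", "orders.apac", "payments"]) \
    == ["orders.apac"]
```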
OpenLineage Flink
Integration
update
4
OpenLineage has Flink integration!
● OpenLineage has a Flink
JobListener that notifies you
on job start and end
● Support for Kafka, Iceberg,
Cassandra, JDBC…
● Notifies you when the job
starts, ends, and on checkpoints
at a configurable interval
● Additional metadata:
schemas, how much data was
processed…
Idea is simple, execution is more complex
The integration has its limits
● Very limited - requires a few
undesirable things, like setting
execution.attached
● No SQL or Table API support!
● Need to manually attach
JobListener to every job
● OpenLineage's preferred
solution would be to run the
listener on the JobManager in a
separate thread
And the internals are even more complex
● Basically, a lot of reflection
● The API wasn't made for this
use case - a lot of things are
private or buried in class
internals
● OpenLineage's preferred
solution would be an API for
connectors to implement,
making them responsible for
providing correct data
And it even has evil hacks
● The list of transformations inside
StreamExecutionEnvironment
gets cleared a moment before
the JobListeners are called
● Before that happens, we
replace the clearable list with
one that keeps a copy of its data
on `clear`
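The trick is language-agnostic: substitute a list whose clear() snapshots its contents first. A Python sketch of the idea (the real hack swaps Flink's internal Java transformations list via reflection):

```python
class SnapshotOnClearList(list):
    """A list that keeps a copy of its contents when cleared, so a
    listener running after clear() can still inspect what was there."""
    def __init__(self, *args):
        super().__init__(*args)
        self.snapshot = []

    def clear(self):
        self.snapshot = list(self)  # copy before the data is lost
        super().clear()

transformations = SnapshotOnClearList(["kafka-source", "map", "jdbc-sink"])
transformations.clear()             # the engine wipes the list...
assert transformations == []        # ...but the snapshot survives
assert transformations.snapshot == ["kafka-source", "map", "jdbc-sink"]
```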
So, why bother?
● We’ve opportunistically created the integration despite limitations, to gather
interest and provide even that limited value
● The long-term solution would be new API for Flink that would not have any of
those limitations
○ A single API for both the DataStream and SQL APIs
○ Independent of any particular execution mode
○ Connectors responsible for their own lineage - testable and dependable!
○ No reflection :)
○ Possible to have Column-Level Lineage support in the future
● And we’ve waited in that state for a bit
And then something happened
● FLIP-314 - Support Customized Job Lineage Listener, by Fang Yong and Zhanghao Chen
● New JobStatusChangedListener
○ JobCreatedEvent
○ JobExecutionStatusEvent
● JobCreatedEvent contains LineageGraph
● Both DataStream and
SQL/Table API support
● No attachment problem
● Sounds perfect?
LineageGraph
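A rough model of what the listener receives, sketched as Python dataclasses (the real FLIP-314 interfaces are Java; the field names here are simplified for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class LineageVertex:
    name: str                      # e.g. a Kafka topic or a JDBC table
    facets: dict = field(default_factory=dict)

@dataclass
class LineageGraph:
    sources: list                  # input vertices
    sinks: list                    # output vertices

@dataclass
class JobCreatedEvent:
    job_name: str
    lineage_graph: LineageGraph    # available at job creation time,
                                   # before execution starts

event = JobCreatedEvent(
    "kafka-to-postgres",
    LineageGraph(sources=[LineageVertex("kafka://orders")],
                 sinks=[LineageVertex("jdbc://public.orders")]))
```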
Problem with LineageVertex
● How do you know all possible connector implementations?
Problem with LineageVertex
● How do you know all connector implementations?
● How do you support custom connectors, where we can’t get the source?
○ …reflection?
Problem with LineageVertex
● How do you know all connector implementations?
● How do you support custom connectors, for which the code is not known?
● How do you deal with breaking changes in connectors?
○ …even more reflection?
Find a solution with community
● Voice your concern, propose how to resolve the issue
● Open discussion on Jira, Flink Slack, mailing list
● Managed to gain consensus and develop a solution that fits everyone involved
● Build community around lineage
Resulting API is really nice
Facets Allow You to Extend the Data
● Directly inspired by
OpenLineage facets
● Allow you to attach any atomic
piece of metadata to your
dataset or vertex metadata
● Both built into Flink - like
DatasetSchemaFacet - and
external, or specific per
connector
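Attaching a facet is just adding a keyed blob of metadata to a dataset's facet map. A hedged sketch of the pattern (DatasetSchemaFacet is real in FLIP-314; the exact shape below is illustrative):

```python
def with_schema_facet(dataset, fields):
    """Attach a schema facet to a dataset's facet map; any connector
    can add its own facets under its own key the same way."""
    dataset.setdefault("facets", {})["schema"] = {
        "fields": [{"name": n, "type": t} for n, t in fields]
    }
    return dataset

ds = with_schema_facet({"name": "kafka://orders"},
                       [("order_id", "BIGINT"), ("amount", "DECIMAL")])
assert ds["facets"]["schema"]["fields"][0]["name"] == "order_id"
```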
FLIP-314 will power OpenLineage
● Lineage driven by connectors is resilient
● Works for both DataStream and SQL/Table APIs
● Not dependent on any execution mode
What does the
future hold?
5
Support for other streaming systems
● Spark Streaming
● Kafka Connect
● …
Column-level lineage support for Flink
● It’s a hard problem!
● But maybe not for SQL?
● UDFs definitely break simple solutions
Native support for Spark connectors
● In contrast to Flink, Spark already has an extension mechanism that lets you
view the internals of a job as it's running - SparkListener
● We use LogicalPlan abstraction to extract metadata
● We have very similar issues as with Flink :)
● Internal vs external logical plan interfaces
● DataSourceV2 implementations
Support for “raw” Kafka client
● It's very popular to use the raw client to build your own system, not only external
systems
● bootstrap.servers is non-unique and ambiguous - use the Kafka cluster ID instead
● Execution is spread over multiple clients - but maybe not every one of them
needs to always report
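A dataset name built from the cluster ID stays stable no matter which bootstrap address a client used. A hypothetical naming helper (the ID itself would come from Kafka's cluster metadata; the URI format here is an assumption):

```python
def kafka_dataset_name(cluster_id, topic):
    """Identify a topic by the cluster's unique ID rather than by
    bootstrap.servers, which can list any subset of brokers in any
    order and so names the same cluster ambiguously."""
    return f"kafka://{cluster_id}/{topic}"

# Two clients with different bootstrap lists, same cluster -> same name:
a = kafka_dataset_name("lkc-abc123", "orders")
b = kafka_dataset_name("lkc-abc123", "orders")
assert a == b == "kafka://lkc-abc123/orders"
```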
OpenLineage is Open Source
● OpenLineage integrations are open source and open governance
within LF AI & Data
● The best way to fix a problem is to fix it yourself :)
● Second best way is to be active and raise awareness
○ Maybe other people are also interested?
Thanks :)
