Let’s Talk ML on Flink
Boris Lublinsky
Stavros Kontopoulos
Lightbend
Apache Flink London Meetup (15/11/2017)
Talk’s agenda
• Flink ML Roadmap
• Machine learning & Models
• Model Serving with Flink
• Results and next steps
2
Flink ML
Flink ML Roadmap:
https://docs.google.com/document/d/1afQbvZBTV15qF3vobVWUjxQc49h3
Ud06MIRhahtJ6dw
Model Serving
Online Learning
Incremental learning
Batch API (https://issues.apache.org/jira/browse/FLINK-2396)?
3
How people see ML
4
Reality
5
What is the Model?
6
A model is a function transforming inputs to outputs,
y = f(x), for example:
Linear regression:
y = a0 + a1*x1 + … + an*xn
Neural network:
Defining a model this way makes model composition
straightforward to implement: from the implementation
point of view it is just function
composition
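This idea can be sketched in plain Scala. A minimal illustration; the `Model` type alias and the toy models below are assumptions for the sketch, not the talk's actual API:

```scala
// A model is just a function from input to output.
type Model[A, B] = A => B

// Toy linear-regression model: y = a0 + a1*x1, with illustrative coefficients.
val linear: Model[Double, Double] = x => 1.0 + 2.0 * x

// Toy post-processing model, e.g. clamping the prediction to a range.
val clamp: Model[Double, Double] = y => math.min(y, 10.0)

// Composing models is plain function composition.
val pipeline: Model[Double, Double] = linear.andThen(clamp)

pipeline(3.0)   // 7.0: clamp(linear(3.0)) = min(1.0 + 6.0, 10.0)
pipeline(100.0) // 10.0: prediction clamped
```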
Model learning pipeline
UC Berkeley AMPLab introduced machine learning pipelines as a graph defining the
complete chain of data transformation.
7
Models proliferation
Data Scientist Software engineer
8
Model standardization
9
Model lifecycle considerations
• Models tend to change
• Update frequencies vary greatly - from hourly to
quarterly/yearly
• Model version tracking
• Model release practices
• Model update process
10
Customer SLA
• Response time - time to calculate prediction
• Throughput - predictions per second
• Support for running multiple models
• Model update - how quickly and easily can the model be
updated
• Uptime/reliability
11
Model Governance
Models should:
• be governed by the company’s policies and procedures, laws and regulations, and
the organization’s goals
• be transparent, explainable, traceable and interpretable for auditors and
regulators
• have an approval and release process
• have a reason code explaining each decision
• be versioned, with the reasoning for introducing a new version clearly
specified
12
What are we building?
We want to build a streaming system that allows updating
models without interrupting execution (a dynamically
controlled stream).
13
Model Serving with Apache Flink
Idea: Exploit Flink’s low-latency capabilities for serving models. Focus on offline
models loaded from permanent storage, updating them without interruption.
FLIP Proposal:
(https://docs.google.com/document/d/1ON_t9S28_2LJ91Fks2yFw0RYyeZvIvndu8
oGRPsPuk8)
Combines different efforts: https://github.com/FlinkML
● https://github.com/FlinkML/flink-jpmml (https://radicalbit.io/)
● https://github.com/FlinkML/flink-modelServer (Boris Lublinsky)
● https://github.com/FlinkML/flink-tensorflow (Eron Wright)
14
Pros
● Live model updating
● No Flink runtime changes
● Latency guarantees
● Not all systems cover the requirements. For example TF
Serving:
○ Metadata not available.
(https://github.com/tensorflow/serving/issues/612)
○ No new models at runtime:
(https://github.com/tensorflow/serving/issues/422)
15
Flink Low Level Join
• Create a state object for one input (or both)
• Update the state upon receiving elements from its input
• Upon receiving elements from the other input, probe the state and
produce the joined result
16
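The three steps above can be sketched outside Flink as a minimal in-memory analogue. All names here are illustrative assumptions; in Flink itself, the state would live in an operator's keyed state rather than in a plain map:

```scala
import scala.collection.mutable

// Minimal in-memory analogue of a low-level two-input join:
// one input updates per-key state (the model), the other probes it.
class LowLevelJoin[K, M, I, O](serve: (M, I) => O) {
  private val state = mutable.Map.empty[K, M]

  // Input 1: a model update arrives — store it in state.
  def onModel(key: K, model: M): Unit = state(key) = model

  // Input 2: a data record arrives — probe the state, produce the joined result.
  def onData(key: K, input: I): Option[O] = state.get(key).map(serve(_, input))
}

val join = new LowLevelJoin[String, Double => Double, Double, Double]((m, x) => m(x))
join.onData("wine", 1.0)           // None: no model has been served yet
join.onModel("wine", x => 2.0 * x) // a model update arrives
join.onData("wine", 3.0)           // Some(6.0): probe succeeds
```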
Exporting Model
Different ML packages have different export capabilities. We used
the following:
• For Spark ML export to PMML -
https://github.com/jpmml/jpmml-sparkml
• For Tensorflow:
• Normal graph export
• Saved model format
17
Model representation
On the wire
syntax = "proto3";
// Description of the trained model.
message ModelDescriptor {
// Model name
string name = 1;
// Human readable description.
string description = 2;
// Data type for which this model is applied.
string dataType = 3;
// Model type
enum ModelType {
TENSORFLOW = 0;
TENSORFLOWSAVED = 1;
PMML = 2;
};
ModelType modeltype = 4;
oneof MessageContent {
// Byte array containing the model
bytes data = 5;
string location = 6;
}
}
18
Internal
trait Model {
def score(input : AnyVal) : AnyVal
def cleanup() : Unit
def toBytes() : Array[Byte]
def getType : Long
}
trait ModelFactory {
def create(input : ModelDescriptor) :
Model
def restore(bytes : Array[Byte]) : Model
}
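As a hypothetical illustration of the `Model` trait, a minimal implementation might look like the sketch below. The `DoublingModel` is a toy assumption for the example, not part of the talk's code:

```scala
// Simplified, self-contained version of the talk's Model trait.
trait Model {
  def score(input: AnyVal): AnyVal
  def cleanup(): Unit
  def toBytes(): Array[Byte]
  def getType: Long
}

// Hypothetical toy model that doubles a numeric input.
class DoublingModel extends Model {
  override def score(input: AnyVal): AnyVal = input match {
    case d: Double => d * 2.0
    case i: Int    => i * 2
    case other     => other // pass through inputs we don't understand
  }
  override def cleanup(): Unit = ()                   // nothing to release
  override def toBytes(): Array[Byte] = Array.emptyByteArray
  override def getType: Long = 0L
}
```

A real implementation would wrap an actual scoring engine (e.g. a PMML evaluator or a Tensorflow graph) behind the same interface, which is what lets the serving code stay model-type agnostic.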
Key based join
Flink’s CoProcessFunction allows a key-based merge of two streams. When using this API,
data is key-partitioned across multiple Flink executors. Records from both streams are
routed (based on key) to the executor responsible for the actual
processing.
19
Partition based join
Flink’s RichCoFlatMapFunction allows merging two streams in parallel (based on the
parallelism parameter). When using this API, on the partitioned stream, data from
each partition is processed by a dedicated Flink executor.
20
Monitoring
case class ModelToServeStats(
name: String, // Model name
description: String, // Model descriptor
modelType: ModelDescriptor.ModelType, // Model type
since : Long, // Start time of model usage
var usage : Long = 0, // Number of servings
var duration : Double = .0, // Time spent on serving
var min : Long = Long.MaxValue, // Min serving time
var max : Long = Long.MinValue // Max serving time
)
Model monitoring should provide information about the usage, behavior,
performance and lifecycle of the deployed models.
21
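A hypothetical sketch of how such stats could be folded in after each scoring call. The `update` helper and the simplified fields are assumptions for the example, not the talk's actual code:

```scala
// Simplified, stand-alone version of the talk's monitoring record.
case class ModelToServeStats(
  name: String,
  since: Long,                     // start time of model usage
  var usage: Long = 0,             // number of servings
  var duration: Double = 0.0,      // total time spent on serving (ms)
  var min: Long = Long.MaxValue,   // min serving time (ms)
  var max: Long = Long.MinValue    // max serving time (ms)
) {
  // Hypothetical helper: fold one serving's execution time into the stats.
  def update(executionMs: Long): Unit = {
    usage += 1
    duration += executionMs
    min = math.min(min, executionMs)
    max = math.max(max, executionMs)
  }
}

val stats = ModelToServeStats("wine-model", System.currentTimeMillis())
stats.update(5); stats.update(12); stats.update(3)
// stats now holds usage = 3, min = 3, max = 12, duration = 20.0
```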
Queryable state
Queryable state (interactive queries) is an approach that lets you get more from
streaming than just processing the data. It allows treating the stream
processing layer as a lightweight embedded database and, more concretely, directly
querying the current state of a stream processing application, without needing to materialize
that state to external databases or storage first.
22
Queryable state
Limitations
Operator state (non-keyed) is not queryable:
https://issues.apache.org/jira/browse/FLINK-7771
23
Where are we now?
• The proposed solution does not require modifying Flink proper. It
rather represents a framework built by leveraging Flink capabilities,
although an extension of queryable state would be helpful
• Flink improvement proposal (FLIP-23) - model serving - is close to
completion -
https://docs.google.com/document/d/1ON_t9S28_2LJ91Fks2yF
w0RYyeZvIvndu8oGRPsPuk8
• A fully functioning prototype supporting PMML and
Tensorflow is currently in the FlinkML Git repo
(https://github.com/FlinkML)
24
Next steps
• Complete the FLIP-23 document
• Integrate the Flink Tensorflow and Flink PMML
implementations to provide PMML and Tensorflow
specific model implementations - work in progress
• Look at bringing in additional model implementations
• Look at model governance and management
implementation
25
Model Serving Beyond
Flink
Author: Boris Lublinsky
Download from here:
https://info.lightbend.com/ebook-serving-machine-learning-models-register.html
26
Thank you
Any Questions?
27