INTRODUCTION TO
TENSORFLOW
ARCHITECTURE
MANI SHANKAR GOSWAMI
@Mani_Shankar_G
BEFORE WE START…
• PLEASE UNDERSTAND TensorFlow DIFFERS FROM MOST DATA ENGINES OUT
THERE FOR OBVIOUS REASONS.
• TensorFlow differs from batch dataflow systems in two respects:
• The model supports multiple concurrent executions on overlapping subgraphs of the
overall graph.
• Individual vertices may have mutable state that can be shared between different
executions of the graph.
• Some References (picked from OSDI 16 Conference):
• The principal limitation of a batch dataflow system is that it requires the input data to
be immutable, and all of the sub-computations to be deterministic, so that the
system can re-execute sub-computations when machines in the cluster fail.
• For example, the SparkNet system for training deep neural networks on Spark takes
20 seconds to broadcast weights and collect updates from five workers [55]. As a
result, in these systems, each model update step must process larger batches,
slowing convergence [8]. We show in Subsection 6.3 that TensorFlow can train larger
models on larger clusters with step times as short as 2 seconds.
WHAT IS TENSORFLOW?
Here is the formal definition picked from https://www.tensorflow.org/:
TensorFlow is an open source software library for numerical
computation using data flow graphs. Nodes in the graph represent
mathematical operations, while the graph edges represent the
multidimensional data arrays (tensors) communicated between them.
The flexible architecture allows you to deploy computation to one or
more CPUs or GPUs in a desktop, server, or mobile device with a single
API.
TensorFlow was originally developed by researchers and engineers
working on the Google Brain Team within Google's Machine
Intelligence research organization for the purposes of conducting
machine learning and deep neural networks research.
WHAT IS A DATA FLOW GRAPH?
Consider a typical linear equation: y = W * x + b
where W is the weight, x is an example (input) and b is the bias.
This linear equation can be represented as an acyclic graph, as below:
[Diagram: Examples and Weight feed MatMul; its output and Biases feed Add, then Relu; Gradients flow back to produce updated Weights and Biases.]
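For concreteness, a minimal sketch of building that graph with the TF1 Python API; the shapes and names are illustrative assumptions, not from the slides:

import tensorflow as tf

# Construction phase: build the graph, nothing runs yet.
x = tf.placeholder(tf.float32, shape=[None, 4], name="examples")
W = tf.Variable(tf.random_normal([4, 2]), name="weight")
b = tf.Variable(tf.zeros([2]), name="biases")
y = tf.nn.relu(tf.matmul(x, W) + b)  # MatMul -> Add -> Relu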
GENERALIZING THE DATAFLOW GRAPH
[Diagram: variables and constants (Biases, Learning Rate, …) feed operations (Mul, Add, -=) in a gradient-computation subgraph whose outputs update the variables.]
LAYERED VIEW
[Diagram, layers from top to bottom:]
• Libraries (Training/Inference Libs)
• Client Layer
• API Layer
• Distributed Master / Data Flow Controller
• Kernel Execution Layer
• Network Layer / Device Layer
TENSORFLOW’S DEVICE INTERACTION VIEW
TensorFlow uses CUDA and cuDNN to control GPUs and boost performance.
[Diagram: TENSORFLOW sits on top of CUDA and cuDNN, which drive GPU #0 and GPU #1; the CPU is driven directly.]
EXECUTION PHASES
• By deferring the execution until the entire program is available,
TensorFlow optimizes the execution phase by using global
information about the computation
• Example:
• TensorFlow achieves high GPU utilization by using the graph’s dependency
structure to issue a sequence of kernels to the GPU without waiting for
intermediate results
• TensorFlow uses deferred execution via the dataflow graph to
offload larger chunks of work to accelerators.
[Diagram: the CLIENT builds the graph in the CONSTRUCTION PHASE; the WORKERS run it in the EXECUTION PHASE.]
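A small TF1 sketch of the two phases (the values are illustrative): nothing executes while the graph is built; computation happens only when the session runs a node.

import tensorflow as tf

# Construction phase: these lines only add nodes to the graph.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

# Execution phase: the session evaluates the requested node.
with tf.Session() as sess:
    print(sess.run(c))  # 6.0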
WORKER’S DEVICE INTERACTIONS
• The worker service in each task:
• handles requests from the master,
• schedules the execution of the kernels for the operations that comprise a local subgraph
• mediates direct communication between tasks.
• It is optimized for running large graphs with low overhead
• It dispatches kernels to local devices and runs kernels in parallel when possible, for example by
using multiple CPU cores or GPU streams.
[Diagram: the CLIENT holds a Session to the MASTER, which drives a WORKER managing GPU #1, GPU #2 and CPU #0.]
WORKER’S SCHEDULING & PLACEMENT
ALGORITHM
• Uses COST Model to determine placement
• contains estimates of the sizes of the input and output tensors for each
graph node
• Uses estimates of the computation time required for each node
• statically estimated based on heuristics associated with different operation
types
• also uses metrics collected from earlier executions of the graph
• The placement algorithm first runs a simulated execution of the graph
• For each node, feasible devices are determined
• When multiple devices are eligible for a node's execution
• the algorithm uses a greedy heuristic: it examines the effect on completion time
using the COST MODEL
• the device where the node’s operation would finish soonest is generally
selected
• Applies constraints such as colocation requirements (a schematic sketch follows)
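A hypothetical Python sketch of the greedy placement loop described above; the helper names and cost-model interface are invented for illustration and are not TensorFlow internals:

def place_nodes(graph, devices, cost_model):
    """Greedy placement sketch: pick the device that finishes soonest."""
    placement = {}
    for node in graph.topological_order():
        # Feasible devices: those with a kernel for this op, honoring
        # constraints such as colocation requirements.
        feasible = [d for d in devices if d.supports(node.op)]
        # The cost model supplies tensor-size and compute-time estimates.
        placement[node] = min(
            feasible,
            key=lambda d: cost_model.estimated_finish_time(node, d))
    return placement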
SINGLE MACHINE VS DISTRIBUTED SYSTEM
STRUCTURE
The client creates the computation graph during the construction phase.
It opens a session to the master and sends the constructed graph for execution.
Finally, when the client evaluates a node or nodes in the graph, the master starts the execution by distributing sub-graphs to workers.
[Diagram, Single Process: Client, Master and a worker with GPU0…GPUn share one process; session run → execute sub-graph.
Diagram, Distributed Version: the Client process holds a session to the Master, which sends execute sub-graph requests to worker processes 1–3, each owning GPU0…GPUn and CPU0.]
KERNEL EXECUTION
• TF manages two types of thread pools on each device to
parallelize operations: the inter-op and intra-op thread pools
• The inter-op pool is a normal thread pool, used when two or more
operations are scheduled on the same device.
• Some operations have multi-threaded kernels; these use the
intra-op thread pool (see the sketch after the diagram)
[Diagram: ops A–F spread across CPU #0 and CPU #1 through the inter-op pool, while a multi-threaded kernel fans out over the intra-op pool.]
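Both pool sizes are configurable per session; a minimal TF1 sketch (the thread counts are arbitrary):

import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=4,  # concurrent independent ops
    intra_op_parallelism_threads=8)  # threads inside one multi-threaded kernel
sess = tf.Session(config=config)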
SESSION ON A SINGLE PROCESS
[Diagram: one tf.Session bound to CPU: 0 and GPU: 0 in a single process.]
with tf.Session() as sess:
    sess.run(init_op)          # run variable initializers once
    for _ in range(STEPS):
        sess.run(train)        # one training step per call
CROSS-DEVICE COMMUNICATION
s += w * x + b
[Diagram: within one Worker, the variables s, w and b live on the CPU while MatMul and Add run on GPU #0; x feeds MatMul and += updates s.]
CROSS-DEVICE COMMUNICATION
[Diagram: the same graph with the cross-device edges made explicit: TensorFlow inserts a paired SEND/RECV node on every edge that crosses the CPU/GPU boundary within the Worker.]
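Device placement can also be forced explicitly; a minimal sketch (assuming a machine that actually has a GPU) where the variable lives on the CPU and the MatMul runs on the GPU, so a SEND/RECV pair is inserted automatically:

import tensorflow as tf

with tf.device("/cpu:0"):
    w = tf.Variable(tf.ones([2, 2]))      # variable pinned to the CPU
with tf.device("/gpu:0"):
    y = tf.matmul(w, w)                   # w crosses CPU -> GPU via SEND/RECV

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))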
CREATING A CLUSTER
cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})
server = tf.train.Server(cluster, job_name="worker", task_index=0)
[Diagram: a tf.Session alongside several tf.train.Server instances, each bound to CPU: 0 and GPU: 0 on its own host.]
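A slightly fuller sketch of bootstrapping one task; the host lists here are illustrative assumptions (in practice they usually come from flags or environment variables):

import tensorflow as tf

ps_hosts = ["ps0:2222"]
worker_hosts = ["worker0:2222", "worker1:2222"]

cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# A PS task would instead block and serve variables:
# server.join()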
DISTRIBUTED COMMUNICATION (DATA PARALLELISM
& REPLICATION)
• the master assigns a sub-graph to each worker; in this case the model parameters are placed on the PS task
• each worker is responsible for deciding and placing the nodes of its sub-graph on devices
• nodes are executed on multiple GPUs/CPU cores simultaneously, subject to dependency
resolution (a device-placement sketch follows the diagram)
[Diagram: Device 1 (PS) holds s, w and b on its CPU; Worker #0 and Worker #1 each run MatMul and Add over input x on their own GPU #0.]
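In TF1 this variables-on-PS / ops-on-worker split is usually expressed with tf.train.replica_device_setter; a hedged sketch with illustrative hosts and shapes:

import tensorflow as tf

cluster = tf.train.ClusterSpec(
    {"ps": ["ps0:2222"], "worker": ["worker0:2222", "worker1:2222"]})

# Variables go round-robin onto PS tasks; other ops stay on this worker.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    w = tf.Variable(tf.zeros([784, 10]))  # placed on a PS task
    b = tf.Variable(tf.zeros([10]))       # also on a PS task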
DISTRIBUTED COMMUNICATION (DATA
PARALLELISM)
• Transfers between local CPU and GPU devices use the cudaMemcpyAsync() API to overlap computation and
data transfer.
• Transfers between two local GPUs use peer-to-peer DMA, avoiding an expensive copy via the host CPU.
• Transfers between tasks use RDMA over Converged Ethernet where available; otherwise they fall back to gRPC over TCP
[Diagram: the data-parallel graph with a SEND/RECV pair on every cross-task edge; parameter transfers between the PS and the workers go over RDMA, and Worker #0 is marked is_chief=true.]
REPLICATED TRAINING VIEW
DISTRIBUTED COMMUNICATION (MODEL
PARALLELISM)
• In model parallelism, the graph’s operations are distributed across the cluster
[Diagram: the graph split by operation: Device 1 (PS) holds s, w and b on its CPU, while the MatMul and Add ops run on the GPU #0 of Worker #0 and Worker #1 (Device 2).]
DISTRIBUTED COMMUNICATION (MODEL PARALLELISM)
• Transfers between local CPU and GPU devices use the cudaMemcpyAsync() API to overlap computation and
data transfer.
• Transfers between two local GPUs use peer-to-peer DMA, avoiding an expensive copy via the host CPU.
• Transfers between tasks use RDMA over Converged Ethernet where available; otherwise they fall back to gRPC over TCP
[Diagram: the model-parallel graph with SEND/RECV pairs carrying tensors between tasks over RDMA; each SEND is tagged with its destination (e.g. Dest: worker#1, GPU #0; Dest: worker#0, CPU #0), and Worker #0 is marked is_chief = True.]
CHIEF WORKER
• Chief is a task which is assigned some additional responsibilities in the cluster.
• Its responsibilities:
• Checkpointing:
• Saves graph state in a configured store such as HDFS
• Runs at a configurable frequency
• Maintaining Summary
• Runs all summary operations
• Saving Models
• Step Counters
• Keeps an eye on total steps taken
• Recovery
• restores the graph from the most recent checkpoint and resumes training
where it stopped
• Initializing all the variables in the graph
• Can be monitored through TensorBoard.
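In TF1 the checkpoint save/restore cycle is done with tf.train.Saver; a minimal sketch (the path and step number are illustrative):

import tensorflow as tf

w = tf.Variable(tf.zeros([2]))            # some state worth saving
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt", global_step=100)   # periodic save
    saver.restore(sess, tf.train.latest_checkpoint("/tmp"))  # recovery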
PARAMETER TASKS VS WORKER TASKS
• In TensorFlow, the workload is distributed in the form of PS and worker tasks.
• PS tasks hold:
• Variables
• Update operations
• Worker tasks hold:
• Pre-processing
• Loss calculation
• Backpropagation
• Multiple worker and PS tasks can run simultaneously, but TF ensures that the PS
is sharded so that each variable has exactly one physical copy. There are
various algorithms which distribute variables across PS tasks considering load and
network.
• It also allows partitioning large variables (tens of GBs) across multiple PS tasks, as sketched below
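A hedged sketch of such partitioning with the TF1 partitioner API; the shape and shard count are illustrative:

import tensorflow as tf

# Shard one large variable into 4 pieces, e.g. across 4 PS tasks.
embedding = tf.get_variable(
    "embedding", shape=[10000000, 128],
    partitioner=tf.fixed_size_partitioner(num_shards=4))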
TYPES OF TRAINING REPLICATION
• In-Graph Replication
• Here a single client connects to a master and requests distribution of the
replicated graph, along with data, across all available workers.
• Works well for small workloads, but beyond that it does not scale well.
• Between-Graph Replication (Recommended Approach)
• In this approach multiple clients take part in replication
• Each machine has a client which talks to the local master and gives it the cluster
information and the graphs and data to be executed.
• The master ensures that PS tasks are sharded across the cluster and schedules
tasks on the local worker
• The worker handles all communication and synchronization (a between-graph sketch follows).
• Between-Graph Replication can be of two types:
• Synchronous
• Asynchronous
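A between-graph sketch: every worker runs this same script with its own task_index; `cluster` and `train_op` are assumed defined as on the earlier slides.

import tensorflow as tf

server = tf.train.Server(cluster, job_name="worker", task_index=task_index)

with tf.train.MonitoredTrainingSession(
        master=server.target,
        is_chief=(task_index == 0),          # task 0 acts as chief
        checkpoint_dir="/tmp/train_logs") as sess:
    while not sess.should_stop():
        sess.run(train_op)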
ASYNCHRONOUS VS SYNCHRONOUS REPLICATION
[Diagram, SYNCHRONOUS DATA PARALLELISM: each of Devices 1–3 computes an update for its model replica over its own input; an Add node on the PS server aggregates them and applies a single Update to the parameters P.
Diagram, ASYNCHRONOUS DATA PARALLELISM: each of Devices 1–3 applies its own Update to P on the PS server independently, without waiting for the others.]
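The synchronous flavor is usually expressed in TF1 by wrapping the optimizer; a hedged sketch where `loss` is assumed defined by the model, and omitting the wrapper gives asynchronous training:

import tensorflow as tf

opt = tf.train.GradientDescentOptimizer(0.01)
# Aggregate gradients from all 3 replicas before one shared update.
opt = tf.train.SyncReplicasOptimizer(
    opt, replicas_to_aggregate=3, total_num_replicas=3)
train_op = opt.minimize(loss)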
OPTIMIZATIONS
• Common Subexpression Elimination
• Schedules tasks in such a way that the time window for which
intermediate results must be stored is reduced.
• Using ASAP/ALAP calculations, the critical path of the graph is determined to
estimate when to start the Receive nodes. This reduces the chance
of sudden I/O spikes
• Non-blocking kernels
• Lossy compression of higher-precision internal representations when
sending data between devices
• XLA (Accelerated Linear Algebra) is a domain-specific compiler for
linear algebra that optimizes TensorFlow computations.
• Tensors also enable other optimizations for memory management
and communication, such as RDMA and direct GPU-to-GPU transfer
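In TF1, XLA JIT compilation can be switched on per session; a minimal sketch:

import tensorflow as tf

config = tf.ConfigProto()
# Enable XLA JIT compilation for this session's graph.
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)
sess = tf.Session(config=config)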
FAULT TOLERANCE
• Checkpointing ensures that the latest state is always available
• If a non-chief worker gets killed
• Since workers are stateless, when the cluster manager brings it back
up it simply contacts the PS tasks to get the updated parameters and
resumes
• If a PS task fails
• In this case the chief/supervisor is responsible for noting the failure
• The supervisor/chief interrupts training on all workers and restores all PS
tasks from the last checkpoint.
• If the chief itself fails
• Training is interrupted, and when the chief comes back up it restores from a
checkpoint.
• MonitoredTrainingSession allows automating the recovery
• Another approach could be to use ZooKeeper for chief election and pass
the chief role to the newly elected task
SERVING THE MODEL
• TensorFlow's recommended way to serve models in production is
TF Serving
• Advantages
• Supports both online and batching modes
• Supports both hosted and library approaches
• Supports multiple models in a single process
• Supports Docker & Kubernetes (an export sketch follows)
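Models are handed to TF Serving as SavedModels; a hedged TF1 export sketch, where `sess`, `x` and `y` are assumed from the training program and the numbered export directory follows the layout Serving expects:

import tensorflow as tf

# Export version 1 of the model for TF Serving to pick up.
tf.saved_model.simple_save(
    sess,
    export_dir="/tmp/model/1",
    inputs={"x": x}, outputs={"y": y})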
BENCHMARKS
Instance type: NVIDIA® DGX-1™
GPU: 8x NVIDIA® Tesla® P100
OS: Ubuntu 16.04 LTS with tests run via Docker
CUDA / cuDNN: 8.0 / 5.1
TensorFlow GitHub hash: b1e174e
Benchmark GitHub hash: 9165a70
Build Command: bazel build -c opt --copt=-march="haswell" --config=cuda
//tensorflow/tools/pip_package:build_pip_package
REFERENCES & FURTHER READING
• Paper on Large-Scale Machine Learning on Heterogeneous
Distributed Systems
• TensorFlow Documentation
• TensorFlow Tutorials
• Hands-On Machine Learning with Scikit-Learn and TensorFlow
by Aurélien Géron
THANK YOU!