Measuring Docker Performance
What a mess!!!
Emiliano Casalicchio
emc@bth.se
Blekinge Institute of Technology
Vanessa Perciballi
v.perciballi@gmail.com
University of Rome Tor Vergata
https://www.bth.se/amlcs/
Important Dates
• Paper submission deadline June 9, 2017
• Author notification June 30, 2017
• Camera-ready papers July 21, 2017
Presented papers will be included in the Proceedings of
the IEEE Conference ICCAC.
A few selected papers will be published in a special issue
on Cluster Computing, Elsevier.
Advertisement (30 secs.)
Agenda
• Background and motivations
• Performance studies and contribution
• Monitoring Infrastructure
• Experimental setup
• Architecture
• Workload
• Metrics
• Results
Container
• “The new cloud operating environment
will be built using containers as a
foundation. No virtual machines…” (IBM's view)
• Pros (+)
• solves the dependency hell problem
• solves the application portability problem
• mitigates the performance overhead problem
• fine-grained scalability
• Cons (−)
• poor I/O performance
• isolation for multi-tenancy is still weak
• orchestration mechanisms not yet mature
● Examples
○ Google Container Engine
○ Amazon Elastic Container Service
○ Azure Container Service
● 2013: birth of Docker, the current market leader
Motivation
• Container orchestration is about
how to select, deploy, monitor,
and dynamically control the
configuration of multi-container
packaged applications in the cloud
• Orchestration tools do not yet
include sophisticated mechanisms
for run-time adaptation
• Kubernetes
• Swarm
• Mesos
Monitoring, Analysis and Modeling
• How to monitor
• tools
• observation point
• How to evaluate the performance of container based systems
• Characteristics of the container workload
[Figure: the MAPE-K autonomic control loop — an Autonomic manager (Monitor, Analyze, Plan, Execute, shared Knowledge) controlling a Managed element]
Performance studies
• Evaluation of container overhead when running on physical servers
• W. Felter et al. An updated performance comparison of virtual machines and
Linux containers.
• R. Morabito et al. Hypervisors vs. lightweight virtualization: A performance
comparison.
• Evaluation of container overhead when running on cloud infrastructures
(e.g. EC2)
• Z. Kozhirbayev et al. A performance comparison of container-based technologies
for the cloud.
• Methodology used
• Run benchmarks on top of containers
• Compare/assess based on benchmark statistics
Our approach
• Identify and assess the monitoring options
• Quantify the performance overhead introduced by
Docker versus the native environment, using system
metrics
• Analyze the influence of the measurement tools on the
performance results
Monitoring Infrastructure
Experimental environment
Table 1: Experimental environment characteristics
• Processor: AMD Turion(tm) II Neo N40L Dual-Core @ 800 MHz
• # of CPUs, cores/socket, threads/core: 2, 2, 1
• RAM: 2 GB @ 1333 MHz
• Disk (file system type): ATA HDD 250 GB (ext4)
• Platforms: Ubuntu 14.04 Trusty, Docker v1.12.3
• Monitoring tools: Grafana 3.1.1, Prometheus 1.3.1, cAdvisor 0.24.1
• CPU utilization (%CPU). This metric is measured
using docker stats, cAdvisor and mpstat. The first
two tools report the % of CPU used by the monitored
application. mpstat reports the percentage of CPU
utilization (%user) that occurred while executing at
the user level (application), with %CPU = %user − ε;
while running experiments in our controlled environment
we empirically estimated ε = 2.5%.
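As a concrete illustration, the three observation points could be sampled as below. This is a minimal sketch: the container name mybench is hypothetical, and only the tool versions follow Table 1.

  # Launch cAdvisor (container-level observation point); the bind
  # mounts follow cAdvisor's documented run command.
  docker run -d --name=cadvisor -p 8080:8080 \
    -v /:/rootfs:ro -v /var/run:/var/run:rw \
    -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro \
    google/cadvisor:v0.24.1

  # Host-level observation point: 1-second samples of CPU utilization
  mpstat 1

  # Docker-level observation point for the monitored container
  docker stats --no-stream mybench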
Workload
• CPU-intensive workload (see the invocation sketch below)
• (Sysbench) verifies prime numbers by standard division
of the input number by all integers between 2 and the
square root of the number
• (Stress-ng) matrix product
• Disk-I/O-intensive workload
• sequential reads/writes, random reads/writes, or a
combination, on files large with respect to the RAM size,
so that caching cannot affect the benchmark results
• Sysbench
• FIO
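As an illustration, the benchmarks might be invoked as follows. All parameter values are examples (sysbench is shown with the 0.4.x-style CLI contemporary with this deck), not necessarily the exact settings of the study.

  # CPU-intensive: verify primes up to N with t threads
  sysbench --test=cpu --cpu-max-prime=64000 --num-threads=4 run

  # CPU-intensive: matrix product with 4 workers for 60 s
  stress-ng --matrix 4 --timeout 60s --metrics-brief

  # Disk-I/O-intensive: 16 GB file set, random read/write mix
  sysbench --test=fileio --file-total-size=16G prepare
  sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrw run
  sysbench --test=fileio --file-total-size=16G cleanup

  # Disk-I/O-intensive alternative: fio with direct I/O to bypass the page cache
  fio --name=rndrw16g --rw=randrw --size=16g --direct=1 --bs=4k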
Metrics
• CPUovh is the CPU overhead expressed as a fraction
of the %CPU in the native environment. It is defined as

  CPUovh = |%CPU_docker − %CPU_native| / %CPU_native

• IOovh is the disk I/O throughput overhead expressed
as a fraction of the kBr/s or kBw/s in the native
environment. It is defined as

  IOovh,r = |(kBr/s)_docker − (kBr/s)_native| / (kBr/s)_native

for the read throughput and as

  IOovh,w = |(kBw/s)_docker − (kBw/s)_native| / (kBw/s)_native

for the write throughput.
• Eovh is the execution time overhead expressed as a
fraction of the execution time E in the native
environment. It is defined as

  Eovh = |E_docker − E_native| / E_native
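The overhead metrics above are simple relative differences; a tiny shell sketch (with made-up sample values) shows the computation:

  # |docker - native| / native, printed as a percentage
  overhead() {
    awk -v d="$1" -v n="$2" \
      'BEGIN { o = (d - n) / n; if (o < 0) o = -o; printf "%.2f%%\n", o * 100 }'
  }
  overhead 80.5 78.0   # CPUovh from two %CPU samples (hypothetical values)
  overhead 1890 2100   # IOovh,r from kBr/s samples (hypothetical values)
  overhead 265 252     # Eovh from execution times in seconds (hypothetical values)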
Execution time overhead
[Figure 3: Execution time overhead Eovh (%) vs input size (16000–128000), for 1, 2, 4 and 6 threads]
5.2 CPU-intensive workload
We run sysbench with input number = {16000, 32000, 64000,
128000}. Following the approach used in the literature (e.g. [6,
9]), we first analyze the execution time E and the overhead
Eovh for increasing input sizes and an increasing number of threads.
It could be misleading
Benchmark: computing prime numbers up to N
Performance metrics:
• Benchmark: execution time (seconds)
• Monitoring tool: %CPU utilization
Iterations: 10
Number of threads: {1,...,6}
Input (N): 1000, 2000, 4000, 8000, 16000,
32000, 64000, 128000, 256000
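A sweep over this configuration could be scripted as below; a sketch only, with result collection omitted (the Docker-side runs would wrap the same command in docker run):

  # 10 iterations x threads {1..6} x nine input sizes, native side
  for i in $(seq 1 10); do
    for t in 1 2 3 4 5 6; do
      for n in 1000 2000 4000 8000 16000 32000 64000 128000 256000; do
        sysbench --test=cpu --cpu-max-prime="$n" --num-threads="$t" run
      done
    done
  done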
CPU load and CPU overhead (sysbench)
[Figure 4: CPU load (%CPU, 0–100) vs input number (16000–128000) for 1, 2, 4 and 6 threads, measured by means of: mpstat (Native and Docker mp), docker stats (Docker ds) and cAdvisor (Docker ca)]
[Figure 5: CPU overhead (%, log scale 0.1–100) vs input size, for 1, 2, 4 and 6 threads, computed comparing %CPUnative with %CPU measured with docker stats, mpstat and cAdvisor]
CPU load and CPU overhead (Stress-ng)
Performance metric: %CPU utilization
Number of stress-ng processes (workers)
Input (N): 64, 128, 256
[Charts: %CPU and CPU overhead vs input size]
CPU load and CPU overhead (Stress-ng)
[Charts: CPU load and CPU overhead vs input size (×10³)]
Disk I/O (Sysbench)
[Figure 6: Disk I/O throughput vs file size (16–128 GB), Native vs Docker, measured in transactions per second (tps), kBytes read per second (kBr/s) and kBytes written per second (kBw/s)]
Read and write operations are performed on files of sizes 16 GB, 32
GB, 64 GB and 128 GB. Considering that the RAM of the
server used for the experiments is 2 GB, we are certain
that the OS caching mechanisms will not affect the
measurements.
As mentioned before, the measurements are done only with
iostat, because docker stats does not collect disk-I/O-related
data and cAdvisor is unstable for monitoring I/O.
Measuring container performance with the goal of
characterizing overhead and workload is not an easy task, also
because there are no stable, dedicated tools that cover a
wide range of performance metrics. From our measurement
campaign we learned that the available container monitoring
tools give different results.
[Figure 7: Disk I/O overhead (%) for read (kB read/s) and write (kB write/s), file sizes 16–128 GB]
[7] W. Gerlach, W. Tang, K. Keegan, A. Wilke, J. Bischof, M. D'Souza,
D. Murphy-Olson, N. Desai, and F. Meyer. Container-based execution
environments for multi-cloud scientific workflows. In Proceedings of
the 5th International Workshop on Data-Intensive Computing in the
Clouds, DataCloud '14, Piscataway, NJ, USA, 2014. IEEE.
[8] M. Helsley. LXC: Linux container tools. IBM developerWorks
Technical Library, 2009.
[9] Z. Kozhirbayev and R. O. Sinnott. A performance comparison of
container-based technologies for the cloud. Future Generation
Computer Systems, 68:175–182, 2017.
[10] Linux Containers. https://linuxcontainers.org/
• Measured with iostat
• docker stats and cAdvisor do not
provide I/O statistics
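For reference, a plausible iostat invocation that yields the tps, kB_read/s and kB_wrtn/s columns plotted above; interval and sample count are arbitrary:

  # Per-device throughput every second, 30 samples
  iostat -d 1 30
  # Extended statistics (utilization, await, queue size) if needed
  iostat -dx 1 30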
Disk I/O (FIO)
I/O – FIO (monitoring)
Overhead computed as for the Sysbench case.
[Charts: I/O overhead vs file size (GB)]
Stress-ng quotas
CPU quota 50–70%:
--cpu-quota=70000
--cpu-period=100000
[Charts: %CPU (0–100) for 1 worker and 6 workers, as measured by docker stats, cAdvisor and mpstat]
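A sketch of how such a capped run might look; the stress-ng image name and workload parameters are illustrative, not necessarily the ones used here:

  # Cap the container at 70% of one CPU: 70 ms of CPU time per 100 ms period
  docker run -d --name capped \
    --cpu-period=100000 --cpu-quota=70000 \
    alexeiled/stress-ng --matrix 6 --timeout 300s

  # Compare the observation points while the quota is active
  docker stats --no-stream capped
  mpstat 1 5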
Conclusions
• Heterogeneity of results
• Performance measures depend on
• type of workload (benchmark used)
• load level
• use of quotas
• monitoring tool used (and observation point)
https://www.bth.se/amlcs/
Important Dates
• Paper submission deadline June 9, 2017
• Author notification June 30, 2017
• Camera-ready papers July 21, 2017
Presented papers will be included in the Proceedings of
the IEEE Conference ICCAC.
A few selected papers will be published in a special issue
on Cluster Computing, Elsevier.
Questions ?
Emiliano Casalicchio
http://www.bth.se/people/emc
emc@bth.se