Ready to get shipped?
By Chafik Belhaoues
@XebiaFr
Introduction [History & newness of the idea]
Anatomy of the building blocks
Namespaces
cgroups
Storage backends
Execution environments
A little bit of history:
Shipping containers were created in 1956 by Malcom McLean in New York, simply because
time is money (roughly a 90% cut in transport costs).
[Images: cargo handling before / after containerization]
Did the containerization {concept} exist before Docker?
OH YEAH…
By Pivotal
The need for containerization:
Develop, ship, and run applications {everywhere}.
Concept? Product? Life-cycle engine? …you said {DevOps} tool?
A single, runnable, distributable executable.
What is the difference from the other forms of virtualization, then?
Open source [CS version].
Not OS-related [theoretically].
No hypervisor needed.
A different [new] vision of IT.
Closer to most IT needs.
Hardware-centric:
A VM packages a full stack (virtual hardware, kernel, a user space).
Designed with machine operators in mind, not software developers.
VMs offer no facilities for application versioning, monitoring, configuration, logging or service
discovery…
Application-centric:
Packages only the user space; there is no kernel or virtual hardware.
Sandboxing method known as containerization = Application virtualization.
Overview:
Docker is based on a client-server architecture. The client {user commands} talks to the Docker
Daemon.
Daemon: runs on a host machine.
Client: accepts commands from the user and communicates back and forth with a Docker
Daemon using its API.
3 components involved: build..ship..run
Images: read-only templates; images are the build component of Docker.
Registries: hold images; they are the distribution component of Docker.
Containers: hold everything that is needed for an application to run; they are the run component of
Docker.
Combination:
A container consists of a read-only image based on a given operating system, user-added files,
and meta-data.
Anatomy of the building blocks:
Apartment complex analogy:
1. Each apartment will require water and electricity and these resources should be distributed
fairly {resources}.
2. The apartments are isolated with walls to keep people separate from their respective neighbors
{isolation}.
3. Each apartment also has a door, lock, and keys {security}.
4. Finally, most apartment complexes benefit from a manager who works to ensure a consistent
and clean steady state of operations {management}.
By analogy to system resources required for a container, the kernel should implement 4
elements:
- Resource Management.
- Process Isolation.
- Security.
- Tooling (CLI).
Resource management is provided by control groups (cgroups).
Process isolation is provided by kernel namespaces.
Security is provided by policy managers like SELinux.
Overall management by Docker CLI.
Namespace:
Wraps a global system resource in an abstraction.
Changes are visible only inside the namespace.
Kernel namespaces allow the new process to have its own hostname, IP address and a whole
network stack, filesystem, PID, IPC stack, and even user mapping.
This makes the container look like a VM.
Kernel space:
Strictly reserved for running a privileged operating system kernel, kernel extensions, and most
device drivers; the gate to this land has been managed by the CAP_SYS_ADMIN capability since
kernel 2.2 [before that it was the superuser, or root, ID 0].
User space [userland]:
The memory area where application software and some drivers execute.
These interactions are managed by 3 syscalls:
clone
setns
unshare
Playing with Syscalls:
clone:
Creates a new process, in a manner similar to fork, and creates a new namespace for every
CLONE_NEW* flag passed.
Unlike fork, the child process is allowed to share parts of its execution context with the calling
process (the memory space, the table of file descriptors, the table of signal handlers…).
setns:
Allows the calling process to join an existing namespace.
unshare:
Moves the calling process to a new namespace; in other words, it disassociates parts of its execution
context that are currently being shared with other processes (or threads).
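A minimal sketch of setns in action (the classic ns_exec pattern from the setns(2) man page; the program name, the /proc path and the command are placeholders, and joining most namespace types requires root):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* ns_exec: join the namespace behind a /proc/PID/ns/* file, then run a
 * command inside it. Usage (hypothetical): ./ns_exec /proc/1234/ns/net sh */
int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s /proc/PID/ns/FILE cmd [args...]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    int fd = open(argv[1], O_RDONLY);      /* descriptor for the target namespace */
    if (fd == -1) { perror("open"); exit(EXIT_FAILURE); }

    if (setns(fd, 0) == -1) {              /* join it (0 = accept any namespace type) */
        perror("setns");
        exit(EXIT_FAILURE);
    }

    execvp(argv[2], &argv[2]);             /* the command now runs inside that namespace */
    perror("execvp");
    exit(EXIT_FAILURE);
}

Pointing it at /proc/<PID>/ns/net of a running container and executing a shell, for example, drops you into that container's network namespace.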
Namespace Date Kernel version
mount 2002 2.4.19
uts 2006 2.6.19
ipc 2006 2.6.19
pid 2008 2.6.24
net 2009 2.6.29
user 2013 3.8
MNT namespace:
Isolates the set of filesystem mount points.
This means that processes in different mount namespaces can have different views of the filesystem
hierarchy.
The container “thinks” that a directory which is actually mounted from the host OS is exclusive to
the container.
Interacting with this namespace is simply done via the mount/umount syscalls.
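A hedged sketch of what that looks like with raw syscalls (requires root; /mnt is just an illustrative mount point): unshare the mount namespace, make mounts private, and mount a tmpfs the host never sees.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    /* Give this process its own copy of the mount table. */
    if (unshare(CLONE_NEWNS) == -1) { perror("unshare"); exit(1); }

    /* Keep our mount events from propagating back to the host namespace. */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) == -1) {
        perror("mount private"); exit(1);
    }

    /* This tmpfs is visible here, but not from the host. */
    if (mount("tmpfs", "/mnt", "tmpfs", 0, NULL) == -1) {
        perror("mount tmpfs"); exit(1);
    }

    execlp("sh", "sh", (char *)NULL);   /* a shell that sees the private /mnt */
    perror("execlp");
    return 1;
}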
All about Isolation…
PID namespace:
Isolates the process ID number space, which means processes in different PID namespaces can have the same
PID.
The container thinks it has a separate, standalone instance of the OS.
Technically, the first process created in a new PID namespace gets the famous PID 1, "init".
Inside the namespace fork/clone syscalls will produce processes with PIDs that are unique.
This mechanism allows containers to provide functionality such as:
suspending/resuming the set of processes.
PID consistency on migration.
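A minimal clone-based sketch (requires root): the child lands in a fresh PID namespace and sees itself as PID 1, while the parent sees an ordinary PID.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];        /* stack for the cloned child */

static int child_fn(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace this process is PID 1, the "init". */
    printf("PID inside the namespace:  %ld\n", (long)getpid());
    return 0;
}

int main(void)
{
    pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); exit(1); }

    /* From the parent's namespace the same child has an ordinary PID. */
    printf("PID seen from the parent: %ld\n", (long)pid);
    waitpid(pid, NULL, 0);
    return 0;
}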
NETNS namespace:
Logically another copy of the network stack, with its own routes, firewall rules, and network
devices.
It means each network namespace has its own network devices, IP addresses, IP routing tables,
/proc/net directory, port numbers...
It allows a container to have its own IP address, independent of that of the host.
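A small sketch to illustrate (requires root; uses the standard if_nameindex() libc call): after unshare(CLONE_NEWNET) the process only sees a brand-new loopback device.

#define _GNU_SOURCE
#include <net/if.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Print the network interfaces visible to the calling process. */
static void list_interfaces(const char *label)
{
    struct if_nameindex *ifs = if_nameindex();
    printf("%s:", label);
    for (struct if_nameindex *i = ifs; i != NULL && i->if_index != 0; i++)
        printf(" %s", i->if_name);
    printf("\n");
    if (ifs != NULL)
        if_freenameindex(ifs);
}

int main(void)
{
    list_interfaces("before unshare");   /* e.g. lo eth0 docker0 ... */

    if (unshare(CLONE_NEWNET) == -1) { perror("unshare"); exit(1); }

    list_interfaces("after unshare");    /* only the new namespace's lo */
    return 0;
}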
UTS namespace [UNIX Time Sharing]:
Historically the term "UTS" derives from the name of the structure passed to the uname() system
call: struct utsname.
{Initially, time sharing was invented to allow a large number of users to interact concurrently
with a single computer, sharing a computing resource among many users by means of
multiprogramming and multitasking.}
This mechanism isolates two system identifiers: nodename and domainname.
It allows each container to have its own separate identity {hostname and NIS domain name}.
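A hedged sketch (requires root; the hostname "container-1" is just an example): unshare the UTS namespace, change the nodename there, and the host keeps its own.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/utsname.h>
#include <unistd.h>

int main(void)
{
    /* Private copy of the UTS identifiers (nodename, domainname). */
    if (unshare(CLONE_NEWUTS) == -1) { perror("unshare"); exit(1); }

    const char *name = "container-1";             /* illustrative hostname */
    if (sethostname(name, strlen(name)) == -1) { perror("sethostname"); exit(1); }

    struct utsname uts;                           /* the struct that gave UTS its name */
    uname(&uts);
    printf("nodename inside the namespace: %s\n", uts.nodename);
    /* Running `hostname` on the host still shows the original name. */
    return 0;
}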
IPC namespace:
IPC (POSIX/SysV IPC) namespace provides isolation/separation of IPC resources:
Named shared memory segments.
Semaphores.
Message queues.
Why is this needed?
Shared memory segments are used to accelerate inter-process communication at memory speed,
rather than through pipes or through the network stack. They are commonly used by databases and
custom-built high-performance applications in the scientific computing and financial services
industries. If these types of applications are broken into multiple containers, you might need to
share IPC resources between the containers.
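A minimal sketch of that isolation (requires root; the key 0x1234 is arbitrary): a SysV shared memory segment created before unshare(CLONE_NEWIPC) is invisible afterwards.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x1234;                               /* arbitrary demo key */

    /* Create a 4 KiB SysV shared memory segment in the original namespace. */
    int id = shmget(key, 4096, IPC_CREAT | 0600);
    if (id == -1) { perror("shmget"); exit(1); }
    printf("segment id in the original IPC namespace: %d\n", id);

    /* Move into a brand-new, empty IPC namespace. */
    if (unshare(CLONE_NEWIPC) == -1) { perror("unshare"); exit(1); }

    /* Same key, no IPC_CREAT: the segment simply does not exist here. */
    if (shmget(key, 4096, 0600) == -1)
        perror("shmget in the new IPC namespace");    /* ENOENT expected */

    /* (A real program would remove the original segment with shmctl(IPC_RMID).) */
    return 0;
}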
User namespace:
The last namespace to be implemented, integrated into the mainline kernel starting from 3.8,
BUT still in technical preview in almost all Linux distros.
A process's user and group IDs can be different inside and outside a user namespace; that means
a process can have a normal unprivileged user ID outside a user namespace while at the same
time having a user ID of 0 inside the namespace, which, in terms of isolation, makes the user and
group ID number spaces totally separate.
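A small sketch of the mapping (no root needed on distros that allow unprivileged user namespaces; the single 1:1 mapping below is just an example): the process maps its own UID to 0 and becomes "root" inside the namespace only.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write a short string into a /proc file. */
static void write_file(const char *path, const char *data)
{
    int fd = open(path, O_WRONLY);
    if (fd == -1 || write(fd, data, strlen(data)) == -1) { perror(path); exit(1); }
    close(fd);
}

int main(void)
{
    uid_t outside_uid = geteuid();
    gid_t outside_gid = getegid();

    if (unshare(CLONE_NEWUSER) == -1) { perror("unshare"); exit(1); }

    char uid_map[64], gid_map[64];
    snprintf(uid_map, sizeof(uid_map), "0 %d 1", (int)outside_uid);
    snprintf(gid_map, sizeof(gid_map), "0 %d 1", (int)outside_gid);

    write_file("/proc/self/setgroups", "deny");       /* required before writing gid_map */
    write_file("/proc/self/uid_map", uid_map);
    write_file("/proc/self/gid_map", gid_map);

    /* Outside we were an ordinary user; inside we are UID 0. */
    printf("uid outside: %d, uid inside: %d\n", (int)outside_uid, (int)geteuid());
    return 0;
}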
cgroups:
Traditionally, all processes received a similar amount of system resources, and all tuning went
through the process niceness value.
A mechanism to organize processes hierarchically and distribute system resources — such as CPU
time, system memory, network bandwidth, or combinations of these resources — along
the hierarchy in a controlled and configurable manner.
Every process belongs to one and only one cgroup.
Initially, only the root cgroup exists to which all processes belong.
A new process is put in the cgroup that its parent process belongs to at the time.
Two parts of cgroups:
1. core: primarily responsible for hierarchically organizing processes.
2. controller: responsible for distributing or applying limits to a specific type of system resource.
blkio: sets limits on input/output access to and from block devices.
cpu: uses the CPU scheduler to provide cgroup tasks an access to the CPU.
cpuacct: creates automatic reports on CPU resources used by tasks in a cgroup.
cpuset: assigns individual CPUs (on a multicore system) and memory nodes to tasks in a cgroup.
devices: allows or denies access to devices for tasks in a cgroup.
freezer: suspends or resumes tasks in a cgroup.
memory: sets limits on memory used by tasks in a cgroup.
net_cls: tags network packets with a class identifier (classid) that allows the traffic controller to
identify packets originating from a particular cgroup task.
perf_event: enables monitoring cgroups with the perf tool.
hugetlb: allows the use of large virtual memory pages (huge pages) and enforces resource limits on these
pages.
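A hedged sketch of driving the memory controller by hand (requires root; assumes a cgroup v1 layout under /sys/fs/cgroup, and the cgroup name "demo" and the 64 MB limit are arbitrary): creating a directory creates the cgroup, writing a file sets the limit, and writing a PID moves the process in.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a short string into a cgroup control file. */
static void write_file(const char *path, const char *data)
{
    int fd = open(path, O_WRONLY);
    if (fd == -1 || write(fd, data, strlen(data)) == -1) { perror(path); exit(1); }
    close(fd);
}

int main(void)
{
    /* Creating a directory under a controller creates the cgroup. */
    mkdir("/sys/fs/cgroup/memory/demo", 0755);

    /* Cap the cgroup at 64 MB of memory. */
    write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "67108864");

    /* Writing a PID into "tasks" moves that process into the cgroup. */
    char pid[32];
    snprintf(pid, sizeof(pid), "%d", (int)getpid());
    write_file("/sys/fs/cgroup/memory/demo/tasks", pid);

    printf("process %s is now limited to 64 MB of memory\n", pid);
    return 0;
}

This is roughly what Docker does under the hood for every container it starts, one cgroup per container per controller.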
Union filesystem:
A stackable unification file system, which merges the contents of several directories (branches),
while keeping their physical content separate.
It builds file systems that operate by creating layers, allowing files and directories of separate file
systems {branches} to be transparently overlaid, forming a single coherent file system.
It allows any combination of read-only and read-write branches, as well as insertion and deletion
of branches anywhere in the tree.
AUFS [Another Union File System]:
Since v2 it stands for "advanced multi-layered unification filesystem".
It was the first storage driver in use with Docker, developed in 2006 as a complete rewrite of the
earlier UnionFS.
According to Docker:
AUFS is not included in the mainline (upstream) Linux kernel. It was rejected because of the
dense, unreadable, and uncommented code.
OverlayFS:
Merged in the Linux kernel in 2014, kernel version 3.18.
The natural successor to aufs.
Combines two filesystems - an 'upper' filesystem and a 'lower' filesystem.
When a name exists in both filesystems, the object in the 'upper' filesystem is visible while the
object in the 'lower' filesystem is either hidden or, in the case of directories, merged with the
'upper' object.
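A hedged sketch of an overlay mount with the raw mount(2) syscall (requires root; the four directories /lower, /upper, /work and /merged are illustrative and must already exist):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>

int main(void)
{
    /* lower = read-only layer(s), upper = writable layer, work = scratch dir. */
    const char *opts = "lowerdir=/lower,upperdir=/upper,workdir=/work";

    if (mount("overlay", "/merged", "overlay", 0, opts) == -1) {
        perror("mount overlay");
        exit(1);
    }

    /* /merged now shows the union: files in /upper hide same-named files in
     * /lower, writes land in /upper, and /lower is never modified. */
    printf("overlay mounted on /merged\n");
    return 0;
}

Docker's overlay storage drivers build container filesystems in essentially this way: the image layers act as the lower directories and the container's writable layer is the upper directory.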
DeviceMapper [storage backend ]:
Initially developed by Red Hat as an alternative to AUFS.
Based on snapshots.
Uses allocate-on-demand.
Container format:
Docker wraps all the previous components into an execution environment or driver called
{container format}.
Traditional container drivers: OpenVZ, systemd-nspawn, libvirt-lxc, libvirt-sandbox, qemu/kvm,
BSD Jails, Solaris Zones, and even good old chroot.
The new execution drivers: moving from libcontainer to runc & containerd.
