2. 2
Docker : Prerequisites
• You need some background knowledge to run Docker:
• IT side:
• GNU/Linux
• Networking
• Windows (if you want to run Docker on Windows Server)
• « Coding » side:
• Be able to understand code samples in Python, JavaScript and
other languages
• Be able to read JSON and YAML files
3. 3
Virtualization
• Provides an abstraction layer between hardware and OS
• Better usage of hardware
• Lower energy consumption
• Adds a way to run different OSes on the same server
• Reduces the time needed to deploy new servers (15 min on average)
4. 4
Containerization
• Provides isolation between processes on the same operating
system
• Reduces memory usage by sharing the same kernel
• Reduces the time needed to deploy services on servers (about 5
sec on average)
• Provides a way to do Infrastructure as Code (IaC)
6. 6
Docker : Operating Systems
• Docker runs on:
• Linux : https://docs.docker.com/engine/installation/linux/
• Windows : Docker for Windows (made by a French team!)
• macOS : Docker for Mac
7. 7
Docker in the Microsoft World
• Microsoft Windows Server 2016 added Windows
Containers based on Docker (and even Linux containers):
• https://blog.docker.com/2016/04/docker-windows-server-tp5/
8. 8
Docker : What is Docker
• 18/01/2013 : 1st commit
• 01/02/2013 : 1st online demo
• 21/03/2013 : 1st demo at PyCon US
• 23/03/2013 : Version 0.1
• 26/03/2013 : GitHub opening
• 23/04/2013 : Version 0.2
• 06/05/2013 : Version 0.3
• 03/06/2013 : Version 0.4
• 25/06/2013 : Linux Foundation
• 18/07/2013 : Version 0.5 (top, mount)
• 23/08/2013 : Version 0.6 (--privileged, LXC conf)
• 19/09/2013 : Red Hat partnership
• 29/10/2013 : company renamed to Docker
• 26/11/2013 : Version 0.7 (standard kernel, device-mapper, names, links)
• 21/01/2014 : raised $15M
• 04/02/2014 : Version 0.8 (Mac OS X, BTRFS experimental, ONBUILD)
• …
10. 10
Docker : Architecture Overview
• The core of Docker is the daemon, « dockerd »
• You control Docker through the client, « docker »
• « docker » communicates with the Docker daemon over a REST API
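• To see that REST API for yourself, here is a minimal sketch, assuming curl 7.40+ (for --unix-socket) and a daemon listening on the default local socket:
# Ask the daemon for its version over the Unix socket
sudo curl --unix-socket /var/run/docker.sock http://localhost/version
# The docker client performs the equivalent API call for you
docker version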
11. 11
Docker : Engine Architecture
• Docker Engine is based on standard technology:
• libvirt
• LXC
• systemd-nspawn
• libcontainer
• Docker relies heavily on Linux features such as
namespaces, netlink, netfilter and cgroups
12. 12
Docker : Engine Architecture - Mount
• Mount namespace (Linux 2.4.19)
• Manages the isolation of file systems between process groups:
• Mount points aren’t system-wide but namespace-specific
• Mount points can be inherited
• New root (chroot)
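• A minimal sketch of the mount namespace at work, assuming a Linux host with a recent util-linux (unshare) and root privileges; the mount made inside the namespace stays invisible to the host:
# Start a shell in a new mount namespace
sudo unshare --mount /bin/bash
# Inside the namespace: this tmpfs exists only here
mount -t tmpfs tmpfs /mnt
findmnt /mnt     # visible inside
exit
findmnt /mnt     # nothing: the host never saw the mount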
13. 13
Docker : Engine Architecture - PID
• PID namespace (Linux 2.6.24)
• Manages PID isolation between processes:
• A PID 1, init-like, per namespace
• Each namespace gets its own PID numbering (host isolation)
• A process in one namespace can’t make syscalls targeting processes
in another PID namespace
• Manages the pseudo-filesystems (e.g. /proc) seen by each PID namespace
• Each process owns multiple PIDs: one inside the namespace, one outside
(the process as seen by the host), and more if namespaces are nested
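• A minimal sketch of the PID namespace, assuming util-linux is available; the shell started below believes it is PID 1 and only sees its own processes:
sudo unshare --pid --fork --mount-proc /bin/bash
echo $$     # prints 1 inside the namespace
ps aux      # only the processes of this namespace
exit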
14. 14
Docker : Engine Architecture - Net
• Net namespace (Linux 2.6.19-2.6.24)
• Manages network isolation. Each namespace owns its own:
• Interfaces
• Ports
• Routing table
• Firewall rules (iptables)
• /proc/net folder
• INADDR_ANY (0.0.0.0)
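• A minimal sketch using the iproute2 tools (a named namespace created by hand, not one managed by Docker); the new namespace starts with only a loopback interface and an empty routing table:
sudo ip netns add demo
sudo ip netns exec demo ip addr    # only lo, and it is down
sudo ip netns exec demo ip route   # empty routing table
sudo ip netns del demo             # clean up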
15. 15
Docker : Engine Architecture - User
• User namespace (Linux 2.6.23-3.8)
• Manages the isolation of users and groups:
• Splits rights between namespaces
• Allows giving a process privileges inside a namespace without
granting it rights on the host
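• A minimal sketch, assuming unprivileged user namespaces are enabled on your distribution; you become « root » inside the namespace without gaining any right on the host:
unshare --user --map-root-user /bin/bash
id               # uid=0(root), but only inside the namespace
touch /etc/test  # still denied: no real root rights on the host
exit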
17. 17
Docker : Engine Architecture - UTS
• UTS namespace (Linux 2.6.19)
• Manages the isolation of hostname and domain name.
• UTS comes from the "utsname" structure returned by the uname() syscall. UTS
stands for "UNIX Time-sharing System".
18. 18
Docker : Terms
• Images
• Images are read-only templates. We build images.
• Registry
• A registry holds images. It can be public or private. It’s the
distribution part.
• Docker Hub is a public registry.
• Containers
• Containers are similar to a folder. Containers are built
from images. This is the run part of Docker.
19. 19
Docker : Prepare your workstation for Docker
• Linux :
• Be sure to run a Docker-compatible and supported Linux
kernel
• 3.10 today; use the uname -r command to get your running kernel
• You need to install « docker-compose » manually
• Windows :
• Windows 10 64-bit Pro, Enterprise or Education with at
least build 10586. You need to enable Hyper-V
• Mac :
• A Mac from 2010 or later with an Intel CPU supporting MMU virtualization and EPT
• OS X 10.10.3 or newer
• 4 GB of RAM
20. 20
Docker installation
Linux
• For Docker on Linux, follow the official guide for your
Linux distribution:
• https://docs.docker.com/engine/installation/linux/
• For our labs it’s better to install it in a virtual
machine, but it’s not mandatory
• Take care of your SELinux/AppArmor configuration!
• A minimal install sketch is shown below.
Windows & Mac
• Just download Docker for Mac or Docker for Windows
and follow the installer assistant
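• For the Linux column, a minimal sketch using Docker’s convenience script (fine for a lab, not for production); the exact packages differ per distribution, so prefer the official guide above:
# Install the engine with the convenience script
curl -fsSL https://get.docker.com | sh
# Start and enable the daemon on a systemd-based distribution
sudo systemctl start docker
sudo systemctl enable docker
# Quick sanity check
sudo docker run hello-world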
22. 22
Linux version vs Mac or Windows version
Linux
• On Linux you run the Docker engine directly
• Docker and your containers use the same kernel as the host
Windows or Mac
• On macOS :
• HyperKit is used to run a Linux virtual machine:
• github.com/docker/hyperkit
• On Windows :
• The Hyper-V feature is used to run a virtual machine
23. 23
Docker client considerations
• For testing and developing, using Docker directly
on your workstation is not a problem. But if you
want to run Docker in production, you will need to
run the client on your management computer and
the Docker daemon on your Docker host or in a
virtual machine.
24. 24
Configure the Docker Daemon
• The daemon is standard, so you can interact with it
as usual using the LSB syntax:
• service docker start/stop/…
• systemctl start docker
• To configure your Docker daemon, you have two
options:
• Edit the /etc/default/docker file
• Create a docker.service.d directory in /etc/systemd/system/ (see
the next slide)
25. 25
Configure the daemon for remote management
• On a Red Hat system using systemd we need to:
• Create a folder like this:
• mkdir /etc/systemd/system/docker.service.d
• Create a docker.conf file in this folder:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -D --tls=true --tlscert=/var/docker/server.pem
--tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376
• -D is for debug mode
• --tls enables TLS mode, so set your certificate and your key using --tlscert
and --tlskey
• -H tcp://192.168.59.3:2376 makes the daemon listen on port 2376
for incoming Docker connections
• Reload systemd: sudo systemctl daemon-reload
• Restart the daemon: sudo systemctl restart docker
• By default the Docker daemon listens on
unix:///var/run/docker.sock, so we need to open it to remote
connections; using the socket is not really recommended for remote access!
26. 26
Docker Engine Logs
• To get the logs of the Docker daemon you can use
your standard logging tools.
• On Red Hat you can get the logs of your Docker instance with:
• journalctl -u docker
27. 27
How to connect the docker client to a remote Docker daemon
• For a single command it’s probably easier to use
the -H option:
• docker -H tcp://192.168.59.3:2376 info
• Or you can configure your docker client by setting
DOCKER_HOST in your environment:
• export DOCKER_HOST=tcp://192.168.59.3:2376
28. 28
Configure the Docker Daemon (HTTP proxy)
• On a Red Hat system using systemd we need to:
• Create the folder (if you haven’t created it before):
• mkdir /etc/systemd/system/docker.service.d
• Create a file http-proxy.conf in this folder
• Edit this file with the following content:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
"HTTPS_PROXY=https://secureproxy.example.com:80/"
"NO_PROXY=localhost,127.0.0.1"
• Reload the Docker daemon using « systemctl daemon-reload »
• Check that your configuration is applied:
systemctl show --property=Environment docker
• You should get:
Environment=HTTP_PROXY=http://proxy.example.com:80/
29. 29
Docker Compose installation
• Installing docker-compose on Linux is pretty easy:
• Copy the curl line from:
https://docs.docker.com/compose/install/
• curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
• chmod +x /usr/local/bin/docker-compose
• We will see what docker-compose is and why we
need it later on.
30. 30
Docker – Base command
• Docker is written in Go and ships as a single binary, so there is
one command to interact with Docker:
• docker (yes ;))
31. 31
Docker – Base commands
• The docker command syntax is similar to the systemctl
command:
• docker <keyword> <command> [options]
• Keywords are for example:
• attach
• run
• start/stop
• images
• Commands are for example:
• ls
32. 32
Docker : Exercise 1 - Installation
• We have seen how to install Docker, now install your
Docker host!
• https://docs.docker.com/engine/installation/linux/
• https://docs.docker.com/engine/installation/mac/
• https://docs.docker.com/engine/installation/windows/
• Don’t forget to install docker-compose!
33. 33
Docker : Exercise 2 – Base Command
• Get your docker version
docker version
34. 34
Docker : Exercise 3 – Base Command
• Get your docker-compose version
• docker-compose version
35. 35
Docker : Exercise 4 and 5 – Base Command
• Get a base image of your choice (just download it!)
docker pull centos
• To get the list of the « tags » available you can use curl (see the sketch below)
• You may notice centos:latest
• You can get the full list of available tags on the Docker Hub
• Try to get an older version like 5.11:
docker pull centos:5.11
• This can be useful for older binaries and testing
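• A possible sketch for the curl part; the Docker Hub API endpoint below is an assumption and may change over time:
# List the tags of the official centos image on the Docker Hub
curl -s https://hub.docker.com/v2/repositories/library/centos/tags/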
36. 36
Docker : Exercise 6 – Base Command
• Run a Hello World
• docker run hello-world
• As you can see, we can use run without pulling the
image first; this is not recommended for big images
37. 37
Docker : Exercise 7 – Base Command
• Run a base image interactively and get information inside the
container
docker run -it centos /bin/bash
• The -i stands for interactive and -t assigns a pseudo-TTY
• Using yum, install the net-tools package
• yum update && yum install net-tools
• Now that net-tools is installed, get your container IP (see below)
• Yes, your container has its own IP on another network; we will see
this point later…
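• Two possible ways to read that IP, one from inside the container and one from the host (the --format template assumes the default bridge network):
# Inside the container, once net-tools is installed
ifconfig eth0
# From the host, using the name or ID shown by docker ps
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id>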
38. 38
Docker : Exercise 8 – Base command
• Now that we have downloaded images and launched
containers, it’s time to clean up:
• docker ps -a
• docker rm $(docker ps -a -q)
• (or one by one ;))
• docker images
• docker rmi $(docker images -a -q)
39. 39
Docker : Images
• In Docker we use images, but images are more
complex than you might think.
• Docker images are built on a union filesystem (AUFS, BTRFS, …)
• Images are an assembly of multiple layers
41. 41
Docker : Images
• Building images from multiple layers provides advantages:
• You can update a layer and rebuild it without rebuilding
the whole image.
• You only put the delta layers into your image
42. 42
Docker : Images - Exercise
• Pull the ubuntu image:
• docker pull ubuntu:15.04
• Create a file named Dockerfile
• Write:
• FROM ubuntu:15.04
• RUN echo "Hello world" > /tmp/newfile
• Save and exit
• Type:
• docker build -t changed-ubuntu .
43. 43
Docker : Images – Exercise
• At build time we can see:
• Sending build context to Docker daemon 2.048 kB
• Step 1 : FROM ubuntu:15.04 ---> d1b55fd07600
• Step 2 : RUN echo "Hello world" > /tmp/newfile
• ---> Running in a74b0ffafa39
• ---> 45e5e22e1161
• Removing intermediate container a74b0ffafa39
• Successfully built 45e5e22e1161
• As you can see, Docker created a R/W layer, executed the
command, and then created a snapshot of the delta
layer on top of ubuntu
44. 44
Docker : Images – Exercise
• Running the docker images command you can see
your new image:
• REPOSITORY TAG IMAGE ID CREATED SIZE
• changed-ubuntu latest 45e5e22e1161 2 minutes ago 131.3 MB
• ubuntu 15.04 d1b55fd07600 7 months ago 131.3 MB
• Run the docker history changed-ubuntu
command:
• You can see all the layers of your new image!
45. 45
Docker : Images – Exercise
In reality, we only linked the
existing layers of the Ubuntu
image with our new layer.
Thanks to the union filesystem
we don’t need to copy the whole
layers multiple times to get a
working image.
46. 46
Docker : Networking
• Docker provides networking to containers by using
a default network. This network is a bridge and lets
containers communicate with each other.
• The official doc is here:
https://docs.docker.com/engine/tutorials/networkingcontainers/
47. 47
Docker : Networking – Exercise 1
• Let’s try to create a new network for our containers:
• docker network create -d bridge ournewbridge
• docker network ls will show us our new bridge
48. 48
Docker : Networking
• Using docker network inspect you can get more
information about your new network (see below)
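• For example, inspecting the bridge we created in Exercise 1 shows its subnet, gateway and connected containers:
docker network inspect ournewbridge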
50. 50
Docker : Networking – Exercise 2
• Using the inspect command we can find the
container2 IP address:
• docker network inspect isolated_nw
• Create a third container:
• docker run --network=isolated_nw --ip=172.25.3.3 -itd --
name=container3 busybox
• Using the attach command, connect to container2:
• docker attach container2
• Using ifconfig you can see that container2 is connected to 2
networks, on eth0 and eth1
51. 51
Docker : Networking – Exercise 2
• Try to ping container1 and container3 using the
container names
• You can use the names on the isolated network we created,
but not on the default one. That’s by design: service
discovery isn’t enabled on the default bridge.
(Diagram: our host running the default docker0 bridge and the isolated_nw bridge)
52. 52
Docker : Networking
• To go deeper into more complex
networking with Docker, see:
• https://docs.docker.com/engine/userguide/networking/
53. 53
Docker Volumes
• For persistent storage we need Docker volumes
• Docker volumes are mount points between your Docker
host and your containers
• In the Docker documentation, see:
• https://docs.docker.com/engine/tutorials/dockervolumes/
• As you can see, there are multiple ways to add storage to our
containers
54. 54
Docker Volumes
• Data volumes:
• Using a create or run command we can create persistent
storage for containers:
• docker run -d -P --name container -v /<YourVolumeName> <image>
<command>
• This way the mount point is controlled by the Docker
engine
• You can see where your files are stored with:
docker inspect container | more
• You will get a JSON output; search for the « Mounts » node:
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/YourVolumeName",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
55. 55
Docker Volumes
• Data volumes can also be mapped to a host path: in our last
command we used -v /YourVolumeName, but
we can also use:
• docker run -d -P --name container -v
/YourVolumeNameOnHost:/<YourVolumeName> <image> <command>
• Then you can push files from your host into the
container directly
56. 56
Docker Volumes
• You can also use a container to store your data.
• Docker recommends using the same image as your
« application » container to lower the impact on your host.
• If you want a MySQL-like container you can do the following:
• docker create -v /dbdata --name dbstore mariadb /bin/true
• docker run -d --volumes-from dbstore --name db1 mariadb
• And then launch another one:
• docker run -d --volumes-from dbstore --name db2 mariadb
• Docker also advises us:
• “However, multiple containers writing to a single shared
volume can cause data corruption. Make sure your
applications are designed to write to shared data stores.”
57. 57
Docker Volumes
• In our previous SSH sample we could add
something to our Dockerfile to persist data in our
containers
• Just before the CMD line add the following:
• VOLUME /data/ourfiles
• Rebuild your image and launch the containers like this:
• docker create -v /data/ourfiles --name sshdata eg_sshd
• Then:
• docker run -d --volumes-from sshdata --name ssh1 -p 2221:22
eg_sshd
• docker run -d --volumes-from sshdata --name ssh2 -p 2222:22
eg_sshd
• Try to create a file in ssh1 and get it from ssh2
58. 58
Dockerfiles
• Get the Dockerfile of a LAMP stack:
• https://hub.docker.com/r/nickistre/centos-lamp/~/dockerfile/
• A Dockerfile is a plain-text file with all the information needed to
create an image using the « docker build » command.
• In the Dockerfile you just downloaded you can see:
• The FROM keyword, used for base image selection
• The RUN keyword, used to execute commands during the build process
• The ADD keyword, used to copy files from your computer into the
image
• A minimal example is shown after this slide.
• You can get the whole keyword reference from the Docker
website here:
• https://docs.docker.com/engine/reference/builder/
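• A minimal Dockerfile sketch using those keywords (the index.html file and the httpd package are hypothetical choices, not taken from the LAMP image above):
# Base image selection
FROM centos:7
# Command executed during the build
RUN yum install -y httpd
# Copy a local file from the build context into the image
ADD index.html /var/www/html/index.html
# Default command when a container starts
CMD ["httpd", "-DFOREGROUND"]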
59. 59
Dockerize an Application
• The official Docker documentation has samples to help
understand how we can dockerize an app.
• Get the sample here:
• https://docs.docker.com/engine/examples/running_ssh_service/
• Copy the Dockerfile example into a new folder and then:
• docker build -t eg_sshd .
60. 60
Dockerize an Application
• After the build process we can run our image in a
container:
• docker run -d -P --name test_sshd eg_sshd
• Let’s find where we can reach our SSH
service:
• docker port test_sshd 22
• Connect to your container:
• ssh [email protected] -p <theportfromcommand>
61. 61
Dockerize an Application
• Create a file in your new SSH container:
• touch myfile.txt
• Exit your SSH container, then change the root
password on line 10 (« RUN echo
'root:screencast' | chpasswd ») and rebuild your
image.
• Reconnect to your container: what do you see?
• A container’s writable layer is not persistent: every time you
rebuild and recreate a container, you lose your data!
62. 62
Docker Compose
• For running complex architectures, Dockerfiles alone are
not very handy. With Docker you can also use
docker-compose
• First install the docker-compose command from here:
• https://docs.docker.com/compose/install/
• Compose is a three-step process:
• Define your app’s environment with a Dockerfile so it can
be reproduced anywhere.
• Define the services that make up your app in docker-
compose.yml so they can be run together in an isolated
environment.
• Lastly, run docker-compose up and Compose will start and
run your entire app.
63. 63
Docker Compose
• To better understand the docker-compose way, go
here:
• https://docs.docker.com/compose/gettingstarted/
• We will copy a simple Python application and create a
Redis database
• After that we can create a WordPress stack
following this:
• https://docs.docker.com/compose/wordpress/
64. 64
Docker Compose
• A docker-compose.yml
looks like the sketch below
• You need the keyword
« services » to declare
your web service and
redis
• As with docker run, you can
add information like links
between containers
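• A minimal sketch of such a file, loosely based on the Python + Redis example from the getting-started guide (the build context and port are assumptions):
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    links:
      - redis
  redis:
    image: redis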
65. 65
Docker Registry
• Docker stores images in a registry
• When we used the docker pull or docker run commands we got
the image from the Docker Hub. This registry is public and
contains images from official companies like Canonical, Red
Hat, Oracle or Microsoft, and also from external contributors
• We can create our own shared registry easily:
• https://docs.docker.com/registry/deploying/
• First we need to create a container:
• docker run -d -p 5000:5000 --restart=always --name
registry registry:2
• Then add a first image to our registry:
• docker pull ubuntu && docker tag ubuntu
localhost:5000/ubuntu
66. 66
Docker Registry
• To add the image to our registry we need to push it:
• docker push localhost:5000/ubuntu
• As the registry is a container, we can access it
remotely like any other container, on the specified port
• For a production registry, you must add encryption
as mentioned here:
• https://docs.docker.com/registry/deploying/#/running-a-domain-registry
67. 67
Docker Clustering
• Running a standalone host is like running a
virtualization platform with a single host.
• Using Docker Swarm, we can clusterize our servers
and achieve a highly available Docker solution
• https://docs.docker.com/swarm/install-manual/
• For now you need a Linux machine with VirtualBox
installed to test Swarm
68. 68
Docker Orchestration
• Running Docker in production without orchestration
isn’t really recommended.
• You can try Kubernetes, Rancher, Mesos or Swarm:
• Kubernetes is from Google, with all of Google’s experience
• http://kubernetes.io/
• Rancher is a young company (Rancher Labs) which provides
the Rancher server to achieve a really fast installation
using containers
• http://rancher.com/
• Mesos, from Apache
• http://mesos.apache.org/
• Swarm mode, from Docker
• https://docs.docker.com/engine/swarm/