Managing the XiVO project with Docker

“We did packages, now we do containers.”
Anonymous developer

What is Docker about?

We are currently using more and more Docker containers in our projects. With Docker, we gain more flexibility and more control over our various packages, and adding a new functionality to a project becomes easy. Docker looks a little like a virtual machine (VM), since it creates, like one, an environment isolated from the rest of the operating system. However, since we are working in a Linux environment, it is much lighter: Docker does not duplicate kernel modules, but instead shares the kernel resources between the containers and the host machine.
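This kernel sharing is easy to see for yourself. In the following minimal check (assuming the debian:stretch image is available locally or from Docker Hub), the container reports the same kernel release as its host, because only user space is isolated:

uname -r                                  # kernel release on the host
docker run --rm debian:stretch uname -r   # the very same release inside a container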


The main differences between Docker containers and virtual machines

Images and containers

Two words are essential when discussing Docker: image and container. The container is the running part of Docker: it is where the environment is configured and where the code runs. Containers are lightweight and easy to start, update and remove. They are a bit like the instantiation of an object in an object-oriented programming language, except that here the object is a small operating system.
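To illustrate the analogy, here is a minimal sketch (the container names web1 and web2 are illustrative) where two independent containers are "instantiated" from the same nginx image:

docker run -d --name web1 nginx   # first instance
docker run -d --name web2 nginx   # second, fully independent instance
docker ps                         # both containers appear, each with its own state
docker rm -f web2                 # removing an instance leaves the image untouched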


There are many Docker containers running around


Each container is based on an image, which would be the "object" to be instantiated in our previous analogy. Images define the first elements we want to load into the container: the Linux distribution and the service dependencies. We use containers to wrap our services one by one: in the web interface there is a container for the database, another for the config-mgt, another for the nginx proxy server, and so on, and each of these needs a different image containing the appropriate service modules. Thankfully, most of them already have an official image on the Docker Hub, the online open-source platform which hosts these images, and those are a great starting point for putting a service into Docker.


The official image of the postgres database, with some popular images added by other users
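Getting such an official image is a one-line operation; for instance, for the postgres image shown above (the version tag is illustrative):

docker pull postgres:9.4   # download the official image from Docker Hub
docker images              # the image is now available locally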

Customizing Docker

If we start a container based on one of these official images, the nginx image for instance, we will only find inside the basic configuration needed to run an nginx server. So we need to add our own configuration files. One way to do this is to customize the image using a Dockerfile. A Dockerfile allows us to create a new image based on an existing one, to which we add all the specific elements our own service requires. This image can be pushed online to Docker Hub, and then we can start a container from it on any computer where the Docker environment is installed, which is the case for our virtual machines.

FROM debian:stretch

RUN apt-get -yqq update \
    && apt-get -yqq install python-pip \
                            git \
                            libpq-dev \
                            python-dev \
                            libyaml-dev

WORKDIR /usr/src/dao

ADD requirements.txt .
ADD test-requirements.txt .
RUN pip install -r requirements.txt
RUN pip install -r test-requirements.txt

# Special trick to install Debian version of sqlalchemy for the tests:
# - sqlalchemy is kept in requirements for development (for IDEs)
# - but then we force installation of sqlalchemy from debian repo
RUN pip uninstall -y sqlalchemy
RUN apt-get update \
    && apt-get -yqq install python-sqlalchemy

ADD . /usr/src/dao

CMD nosetests xivo_dao

A Dockerfile which defines an image for the Database Access Manager based on Debian
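As a sketch of the workflow around such a Dockerfile (the repository name xivoxc/xivo-dao is illustrative, not necessarily the one we use):

docker build -t xivoxc/xivo-dao .   # create the image from the Dockerfile
docker push xivoxc/xivo-dao         # publish it on Docker Hub
# then, on any machine where Docker is installed:
docker run --rm xivoxc/xivo-dao     # run the tests in a throwaway container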


These images are easy to update: Docker Hub works with a system of layers and tags similar to Git, so publishing a new version of an image or returning to a previous one is pretty simple. Official images are regularly updated the same way.
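In practice, publishing a new version or going back boils down to tag manipulation, as in this sketch (the tag names are illustrative):

docker tag xivoxc/xivo-dao xivoxc/xivo-dao:1.1   # stamp the current build
docker push xivoxc/xivo-dao:1.1                  # publish the new version
docker pull xivoxc/xivo-dao:1.0                  # or return to a previous one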


Our official user on Docker Hub, xivoxc, has published no fewer than 39 different images so far


But in most cases, the container also requires some files from the host machine it runs on. Docker has several ways to mount them inside the container, for instance as a volume. This can be done when we start the container from an image, either by passing them as parameters on the command line or by using a compose file, a YAML file usually named docker-compose.yml. Within this compose file, we can set multiple options, among them the containers we want to start, the ports they listen on, and the files to be mounted from the host machine. With a single command, it deploys all the services we need, and it can easily be shared. Various compose files are used in the XiVO project:

xivo_stats:
  image: "xivoxc/xivo-full-stats:${XIVOCC_TAG}.${XIVOCC_DIST}"

  links:
    - pgxivocc:db

  environment:
    - JAVA_OPTS=-Xms256m -Xmx2048m
    - XIVO_HOST

  volumes_from:
    - timezone
    - xivocclogs

  restart: always

pack_reporting:
  image: "xivoxc/pack-reporting:${XIVOCC_TAG}.${XIVOCC_DIST}"

  links:
    - pgxivocc:reporting

  environment:
    - WEEKS_TO_KEEP

  volumes_from:
    - timezone
    - xivocclogs

  restart: always

elasticsearch:
  image: elasticsearch:1.7.2

  ports:
    - "9200:9200"
    - "9300:9300"

  volumes_from:
    - timezone

  restart: always

kibana_volumes:
  image: xivoxc/kibana_volume:${XIVOCC_TAG}.${XIVOCC_DIST}

  restart: always

nginx:
  image: xivoxc/xivoxc_nginx:${XIVOCC_TAG}.${XIVOCC_DIST}

Part of the compose file used for the xivocc. We can see the instructions to create several containers
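For comparison, the elasticsearch service above could be started by hand, with the ports and mounts passed as command-line parameters (the /etc/localtime mount is an illustrative stand-in for what the timezone container provides through volumes_from):

docker run -d --name elasticsearch \
    -p 9200:9200 -p 9300:9300 \
    -v /etc/localtime:/etc/localtime:ro \
    --restart always \
    elasticsearch:1.7.2

The compose file expresses exactly this, but for all the services at once, in a form that can be versioned and shared.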

Working with Docker

So it becomes far easier to install all of those services on a new machine. Furthermore, it is also easier to work with them. The compose file comes with compose commands which allow us to stop or restart any of the containers it launched. It is possible to step into a running container and make some tests inside without risking anything: even if something goes wrong, you can just remove the container and create a new one. Docker also has commands to read the logs of a container directly from the host machine, which makes their management easier. Moreover, divided among many Docker containers, the application becomes less monolithic and easier to evolve.
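A few of the compose commands in question, taking the nginx service from the file above as an example (service names depend on the compose file):

docker-compose up -d            # start every container defined in the file
docker-compose restart nginx    # restart a single service
docker-compose exec nginx bash  # step inside the running container for some tests
docker-compose logs nginx       # read its logs from the host machine
docker-compose rm -f nginx      # went wrong? remove the container...
docker-compose up -d nginx      # ...and recreate a fresh one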


A little diagram to summarize the way we use Docker on XiVO

With its dynamic, open-source and widely-used technology, Docker has become an important tool in our project. There is no doubt that more and more of our components will be packaged with Docker in the future.

Thank you for reading. If you want to go further, here are some links to the documentation:
