A critical view on Docker

Mon 22 June 2015

TL;DR Before you start reading this, I want to make it clear that I absolutely don't hate Docker or the application container idea in general, at all! I really see containers becoming a new way of doing things in addition to the existing technologies. In fact, I use containers myself more and more.

Currently I'm using Docker for local development because it's so easy to get your environment up and running in just a few seconds. But of course, that is "local" development. Things start to get interesting when you want to deploy over multiple Docker hosts in a production environment.

At the "Pragmatic Docker Day" a lot of people showed up who were either using Docker (some even in production) or experimenting with it. Others were completely new to Docker, so there was a good mix.

During the Open Spaces in the afternoon, a group of us decided to stay outside (the weather was really too nice to stay inside) and started discussing the talks that were given in the morning sessions. This evolved into a rather good discussion about everyone's personal view on the current state of containers and what they might bring in the future. People chimed in and added their opinions to the conversation.

That inspired me to write about the following items, which are a combination of things that came up during those conversations and my own view on the current state of Docker.

The Dockerfile

A lot of people are now using some configuration management tool and have invested quite some time in their tool of choice to deploy and manage the state of their infrastructure. Docker provides the Dockerfile to build and configure your container images, which feels a bit like a "dirty" hack given the nice features that config management tools already provide.

Quite a few people are using their config management tool to build their container images. I, for instance, upload my Ansible playbooks into the image during the build and then run them. This allows me to reuse existing work that I know works, and I can use it for both containers and non-containers.
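
A minimal sketch of that approach, assuming a CentOS base image and a hypothetical playbooks/ directory next to the Dockerfile, could look like this:

# hypothetical Dockerfile: run an Ansible playbook while building the image
FROM centos:7
RUN yum -y install epel-release && yum -y install ansible
COPY playbooks/ /tmp/playbooks/
RUN ansible-playbook -i localhost, -c local /tmp/playbooks/site.yml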

It would have been nice if Docker somehow provided a way to integrate the existing configuration management tools a bit better. Vagrant does a better job here.

As far as I know you also can't use variables (think Puppet Hiera or an Ansible inventory) inside your Dockerfile, something configuration management tools happen to be very good at.

Bash scripting

When building more complex Docker images you notice that a lot of Bash scripting is used to prep the image and make it do what you want: passing variables into configuration files, creating users, preparing storage, configuring and starting services, and so on. While Bash is not necessarily a bad thing, it all feels like a workaround for things that are so simple when not using containers.
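
A typical example is an entrypoint script that injects environment variables into a configuration file before starting the service (the variable, file and binary names below are made up):

#!/bin/bash
# hypothetical entrypoint: fill in the config from the environment, then start the service
sed -i "s/__DB_HOST__/${DB_HOST}/" /etc/myapp/myapp.conf
exec myapp --config /etc/myapp/myapp.conf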

Dev vs Ops all over again?

The people I talked to agreed that Docker is rather developer focused and that it allows developers to build images containing a lot of stuff you might have no control over. It abstracts away possible issues. The container works, so all is well... right?

I believe that when you start building and using containers, the DevOps aspect is more important than ever. If, for instance, a CVE is found in a library or service that has been included in the container image, you'll need to update your base image and roll the fix out through your deployment chain. To make this possible, all stakeholders must know what is included in which version of the Docker image. Needless to say, this needs both ops and devs working together. I don't think there's a need for the "separation of concerns" that Docker likes to advocate. Haven't we learned that creating silos isn't the best idea?
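
In practice that rollout boils down to rebuilding the patched base image and then everything built on top of it (the registry name and tags below are made up):

# hypothetical sketch: rebuild and push the patched base image, then every image built FROM it
docker build -t registry.example.com/base:patched ./base
docker push registry.example.com/base:patched
docker build -t registry.example.com/app:patched ./app   # this Dockerfile starts FROM the base image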

More complexity

Everything about the way you used to work becomes different once you start using containers. The fact that you can't just ssh into something or let your configuration management tool make some changes feels awkward.

Networking
By default Docker creates a Linux bridge on the host and adds an interface to it for each container that gets started. It then adjusts the iptables nat table to pass traffic entering a port on the host to the exposed port inside the container.
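
You can see this on any Docker host (the container name and ports below are arbitrary):

docker run -d -p 8080:80 --name web nginx
brctl show docker0              # the Linux bridge, with one veth interface per running container
iptables -t nat -L DOCKER -n    # the NAT rule mapping host port 8080 to port 80 in the container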

For a more advanced network configuration you need to look at tools like Weave, Flannel, etc., which require more research to see what fits your specific use case best.

Recently I was wondering whether it was possible to have multiple NICs inside a container, because I wanted to test Ansible playbooks that configure multiple NICs. Currently it's not possible, but there's an open ticket on GitHub (https://github.com/docker/docker/issues/1824) which doesn't give me much hope.

Service discovery

Once you go beyond playing with containers on your laptop and start using multiple Docker hosts to scale your applications, you need a way to know where the specific service you want to connect to is running and on which port. You probably don't want to manually define ports per container on each host, because that becomes tedious quite fast. This is where tools like Consul, etcd, etc. come in. Again, some extra tooling and complexity.
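
As a small illustration, with a Consul agent running locally you can simply ask it where a service lives (the service name is just an example):

curl -s http://localhost:8500/v1/catalog/service/redis
# returns JSON listing the node address and service port of every registered redis instance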

Storage
You will always have something that needs persistence, and when you do, you'll need storage. Now, when using containers the Docker way, you are expected to put as much as possible inside the container image. But some things, like log files, configuration files and application-generated data, are a moving target.

Docker provides volumes to pass storage from the host into a container: basically you map a path on the host to a path inside the container. But this raises questions like: how do I make this data available wherever the container gets started, how do I make sure it is secure, how do I manage all these volumes, and what is the best way to share them among different hosts?
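
The mapping itself is simple enough (the paths below are made up):

docker run -d -v /srv/web/logs:/var/log/nginx --name web nginx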

One way to consolidate your volumes is to use "data-only" containers. This means that you create a container with some volumes attached to it and then reference those volumes from other containers (--volumes-from), so they all use a central place to store data. This works, but it has some drawbacks IMHO.
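
A minimal sketch of the pattern (the names, image and path are hypothetical):

docker create -v /data --name app-data busybox true        # data-only container, never actually runs
docker run -d --volumes-from app-data --name app myapp     # the real container uses app-data's volume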

This container just needs to exist (it doesn't even need to be running), and as long as it or a container that references its volumes exists, the volumes are kept on the system. Now, if you accidentally delete the container holding the volumes, or you delete the last container referencing them, you lose all your data. With containers coming and going, it can become tricky to keep track of this, and making mistakes at this level has serious consequences.


Docker images

One of the "advantages" Docker brings is that you can pull images from the Docker Hub, and from what I have read this is in most cases encouraged. Now, everyone I know who runs a virtualization platform would never pull a random virtual appliance and run it without feeling dirty. When using a cloud platform, chances are you are using prebuilt images to deploy new instances from. This is analogous to Docker images, with the difference that people who care about their infrastructure build their own images.

Most Linux distributions now provide an "official" Docker image. These are the so-called "trusted" images, which I think are fine to use as a base image for everything else. But when I search the Docker Hub for Redis I get 1546 results. Do you trust all of them, and would you use them in your environment?
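
A quick look makes the difference obvious:

docker search redis | head    # hundreds of user-contributed images, only one marked as official
docker pull redis             # pulls library/redis, the official image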

What could possibly go wrong with pulling a random OpenVPN container, right?

This is also an interesting read: https://titanous.com/posts/docker-insecurity

User namespacing

Currently there's no user namespacing, which means that UIDs are not remapped: a process running with a certain UID inside the container has the same UID, and thus the same permissions, on the host (on bind-mounted volumes, for example). This is one of the reasons why you should not run processes as the root user inside containers (or outside them, for that matter). But even then you need to be careful with what you're doing.
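
You can see the missing mapping on any Docker host (the container name is arbitrary):

docker run -d --name uid-test busybox sleep 600
ps -o user,pid,comm -C sleep    # on the host, the container's sleep process shows up as root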

Containers, containers, containers..

When you run more and more stuff in containers, you'll end up with a few hundred, a few thousand or even more containers. If you're lucky they all share the same base image. And even if they do, you still need to update them with fixes and security patches, which results in newer base images. At that point all your existing containers should be rebuilt and redeployed. Welcome to the immutable world...

So the "problem" just shifts up a layer, a layer where the developers have more control over what gets added. What do you do when the next OpenSSL bug pops up? Do you know which container has which OpenSSL version?
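
A hedged example of the kind of audit you end up doing (it assumes the containers ship an openssl binary in their PATH):

for c in $(docker ps -q); do
    echo -n "$c: "
    docker exec "$c" openssl version 2>/dev/null || echo "no openssl found"
done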

Minimal OSes

Everyone seems to be building these minimal OSes these days: CoreOS, Project Atomic, RancherOS, etc. The idea is that updating the base OS is a breeze (reboot into the other A/B partition, and so on) and that all the services you need run inside containers.

That's all nice, but people with a sysadmin background will quickly start asking questions like: can I do software RAID? Can I add my own monitoring on this host? Can I integrate it with my storage setup? And so on.


What I wanted to point out is that when you decide to start using containers, keep in mind that you'll need to change your mindset and be ready to learn quite a few new ways of doing things.

While Docker is still young and has some shortcomings, I really enjoy working with it on my laptop and using it for testing/CI purposes. It's also exciting (and scary at the same time) to see how fast all of this evolves.

I've been writing this post on and off for some weeks, and some recent announcements at DockerCon might address some of the issues above. Anyway, if you've read this far, thank you, and good luck with all your container endeavors.

Ansible and Opennebula

Sun 12 October 2014

Recently we decided to deploy a private cloud to replace our RHEV setup. The reasoning behind this will be covered in another blog post, but the main reason was the higher level of automation we could achieve with Opennebula compared to RHEV. In this post I would like to talk about how we used Ansible to help us with the setup of Opennebula and what we are going to do in the near future.

Why Ansible? Well, we were already using Ansible to perform repeatable deployments in our test environments, saving us valuable time compared to "manual" setups. This way we can test new code or deploy complete test environments faster.

So when we decided to deploy Opennebula, we started writing Ansible playbooks right from the start, because we wanted to test several setups until we had a configuration that performed well enough and was set up the way we wanted. This allowed us to rebuild the complete setup from scratch (using Cobbler for the physical deployments) and have a fresh environment 30 minutes later: a fully configured Opennebula management node, the KVM hypervisors and everything we needed to further configure our Gluster storage backend.

One of the advantages of Ansible is that it is not just a configuration management tool but can do orchestration too. Opennebula, for example, uses SSH to communicate with all the hypervisor nodes. So during the deployment of a hypervisor node we use delegate_to to fetch the previously generated SSH keys and deploy them on the hypervisor. Pretty convenient.

We currently have fairly complete playbooks that use a combination of three roles. They still need some testing, and when we feel they can be used by other people too, we'll put them on Ansible Galaxy.

  • one_core : configures the base for both KVM nodes and the sunstone service
  • one_sunstone : configures the Sunstone UI service
  • one_kvmnode : configures the hypervisor

Until now we haven't used Ansible to keep our config in sync or to do updates, but it's something we have in the pipeline, and it should be quite trivial using the current playbooks.

Another thing we'll start working on is a set of modules to support Opennebula. We already had a look at what Opennebula provides, and they should be quite trivial to build using its API.

We are very pleased with both projects as they aim to keep things simple which is important to us since we are a very small team and have to move forward at a rather fast pace.

The playbooks can be found on GitHub.

Backup Zarafa with Bacula

Mon 05 March 2012

Last week I finished migrating our mail/collaboration platform to Zarafa, and as with all things, this needs to be backed up. We're running the Zarafa Enterprise edition, which comes with a backup tool called zarafa-backup that works like this:

The first time you run the zarafa-backup tool, it creates a data file and an index file referring to the items (folders and mails) inside the data file.

The next time you run zarafa-backup, it detects the existing files, creates an incremental data file and updates the corresponding index file. It keeps doing this until you delete the data files and index file; then it will create a new full backup and the cycle starts all over.

We are using Bacula to do our backups so I needed to work something out.

As stated earlier, zarafa-backup just keeps on creating incrementals, which means that if you keep it running, a restore will involve restoring a lot of incrementals first. This is not something I wanted...

So I made my schedule like this:

  • Create a full backup on Friday evening. That way we have the weekend to run the backup.
  • Until the next Friday we let zarafa-backup create incrementals in the working folder.
  • On the next Friday we move the complete set to another folder (I called it weekly) and back it up. If this is successful we empty the weekly folder again. Then we run zarafa-backup again, which creates a new full backup (since the complete set has been moved and the working directory is empty).

Bacula schedule

Two schedules are created, each with their own storage pool:

  • One that runs on Friday.
  • One that runs on all the other days.

Schedule {
    Name = "zarafa-dly"
    Run = Level=Full Pool=ZDLY-POOL sat-thu at 19:00
}

Schedule {
    Name = "zarafa-wkly"
    Run = Level=Full Pool=ZWKLY-POOL fri at 19:00
}

Bacula Zarafa client

The client config has two jobs defined:

  • One that does the daily backups using the "zarafa-dly" schedule.
  • One that backs up the weekly sets using the "zarafa-wkly" schedule.

Each job runs a script before the backup starts. The second job, which backs up the weekly sets, also runs a script after the backup has finished; this script empties the weekly folder.

Job {
    Name = "MAIL02-DLY"
    Client = mail-02
    Storage = TapeRobot
    Write Bootstrap = "/var/lib/bacula/%c.bsr"
    Messages = Standard
    Schedule = "zarafa-dly"
    Type = Backup
    Pool = ZDLY-POOL
    ClientRunBeforeJob = "/etc/bacula/zbackup.sh"
    Run After Job = "/scripts/bacula2nagios \"%n\" 0 \"%e %l %v\""
    Run After Failed Job = "/scripts/bacula2nagios \"%n\" 1 \"%e %l %v\""
}

Job {
    Name = "MAIL02-WKLY"
    Client = mail-02
    Storage = TapeRobot
    Write Bootstrap = "/var/lib/bacula/%c.bsr"
    Messages = Standard
    Schedule = "zarafa-wkly"
    Type = Backup
    Pool = ZWKLY-POOL
    ClientRunBeforeJob = "/etc/bacula/zbackup.sh"
    Client Run After Job = "/etc/bacula/zbackup-cleanup.sh"
    Run After Job = "/scripts/bacula2nagios \"%n\" 0 \"%e %l %v\""
    Run After Failed Job = "/scripts/bacula2nagios \"%n\" 1 \"%e %l %v\""
}

Backup script


#!/bin/bash
# Paths below are assumptions; adjust them to your environment.
ZBFOLDER=/backup/zarafa/working      # working folder where zarafa-backup keeps its current set
WEEKLYFOLDER=/backup/zarafa/weekly   # folder that the weekly Bacula job backs up

WEEK=`date +%W`

# check if it's Friday and whether the working folder is empty
if [ `date +%w` -eq 5 -a `ls -A $ZBFOLDER | wc -l` -eq 0 ]; then
    echo "Starting Full backup"
    zarafa-backup -a -o $ZBFOLDER
elif [ `date +%w` -eq 5 -a `ls -A $ZBFOLDER | wc -l` -ne 0 ]; then
    echo "Copying working set to weekly and starting new Full backup"
    mkdir -p $WEEKLYFOLDER/week-$WEEK
    cp -a $ZBFOLDER/* $WEEKLYFOLDER/week-$WEEK/   # the copy step was missing from the original snippet
    rm -f $ZBFOLDER/*
    zarafa-backup -a -o $ZBFOLDER
else
    echo "Starting Incremental backup"
    zarafa-backup -a -o $ZBFOLDER
fi


Cleanup script
# cleanup the weekly folder after bacula has run
WEEKLYFOLDER=/backup/zarafa/weekly   # assumed path, must match the backup script
rm -rf $WEEKLYFOLDER/*


Detect MTU size when using Jumbo Frames

Wed 22 June 2011

Recently I set up an iSCSI target based on RHEL6 + tgt. After adding logical volumes to a target in the tgtd config file, the iSCSI target was discoverable and ready for use.

After testing this setup for a few days I wanted to tune the network traffic by enabling Jumbo Frames. If you search the interwebz you'll most likely find information about adding "MTU=9000" (for RHEL-based clones) to the config file of the network interface.

The problem with Jumbo Frames is that when you set the MTU higher than what the rest of the path supports, you get fragmentation. Blindly changing your MTU to 9000 will probably lead to exactly that. If you don't know this, it can be quite hard to troubleshoot, because you can still ssh to and ping the target, but the iSCSI sessions will keep failing.

You can easily check this with good old ping. Running this:

ping -M do -s 9000 <target_ip>

  • -M : MTU discovery strategy. "do" means "prohibit fragmentation"
  • -s : here you can specify the packet size

Gave me the following result:

From icmp_seq=1 Frag needed and DF set (mtu = 9000)

Lower the packet size until you get a normal ping reply. That payload size plus 28 bytes of IP and ICMP headers is the MTU you can safely use in your network card's config file.

ping -M do -s 8900 <target_ip>
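
For example, with a full end-to-end MTU of 9000 the largest payload that should pass unfragmented is 8972 (9000 minus the 28 header bytes):

ping -M do -s 8972 <target_ip>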

RHEV setup

Mon 20 June 2011

This blog post comes a little late because I did this RHEV setup at our company more than six months ago and it has been living in the drafts folder for some time now. With RHEV 3.0 Beta released, I thought it was time to publish it.

About a year and a half ago we started looking at alternatives for our VMWare ESXi setup, because we wanted to add hypervisor nodes to our two existing nodes running VMWare ESXi. We also wanted the ability to live migrate VMs between the nodes. At the same time Red Hat released RHEV 2.1, and being a Red Hat partner we decided to evaluate it.

We extended our existing setup with 2 Supermicro servers and a Supermicro SATA disk based SAN box configured as an iSCSI target providing around 8TB of usable storage.


To migrate our existing VMs running on VMWare we used the virt-v2v tool, which converts and moves VMWare machines to RHEV. This procedure can be scripted, so you can define a set of VMs you want to migrate in one go. Unfortunately these VMs need to be powered down. I noticed that if your vmdk folders and files are scattered around on your storage, including inconsistent folder names, virt-v2v in some cases bails out. In our case I could understand why the tool refused to migrate some machines (it was quite a mess).
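
As a rough idea of what such a scripted conversion looks like (the hostnames and export path are made up, and the exact options depend on your virt-v2v version, so check its man page):

virt-v2v -ic esx://esxhost.example.com/?no_verify=1 -o rhev -os nfs.example.com:/rhev-export myvm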


You have two options to install the hypervisor nodes:

  • RHEV-H: a stripped-down RHEL with a 100MB footprint that provides just enough to function as a hypervisor node.
  • RHEL: a default RHEL install you can configure yourself.

We created a custom profile on our Kickstart server so we could easily deploy hypervisor nodes based on a standard RHEL. By using a standard RHEL you can install additional packages later on, which is not possible with a RHEV-H based install.

Once installed, you can add the node to your cluster from within the manager interface. Once added, it automatically installs the necessary packages and becomes active in the cluster.


After adding hypervisor nodes you need to create "Storage Domains" based on either NFS, FC or iSCSI. Besides Storage Domains you also need to define an ISO Domain to store your installation images. If you want to migrate VMs from VMWare or other RHEV clusters, you need to create an Export Domain.

In each cluster one hypervisor node automatically gets the SPM (Storage Pool Manager) role. This host keeps track of where storage is assigned. As soon as this host is put in maintenance or becomes unavailable, another host in the cluster takes over the SPM role.

VMs can use preallocated disks (RAW) or thin provisioning (QCOW). For best performance, preallocated is recommended.


We have been running this setup for more than a year now and haven't had any real issues with it. We actually filed two support cases, which have been resolved in newer releases of RHEV. At the moment we run around 100 VMs, and although I haven't run any benchmarks yet, I see no real difference compared to our VMWare setup using FC storage. Although the product still has some drawbacks, I believe it has a solid base to build on and already has some nice features like live migration, load balancing, thin provisioning, etc.


  • RHEV-M (manager) runs on Windows
  • RHEV-M can only be accessed via IE (will probably change in 3.1)
  • Storage part is quite confusing at first.
  • API only accessible via PowerShell
  • no live snapshots

In a few weeks I'll probably start testing RHEV 3.0, which now runs on Linux on JBoss. That makes me wonder whether JBoss clustering could be used to run RHEV-M in an HA setup.

Switched to Jekyll

Sat 11 June 2011

It has been a while since I last blogged about a "decent" topic, and actually it's been a while since I blogged about anything. The reason is a lack of time and also some laziness. But that should change now, and the first step I took was migrating my blog from Drupal to a Jekyll-generated website. Not that Drupal is bad or anything, but it's quite overkill and somehow it didn't feel very productive for creating content.

So how did I end up with Jekyll?

Because I like using plain text files for writing (I use LaTeX quite a lot), I started looking for a blogging tool that stores its content in plain text files instead of a database. PyBlosxom and Blosxom came to mind, but then Jekyll popped up in one of my search results and I immediately liked it because it generates static content you can upload to any webserver. No more PHP, Python, Perl, MySQL or updates needed. However, you do need Ruby on the machine that does the generation. One "drawback" of a static website is commenting; for a moment I was planning to drop comments from my blog, but I went with Disqus, which I actually quite like.

Now I have my blog stored in a git repository that rsyncs the static content to my webserver when I push my changes. As simple as that.

I really like the thought of using Markdown and vim to write my blog posts from now on (and of course the geeky factor of all this). The only things left are improving the layout and sanitizing the setup a bit more.

I'll be at LOAD (Linux Open Administrator Days)

Wed 13 April 2011


Getting DropBox to work with SELinux

Sun 21 November 2010

Recently Serge mentioned DropBox to me, and I remembered creating an account once, but I hadn't used or installed it in the last two years.

These days you also get a lot more free space with your DropBox, so I decided to start using it again.

So I started installing DropBox using the rpm from their website, but got an SELinux warning. setroubleshootd explains perfectly what's going on, and the solution is trivial:

[root@localhost ~]# semanage fcontext -a -t execmem_exec_t '/home/vincent/.dropbox-dist/dropbox'
[root@localhost ~]# restorecon -vvF '/home/vincent/.dropbox-dist/dropbox'
restorecon reset /home/vincent/.dropbox-dist/dropbox context unconfined_u:object_r:user_home_t:s0->system_u:object_r:execmem_exec_t:s0


Fri 29 October 2010

So today I went to sit the RHCE exam for the second time. This time the results were better than before.

RHCT components score: 100.0
RHCE components score: 100.0

RHCE certificate number : 805010290454578

The instructor mentioned that this was probably one of the last exams based on RHEL5.

Anyways, I'm glad I made it this time...

Fedora 14 Release party

Thu 21 October 2010

The date for the Belgian Fedora Release Party has been set. A bigger version (as in "print this and hang it up in your office") has been attached.

Fedora 14 Release party poster

