Introduction to Docker – Global Azure Bootcamp 2016

Today, as part of the Global Azure Bootcamp 2016 in Buenos Aires, Argentina, I gave an introduction to Docker and the possibilities for using it from within Microsoft Azure.

Global Azure Bootcamp 2016 logo

The Global Azure Bootcamp is an incredible community event that has taken place on the same day in multiple locations around the world since 2013. Last year I took on the challenge of organizing the local event, and it was an incredible experience. This year, Matías Quaranta and Guillermo Bellmann organized the local event, and it was even better. You can find more information about the local event here.

If you want to know more about the WordPress/Nginx scenario I showed, you can review my blog posts:

  1. Creating a docker Virtual Machine on Azure manually
  2. WordPress Docker container with Compose
  3. Nginx for serving multiple sites in docker
  4. Using Let’s Encrypt with nginx on docker
  5. Automating the Let’s Encrypt certificate renewal

Regarding the demos, the following is the docker command to run dotnet inside a container while modifying files on the host machine. This command is really useful if you want to test the latest version of dotnet taken from the build server.

docker run -it --rm -v "$PWD":/tmp/app/ -w /tmp/app/ microsoft/dotnet-preview dotnet new
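Because the current directory is mounted as the working directory inside the container, whatever dotnet new generates lands directly on the host. A quick way to see this (the dotnet-demo folder name is just an example):

mkdir dotnet-demo && cd dotnet-demo
docker run -it --rm -v "$PWD":/tmp/app/ -w /tmp/app/ microsoft/dotnet-preview dotnet new
ls   # the generated project files now live on the host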

I also showed Kitematic as an easy-to-use interface for Docker.

Docker demo showing kitematic

Finally, you can find the slides I used for my presentation.

Automating the Let's Encrypt certificate renewal

In my previous post I explained how to set up nginx running in a Docker container with an SSL certificate created by Let's Encrypt. The problem with that approach is that you need to renew the certificate manually every 90 days, and it's nearly certain that you will forget to do it at least once. In this post, I will show you how to automate the Let's Encrypt certificate renewal process in a few easy steps.

For the renewal of the certificate, we'll use another of the strategies/plugins that the letsencrypt client supports: the webroot plugin. This plugin obtains the certificate by writing a special file in the /.well-known directory within the document root of an already running web server. The special file will then be fetched (through your web server) by the Let's Encrypt service for validation. Depending on your configuration, you may need to explicitly allow access to the /.well-known directory. To ensure that the directory is accessible to Let's Encrypt for validation, update the nginx configuration by adding the following inside the server node.

location ~ /.well-known {
      allow all;
}
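Before running the client, you can sanity-check that the location is actually reachable by dropping a test file into the webroot (the local folder mapped to /usr/share/nginx/html; the paths and domain below are the ones used in this example) and fetching it through nginx:

mkdir -p /local/path/to/www/.well-known
echo ok > /local/path/to/www/.well-known/test.txt
curl http://nbellocam.me/.well-known/test.txt   # should print "ok"
rm /local/path/to/www/.well-known/test.txt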

The letsencrypt client supports using a configuration file instead of parameters. This is ideal for our scenario, where we plan to automate the certificate renewal process, since it makes it easier to update any of the parameters without touching the scripts. First you need to create the configuration file; to do this, copy the one from the examples in the client folder.

sudo cp /opt/letsencrypt/examples/cli.ini /usr/local/etc/le-renew-webroot.ini

Edit the new file with your information. Note that in addition to the email and domains parameters, you need to specify the webroot-path parameter. This parameter specifies the path to the folder that nginx (or your web server) is serving. In this case, that path is the local path that is mapped to the /usr/share/nginx/html folder in Docker.

rsa-key-size = 4096
email = [email protected]
domains = nbellocam.me, www.nbellocam.me
webroot-path = /local/path/to/www

Now that you have everything in place, test the configuration by executing the following command. Remember that nginx needs to be running and configured as described in my previous post.

/opt/letsencrypt/letsencrypt-auto certonly -a webroot --agree-tos --renew-by-default --config /usr/local/etc/le-renew-webroot.ini

This command is equivalent to the one below; the only difference is that one uses the configuration file while the other passes everything as command-line parameters.

/opt/letsencrypt/letsencrypt-auto certonly -a webroot --agree-tos --renew-by-default --webroot-path=/local/path/to/www --email [email protected] -d nbellocam.me -d www.nbellocam.me
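Either way, if the command succeeds you can confirm the validity dates of the freshly issued certificate with openssl (using the nbellocam.me domain from the example):

sudo openssl x509 -in /etc/letsencrypt/live/nbellocam.me/fullchain.pem -noout -dates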

Now that everything is in place, you can focus on automating the Let's Encrypt certificate renewal process. To do this, create a new file, e.g. /usr/local/sbin/le-renew-webroot, and give it execution permissions.

sudo touch /usr/local/sbin/le-renew-webroot
sudo chmod +x /usr/local/sbin/le-renew-webroot

Now, edit the new file and add the following snippet, replacing the value of the compose_file_path and config_file variables.

#!/bin/bash

compose_file_path='/path/to/your/nginx/docker-compose.yml'
config_file="/usr/local/etc/le-renew-webroot.ini"

le_path='/opt/letsencrypt'
exp_limit=30

if [ ! -f "$config_file" ]; then
    echo "[ERROR] config file does not exist: $config_file"
    exit 1
fi

# Extract the first domain listed in the config file.
domain=$(grep "^\s*domains" "$config_file" | sed "s/^\s*domains\s*=\s*//" | sed 's/(\s*)\|,.*$//')
cert_file="/etc/letsencrypt/live/$domain/fullchain.pem"

if [ ! -f "$cert_file" ]; then
    echo "[ERROR] certificate file not found for domain $domain."
    exit 1
fi

# Compute how many days are left before the certificate expires.
exp=$(date -d "$(openssl x509 -in "$cert_file" -text -noout | grep "Not After" | cut -c 25-)" +%s)
datenow=$(date -d "now" +%s)
days_exp=$(echo "($exp - $datenow) / 86400" | bc)

echo "Checking expiration date for $domain..."

if [ "$days_exp" -gt "$exp_limit" ]; then
    echo "The certificate is up to date, no need for renewal ($days_exp days left)."
    exit 0
else
    echo "The certificate for $domain is about to expire soon. Starting webroot renewal script..."
    "$le_path/letsencrypt-auto" certonly -a webroot --agree-tos --renew-by-default --config "$config_file"
    echo "Reloading the nginx container..."
    docker-compose -f "$compose_file_path" restart
    echo "Renewal process finished for domain $domain"
    exit 0
fi

Test the script by executing it with sudo /usr/local/sbin/le-renew-webroot. You should see output saying that the certificate is up to date. The idea is to renew it only when it's necessary.

Finally, add a new cron task to execute the le-renew-webroot script. To do this, execute sudo crontab -e and add the following line, which will execute the le-renew-webroot command every Monday at 2:30 am. The output produced by the command will be appended to a log file located at /var/log/le-renewal.log.

30 2 * * 1 /usr/local/sbin/le-renew-webroot >> /var/log/le-renewal.log
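You can confirm that the task is registered, and later inspect the log after the first run:

sudo crontab -l                 # the new entry should be listed
tail /var/log/le-renewal.log    # output from the last execution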

Now that you have configured and automated the full Let's Encrypt certificate renewal process, you can relax and enjoy the benefits of having a free and automatically renewed Let's Encrypt TLS/SSL certificate to securely serve HTTPS content.

Using Let's Encrypt with nginx on docker

Now that I have my site running in a Docker container using nginx (more info here), I want to add a secure endpoint and support HTTPS. To do this, the first thing I need is an SSL certificate, but those are usually too expensive for a personal site. That's where you can take advantage of Let's Encrypt.

Let's Encrypt is a new Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates. It simplifies the process by providing a software client, letsencrypt, that attempts to automate most (if not all) of the required steps.

But what do you need the certificate for? When you request an HTTPS connection to a webpage, the website will initially send its SSL certificate to your browser. This certificate contains the public key needed to begin the secure session. Based on this initial exchange, your browser and the website then initiate the SSL handshake, which involves the generation of shared secrets to establish a uniquely secure connection between you and the website.
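Once the secure endpoint is running (as we'll set up below), you can watch this exchange yourself: openssl can print the certificate that the server presents during the handshake.

openssl s_client -connect nbellocam.me:443 -servername nbellocam.me < /dev/null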

The first thing you need is to configure access to the VM, which means setting up your DNS for each of the domains you plan to create the certificate for. In my case, I created two entries, one for nbellocam.me and a second one for www.nbellocam.me. In addition to setting up your DNS, you need to make sure that ports 80 and 443 are available and accessible. These requirements exist because the validation process will resolve your domain and access those ports to validate that you are who you say you are.
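A quick way to double-check both requirements from the VM, assuming dig and netstat are available:

dig +short nbellocam.me www.nbellocam.me      # should resolve to the VM's public IP
sudo netstat -tlnp | grep -E ':80 |:443 '     # something should be listening on both ports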

Then, you need to download the letsencrypt client. To do this, you need to have git and bc installed; then execute the following command.

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Once you have the client in your VM and the VM is accessible through your domain, the easiest way to obtain the certificate is to execute the following command, replacing the domains and the email with your own.

sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --email [email protected] -d nbellocam.me -d www.nbellocam.me

This will create the certificates in your /etc/letsencrypt folder. Note that you have two folders there, archive and live. The first one contains the full certificate history, while the second one contains symlinks to the latest versions.
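You can see those symlinks for yourself (again using the nbellocam.me domain from the example):

sudo ls -l /etc/letsencrypt/live/nbellocam.me/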

Now that you have the certificates, it's time to configure your docker-compose.yml file, enabling port 443 as well as sharing the folder that contains the certificates.

web:
  restart: always
  build: ./conf/
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /local/path/to/www:/usr/share/nginx/html
    - /etc/letsencrypt:/etc/letsencrypt
  external_links:
    - wordpress_web_1:bloghost

Before restarting the Docker container with the new configuration, let's update the nginx configuration file to add support for HTTPS. To do this, update the server node for your site, adding the following.

server {
        listen 443 ssl;

        server_name	nbellocam.me www.nbellocam.me;

        ssl_certificate /etc/letsencrypt/live/nbellocam.me/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nbellocam.me/privkey.pem;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

        root   /usr/share/nginx/html;
        index  index.html index.htm;

        error_page  404              /404.html;
    }

You can then add a new node that permanently redirects all traffic on port 80 targeting your domains to the secure endpoint.

    server {
        listen 80;
        server_name nbellocam.me www.nbellocam.me;
        return 301 https://$host$request_uri;
    }

Finally, build the new image using docker-compose build and restart the container with docker-compose restart.
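If you want to double-check the configuration before exposing it, you can run nginx's syntax check inside the running container; the container name below is a placeholder, use the one that docker ps reports for your nginx service.

docker ps                                # find the name of the nginx container
docker exec <nginx_container> nginx -t   # validate the nginx configuration syntax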

You should now be able to connect to your HTTPS endpoint. However, note that these certificates expire 90 days after creation, so you'll need to renew the certificate before it expires, using the same command as before but this time adding the --renew-by-default parameter.

sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --renew-by-default --email [email protected] -d nbellocam.me -d www.nbellocam.me

Once everything is up and running, you can verify how secure your site is using this SSL Server Test.

Nginx for serving multiple sites in docker

As part of the ideas related to my Docker VM, I planned to have multiple sites deployed on it. For example, one of the sites is this blog and another one is my personal site. To be able to serve two different sites, both on port 80 in the same VM, you need something in front of them that distributes the traffic accordingly. That's where nginx enters the game.

Nginx serving static files and routing to WordPress

Nginx ("engine x") is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP proxy server. It's really easy to configure and use with Docker.

First, create the docker-compose.yml file in order to create and configure the Docker container easily. In that file, add the following to have the server running and serving files from the path specified under volumes.

web:
  restart: always
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
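With just this in place you can already bring the container up and check that nginx responds on port 80:

docker-compose up -d
curl -I http://localhost   # expect an HTTP 200 response from nginx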

Although that would be enough for most cases, the idea here is to use a custom configuration that sets up a reverse proxy in front of the WordPress site we already have. To do that, you need to link both containers. As this container and the WordPress one are defined in different compose files, you need to use the external_links configuration instead of links.
In the terminal, use docker ps to find out the identifier of your WordPress container (e.g. wordpress_web_1). Then add the following code snippet to your docker-compose.yml file.

  external_links:
    - wordpress_web_1:bloghost

Additionally, configure the container to use a custom image instead of the default one. In the custom image, you will update the configuration by replacing it with a custom file. To configure the container to use the custom image, replace the image setting with a build setting whose value is the path to the folder containing the custom image's Dockerfile (e.g. ./conf/).

You will end up with the docker-compose.yml file looking similar to the following one.

web:
  restart: always
  build: ./conf/
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
  external_links:
    - wordpress_web_1:bloghost

Now, create the new Dockerfile inside the path you specified in the docker-compose.yml file. This file is really simple: it just specifies the image that this new image is based on and then copies the nginx.conf file into it.

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

Create a new nginx.conf file in the same folder where the Dockerfile is located. In that file you should have the configuration needed to route the requests to your different sites. In the following example, you can see that I'm using the domain name to route the requests. You can find a full version of this configuration here.

The following snippet shows how to set up nginx to serve the static files. Note that it's using the server_name configuration to route only the traffic coming from the specified domains (e.g. nbellocam.me and www.nbellocam.me). Additionally, the files that will be served are located in /usr/share/nginx/html, which is the path that we mapped to the VM in the docker-compose.yml file.

#...

http {

    #...

    server {
        listen       80;
        server_name	nbellocam.me www.nbellocam.me;

        root   /usr/share/nginx/html;
        index  index.html index.htm;

        error_page  404              /404.html;
    }

    #...
}

The best part comes now, when we route the traffic to the WordPress site. To do this, first configure an upstream named wordpress that is mapped to the bloghost alias from the external_links defined earlier in the docker-compose.yml file. By doing this, you don't need to know the internal IP that the Docker containers have. Then, configure a new server node that uses a new server_name configuration targeting the new domain, in this case blog.nbellocam.me. In its root location node you will have the proxy_pass configuration, using as value the scheme and the upstream name specified before. Additionally, there are several properties that let you perform a silent redirect, making it possible to use the specified domain even when the real URL is a private IP inside the Docker VM.

#...

http {

    #...

    upstream wordpress {
      server bloghost:80;
    }

    #...

    server {
        listen 80;

        server_name blog.nbellocam.me;

        location / {
            proxy_pass http://wordpress/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
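After rebuilding and restarting the container, you can verify the routing from inside the VM by faking the Host header, without relying on DNS:

curl -H "Host: nbellocam.me" http://localhost/        # served from the static files
curl -H "Host: blog.nbellocam.me" http://localhost/   # proxied to the WordPress container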