Introduction to Docker – Global Azure Bootcamp 2016

Today, as part of the Global Azure Bootcamp 2016 in Buenos Aires, Argentina, I gave an introduction to Docker and the possibilities of using it from within Microsoft Azure.

Global Azure Bootcamp 2016 logo

The Global Azure Bootcamp is an incredible community event that has taken place on the same day in multiple locations around the world since 2013. Last year I took on the challenge of organizing the local event and it was an incredible experience. This year, Matías Quaranta and Guillermo Bellmann organized the local event and it was even better. You can find more information about the local event here.

If you want to know more about the WordPress/Nginx scenario I showed, you can review my blog posts:

  1. Creating a docker Virtual Machine on Azure manually
  2. WordPress Docker container with Compose
  3. Nginx for serving multiple sites in docker
  4. Using Let’s Encrypt with nginx on docker
  5. Automating the Let’s Encrypt certificate renewal

Regarding the demos, the docker command to run dotnet inside a container while writing its output to the host machine was the following. This command is really useful if you want to test the latest version of dotnet taken from the build server.

docker run -it --rm -v "$PWD":/tmp/app/ -w /tmp/app/ microsoft/dotnet-preview dotnet new

I also showed Kitematic as an easy-to-use interface for Docker.

Docker demo showing kitematic

Finally, you can find the slides I used for my presentation.

Using letsencrypt with nginx on docker

Now that I have my site running in a docker container using nginx (more info here), I want to add a secure endpoint and support https. In order to do this, the first thing I need is an SSL certificate, but those are usually too expensive for a personal site. That's where you can take advantage of Let's Encrypt.

letsencrypt nginx in docker

Let’s Encrypt is a new Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates. It simplifies the process by providing a software client, letsencrypt, that attempts to automate most (if not all) of the required steps.

But why do you need the certificate? When you request an HTTPS connection to a webpage, the website initially sends its SSL certificate to your browser. This certificate contains the public key needed to begin the secure session. Based on this initial exchange, your browser and the website then initiate the SSL handshake, which involves the generation of shared secrets to establish a uniquely secure connection between you and the website.

The first thing you will need to do is configure access to the VM, which means setting up your DNS for each of the domains you plan to create the certificate for. In my case, I created two entries, one for nbellocam.me and a second one for www.nbellocam.me. In addition to setting up your DNS, you will need to make sure that ports 80 and 443 are available and accessible. These requirements are due to the fact that the validation process will resolve your domain and access those ports to validate that you are who you say you are.
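As a quick pre-flight check, you can verify both requirements from any machine. A minimal sketch, assuming dig and nc (netcat) are installed; the domains are the ones from this post:

```shell
# Each domain must resolve and answer on ports 80 and 443,
# since that is exactly what the Let's Encrypt validation does.
check_domain() {
  dig +short "$1"    # should print the VM's public IP
  nc -z -w 5 "$1" 80 && echo "$1:80 reachable" || echo "$1:80 NOT reachable"
  nc -z -w 5 "$1" 443 && echo "$1:443 reachable" || echo "$1:443 NOT reachable"
}

check_domain nbellocam.me
check_domain www.nbellocam.me
```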

Then, you will need to download the letsencrypt client. To do this, you need to have git and bc installed, and then execute the following command.

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Once you have the client in your VM and you have access to it from your domain, the easiest way to obtain the certificate is to execute the following command, replacing the domains and your email.

sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --email [email protected] -d nbellocam.me -d www.nbellocam.me

This will create the certificates in your /etc/letsencrypt folder. Note that you have two folders there, archive and live. The first one contains the full certificate history, while the second one contains symlinks to the latest versions.
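A quick listing shows the layout; this is a sketch to run as root on the VM once the certificate was issued:

```shell
# live/ holds symlinks into archive/, so nginx can always point at the
# same path while renewals rotate the real files underneath.
LIVE=/etc/letsencrypt/live/nbellocam.me
if [ -d "$LIVE" ]; then
  ls -l "$LIVE"
  readlink "$LIVE/fullchain.pem"   # e.g. ../../archive/nbellocam.me/fullchain1.pem
else
  echo "no certificate issued yet (or not running on the VM)"
fi
```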

Now that you have the certificates, it's time to configure your docker-compose.yml file to enable port 443 as well as to share the folder containing the certificates.

web:
  restart: always
  build: ./conf/
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /local/path/to/www:/usr/share/nginx/html
    - /etc/letsencrypt:/etc/letsencrypt
  external_links:
    - wordpress_web_1:bloghost

Before restarting the docker container using the new configuration, let's update the nginx configuration file to add support for https. To do this, update the server node for your site, adding the following.

server {
        listen 443 ssl;

        server_name	nbellocam.me www.nbellocam.me;

        ssl_certificate /etc/letsencrypt/live/nbellocam.me/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nbellocam.me/privkey.pem;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

        root   /usr/share/nginx/html;
        index  index.html index.htm;

        error_page  404              /404.html;
    }

You can then add a new node that permanently redirects all traffic on port 80 targeting your domains to the secure endpoint.

    server {
        listen 80;
        server_name nbellocam.me www.nbellocam.me;
        return 301 https://$host$request_uri;
    }

Finally, build the new image using docker-compose build and restart the container with docker-compose restart.
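Once the container is back up, a quick smoke test with curl shows both behaviors — a sketch:

```shell
# The plain HTTP endpoint should answer 301 with an https:// Location header.
curl -sI http://nbellocam.me | grep -i -E '^(HTTP|Location)' || true
# The HTTPS endpoint should answer 200; curl validates the certificate
# chain against the system's trusted CAs by default.
curl -sI https://nbellocam.me | grep -i -E '^(HTTP|Location)' || true
```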

You should be able to connect to your https endpoint now. However, note that these certificates expire 90 days after creation, so you'll need to renew them before then using the same command as before, this time adding the --renew-by-default parameter.

sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --renew-by-default --email [email protected] -d nbellocam.me -d www.nbellocam.me
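Keep in mind that --standalone binds ports 80 and 443 itself, so the nginx container has to be stopped while the renewal runs. A minimal sketch wrapping the steps in a function (the compose folder /home/user/dev is an assumption):

```shell
# Stop nginx, renew the certificate, then start nginx again so it
# picks up the fresh files behind the live/ symlinks.
renew_certs() {
  cd /home/user/dev || return 1   # folder containing docker-compose.yml (assumed)
  docker-compose stop web
  sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --renew-by-default \
    --email [email protected] -d nbellocam.me -d www.nbellocam.me
  docker-compose start web
}
```

Running renew_certs on the VM before the 90 days are up keeps the certificate valid; automating it is the topic of the renewal post listed at the top.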

Once everything is up and running, you can verify how secure your site is using this SSL Server Test.

WordPress Docker container with Compose

As I described in my previous post, this site is running in a docker container on an Azure VM. In this post, I will explain how to configure a WordPress Docker container using Docker Compose.

WordPress Docker container

I wanted to deploy WordPress on my own host instead of using it as a service. In order to do this, you need a MySQL database in addition to the web host, unless you use Project Nami, which enables you to use SQL Azure instead of MySQL. One option could be to use, for example, Azure App Service and a MySQL instance from ClearDB (you can find more information about this approach here).

I preferred another approach: taking advantage of Docker. This way I can have WordPress and MySQL running in separate containers on the same VM, and I can add even more containers in the future at no extra cost (which is my idea).

The first step is to connect to the VM where Docker is running using ssh (if you have any questions about this, see my previous post). I wanted to use docker-compose to set up the environment, which is an orchestration tool that makes spinning up multi-container applications effortless. In a new folder (e.g. /home/user/dev/), create a new file named docker-compose.yml.

In the docker-compose.yml file, add the MySQL container based on the official mysql image under a configuration which I named db (you can choose any name instead of db). Additionally, set the root password for MySQL by specifying the MYSQL_ROOT_PASSWORD environment variable. The following snippet shows how the docker-compose file should look at this point.

db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: YourWordPressPasswordHere

Now, add the WordPress container based on the official wordpress image, adding the link to the MySQL container by specifying the name of its configuration under links, as shown in the following code snippet.

web:
  image: wordpress
  links:
    - db:mysql

Now, add the expose configuration under the WordPress container configuration (i.e. the web entry) to make port 80 available to linked containers, such as an nginx proxy. Note that if you wanted to reach the container directly from the host instead, you would use the ports configuration, mapping the port with “OutsidePort:80” (remember to use the double quotes).

web:
  image: wordpress
  links:
    - db:mysql
  expose:
    - "80"

This is enough to have the WordPress Docker container up and running, but there is another interesting thing to do before starting the containers. You can expose the wp-content folder to easily update themes or other WordPress content from the VM where docker is running. In order to do this, you will need to configure the working_dir and map the volumes as shown below.

web:
  image: wordpress
  links:
    - db:mysql
  expose:
    - "80"
  working_dir: /var/www/html
  volumes:
    - "/pathInVM/wp-content:/var/www/html/wp-content"

The final version of the docker-compose.yml file should look similar to the following one:

web:
  image: wordpress
  links:
    - db:mysql
  expose:
    - "80"
  working_dir: /var/www/html
  volumes:
    - "/pathInVM/wp-content:/var/www/html/wp-content"

db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: W0rdpressPassw0rd!

Once you have everything ready, save the changes to the file and execute docker-compose up to start both containers, then try accessing the site, which should be running on port 80. And that's it, you have configured a WordPress Docker container.
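The final steps condense to a few commands, run from the folder containing docker-compose.yml — a sketch; the -d flag is an alternative that keeps the containers in the background:

```shell
# Only meaningful on the VM where docker-compose is installed.
if command -v docker-compose >/dev/null; then
  docker-compose up -d      # start both containers in the background
  docker-compose ps         # both containers should show State "Up"
  docker-compose logs web   # inspect the WordPress logs if anything fails
fi
```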