Using letsencrypt with nginx on docker

Now that I have my site running in a Docker container using nginx (more info here), I want to add a secure endpoint and support HTTPS. To do this, the first thing I need is an SSL certificate, but those are usually too expensive for a personal site. That's where you can take advantage of letsencrypt.

Let's Encrypt is a new Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates. It simplifies the process by providing a software client, letsencrypt, that attempts to automate most (if not all) of the required steps.

But what do you need the certificate for? When you request an HTTPS connection to a webpage, the website initially sends its SSL certificate to your browser. This certificate contains the public key needed to begin the secure session. Based on this initial exchange, your browser and the website then initiate the SSL handshake, which involves the generation of shared secrets to establish a uniquely secure connection between you and the website.

The first thing you need is to configure access to the VM, which means setting up your DNS for each of the domains you plan to create the certificate for. In my case, I created two entries, one for nbellocam.me and the second one for www.nbellocam.me. In addition to setting up your DNS, you need to make sure that ports 80 and 443 are available and accessible. These requirements exist because the validation process will resolve your domain and access those ports to verify that you are who you say you are.

Then, you need to download the letsencrypt client. To do this, you need to have git and bc installed, and then execute the following command.

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Once you have the client on your VM and the VM is reachable through your domain, the easiest way to obtain the certificate is to execute the following command, replacing the domains and the email with your own.

sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --email [email protected] -d nbellocam.me -d www.nbellocam.me
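Note that the standalone plugin spins up its own temporary web server on ports 80 and 443, so if your nginx container is already bound to those ports, stop it first. Assuming the compose file is in the current folder, something like this should do it.

docker-compose stop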

This will create the certificate in your /etc/letsencrypt folder. Note that you have two folders there, archive and live. The first one contains the full certificate history, while the second one contains symlinks to the latest versions.
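For example, after a successful run, listing the live folder for the domain should show something similar to the following (these file names are the client's standard layout).

ls /etc/letsencrypt/live/nbellocam.me/
# cert.pem  chain.pem  fullchain.pem  privkey.pem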

Now that you have the certificates, it's time to configure your docker-compose.yml file to enable port 443 as well as share the folder containing the certificates.

web:
  restart: always
  build: ./conf/
  ports:
    - "80:80"
    - "443:443"   # expose HTTPS in addition to HTTP
  volumes:
    - /local/path/to/www:/usr/share/nginx/html
    - /etc/letsencrypt:/etc/letsencrypt   # share the certificates with the container
  external_links:
    - wordpress_web_1:bloghost

Before restarting the Docker container with the new configuration, let's update the nginx configuration file to add support for HTTPS. To do this, update the server node for your site by adding the following.

server {
        listen 443 ssl;

        server_name	nbellocam.me www.nbellocam.me;

        ssl_certificate /etc/letsencrypt/live/nbellocam.me/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nbellocam.me/privkey.pem;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

        root   /usr/share/nginx/html;
        index  index.html index.htm;

        error_page  404              /404.html;
    }

You can then add a new node that permanently redirects all the traffic on port 80 targeting your domains to the secure endpoint.

    server {
        listen 80;
        server_name nbellocam.me www.nbellocam.me;
        return 301 https://$host$request_uri;
    }

Finally, build the new image using docker-compose build and restart the container with docker-compose restart.
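For reference, here is the whole sequence from the folder containing the docker-compose.yml file, plus a quick sanity check of the port 80 redirect (the exact response headers may vary slightly):

docker-compose build
docker-compose restart

# the plain HTTP endpoint should now answer with a permanent redirect
curl -I http://nbellocam.me/
# HTTP/1.1 301 Moved Permanently
# Location: https://nbellocam.me/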

You should now be able to connect to your HTTPS endpoint. However, note that these certificates expire 90 days after creation, so you'll need to renew before that happens, using the same command as before but this time adding the --renew-by-default parameter.

sudo /opt/letsencrypt/letsencrypt-auto certonly --standalone --renew-by-default --email [email protected] -d nbellocam.me -d www.nbellocam.me
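Since this has to happen every 90 days, you may want to automate the renewal. Here is a minimal sketch using cron (added with sudo crontab -e), assuming your nginx compose file lives in a hypothetical /path/to/nginx folder; remember that the standalone authenticator needs ports 80 and 443 free, hence the stop/start around it.

# hypothetical schedule: 3 AM on the first day of every month
0 3 1 * * cd /path/to/nginx && docker-compose stop && /opt/letsencrypt/letsencrypt-auto certonly --standalone --renew-by-default --email [email protected] -d nbellocam.me -d www.nbellocam.me && docker-compose start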

Once everything is up and running, you can verify how secure your site is using this SSL Server Test.
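If you prefer the command line, you can also take a quick look at the certificate nginx is actually serving with openssl.

# print the subject and validity dates of the served certificate
openssl s_client -connect nbellocam.me:443 -servername nbellocam.me </dev/null 2>/dev/null | openssl x509 -noout -subject -dates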

Nginx for serving multiple sites in docker

As part of the ideas for my Docker VM, I planned to have multiple sites deployed on it. For example, one of the sites is this blog and another one is my personal site. To serve two different sites, both on port 80 in the same VM, you need something in front of them that distributes the traffic accordingly. That's where nginx enters the game.

Nginx ("engine x") is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP proxy server. It's really easy to configure and use with Docker.

First, you need to create the docker-compose.yml file in order to create and configure the Docker container easily. In that file, add the following to have the server running and serving files from the path specified under volumes.

web:
  restart: always
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
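At this point you can already try it out. From the folder containing the docker-compose.yml file, start the container in detached mode, and nginx will serve whatever you placed in the mapped folder.

docker-compose up -d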

Although that would be enough for most cases, the idea here is to use a custom configuration that enables a reverse proxy to the WordPress site we already have. To do that, you need to link both containers. As this container and the WordPress one are defined in different compose files, you need to use the external_links configuration instead of links. In the terminal, use docker ps to find out the identifier of your WordPress container (e.g. wordpress_web_1), and then add the following code snippet to your docker-compose.yml file.

  external_links:
    - wordpress_web_1:bloghost

Additionally, configure the container to use a custom image instead of the default one. In the custom image, you will replace the default configuration with a custom file. To configure the container to use the custom image, replace the image configuration with build, using as its value the path to the folder containing the Dockerfile of the custom image (e.g. ./conf/).

You will end up with the docker-compose.yml file looking similar to the following one.

web:
  restart: always
  build: ./conf/
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
  external_links:
    - wordpress_web_1:bloghost

Now, create the new Dockerfile inside the path you specified in the docker-compose.yml file. This file is really simple: just specify the image that this new image is based on, and then copy the nginx.conf file into it.

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

Create a new nginx.conf file in the same folder where the Dockerfile is located. In that file you should have the configuration needed to route the requests to your different sites. In the following example, you can see that I'm using the domain name to route the requests. You can find a full version of this configuration here.
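As you iterate on this file, it can be handy to sanity-check the syntax without rebuilding the image. One quick trick is to mount the file into a throwaway nginx container; the --add-host flag fakes the bloghost link defined further below, so the upstream resolves during the test.

docker run --rm --add-host bloghost:127.0.0.1 -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro nginx nginx -t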

The following snippet shows how to set up nginx to serve the static files. Note that it's using the server_name configuration to route only the traffic from the specified domains (e.g. nbellocam.me and www.nbellocam.me). Additionally, the files that will be served are located in /usr/share/nginx/html, which is the path we mapped to the VM in the docker-compose.yml file.

#...

http {

    #...

    server {
        listen       80;
        server_name	nbellocam.me www.nbellocam.me;

        root   /usr/share/nginx/html;
        index  index.html index.htm;

        error_page  404              /404.html;
    }

    #...
}

The best part comes now, when we route the traffic to the WordPress site. To do this, first configure an upstream named wordpress that is mapped to the bloghost external link defined before in the docker-compose.yml file. By doing this, you don't need to know the internal IP of the Docker containers. Then, configure a new server node that uses a new server_name configuration targeting the new domain, in this case blog.nbellocam.me. Inside the root location node, add the proxy_pass directive, using as its value the scheme plus the upstream name specified before. Additionally, set several headers that let nginx proxy the requests transparently, making it possible to use the specified domain even though the real URL is a private IP inside the Docker VM.

#...

http {

    #...

    upstream wordpress {
      server bloghost:80;
    }

    #...

    server {
        listen 80;

        server_name blog.nbellocam.me;

        location / {
            proxy_pass http://wordpress/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
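After rebuilding and restarting the container, a quick way to check the routing from inside the VM is to fake the Host header with curl (illustrative commands; the output will be your actual pages).

# should return the static site
curl -s -H "Host: nbellocam.me" http://localhost/ | head

# should return the WordPress blog through the proxy
curl -s -H "Host: blog.nbellocam.me" http://localhost/ | head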

WordPress plugins for the cloud

When I created this blog, I wanted to minimize the impact on the VM I was using by storing whatever I could outside of it. Additionally, I was concerned about being able to restore the blog if I ever moved it. The best solution for both concerns was to take advantage of the existing WordPress plugins.

As I couldn't find a list of recommended WordPress plugins that tackle these concerns, I'm sharing the ones that I chose.

Windows Azure Storage for WordPress

This plugin allows you to use the Azure Storage service to store your media files. This is a simple step to scale up your site without setting up content delivery infrastructure. Additionally, it reduces the amount of disk space consumed, as all the media files are stored externally. Moreover, storing the media files externally makes it easier to move the content later.

BackWPup

Another awesome WordPress plugin, which gives you automatic backups of your blog content. One of the best features of this plugin is that you can store the backups in an Azure Storage account as well as with other cloud storage providers.

Disqus Comment System

Disqus is a service and tool for web comments and discussions. One of the greatest things about the Disqus Comment System plugin is that it reduces the amount of storage consumed in your own database. Additionally, it simplifies moderation tasks by avoiding some of the spam bots created for WordPress.

Google Analytics by Yoast

Whenever you are planning to scale your site, it's important to have tools that let you monitor it and see analytics about it. There are plenty of tools for this, but a very simple one is Google Analytics. That's where this plugin enters the game, not only adding support for Google Analytics to your site, but also adding a dashboard with all the information to your administration panel.

Yoast SEO

In addition to knowing your site's metrics, you will probably want to increase the number of visits you get. The Yoast SEO plugin helps you improve your rankings in the different search engines by performing page analysis, adding meta tags for each page, integrating with social networks, generating the sitemap for your site, and a lot more.

AddToAny Share Buttons

Speaking of increasing the number of visits to your site, you definitely need social network share buttons. The AddToAny Share Buttons plugin adds buttons for lots of different networks that you might want on your site.

SyntaxHighlighter Evolved

I'm a software developer, and one of the most important things I post about is code, so using a syntax highlighter plugin is a must for me. I chose the SyntaxHighlighter Evolved plugin, which is really simple and supports lots of different languages.


One of the best things about WordPress is its incredibly big plugin ecosystem. Most plugins are free, adding value to your site with just a few clicks. I hope these WordPress plugins help you create your own site.

WordPress Docker container with Compose

As I described in my previous post, this site is running in a Docker container on an Azure VM. In this post, I will explain how to configure a WordPress Docker container using Docker Compose.


I wanted to deploy WordPress on my own host instead of using it as a service. To do this, you need a MySQL database in addition to the web host, unless you use Project Nami, which lets you use SQL Azure instead of MySQL. One option could be to use, for example, Azure App Service and a MySQL instance from ClearDB (you can find more information about this approach here).

I preferred another approach, taking advantage of Docker. This way, I can have WordPress and MySQL running in separate containers on the same VM, and I can add even more containers in the future at no extra cost (which is my plan).

The first step is to connect via ssh to the VM where Docker is running (if you have any questions about this, see my previous post). I wanted to use docker-compose, an orchestration tool that makes spinning up multi-container applications effortless, to set up the environment. In a new folder (e.g. /home/user/dev/), create a new file named docker-compose.yml.

In the docker-compose.yml file, add the MySQL container based on the official mysql image under a configuration which I named db (you can choose any name instead of db). Additionally, set up the root password for MySQL by specifying the MYSQL_ROOT_PASSWORD environment variable. The following snippet shows how the docker-compose file should look at this point.

db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: YourWordPressPasswordHere
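If you want, you can bring up just this service first and check its logs to confirm that MySQL starts correctly (docker-compose accepts a service name as an argument).

docker-compose up -d db
docker-compose logs db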

Now, add the WordPress container based on the official wordpress image, linking it to the mysql container by specifying the name of that configuration under links, as shown in the following code snippet.

web:
  image: wordpress
  links:
    - db:mysql

Now, add the expose configuration under the WordPress container configuration (i.e. the web entry) to make port 80 reachable from linked containers, such as a reverse proxy in front of it. Note that if you wanted to publish the port directly on the host instead, you would use the ports configuration with an "OutsidePort:80" mapping (remember to use the double quotes).

web:
  image: wordpress
  links:
    - db:mysql
  expose:
    - "80"

This is enough to have the WordPress Docker container up and running, but there is another interesting thing to do before starting the containers. You can map the wp-content folder to the VM where Docker is running, in order to easily update themes and other WordPress content. To do this, you need to configure the working_dir and map the volumes as shown below.

web:
  image: wordpress
  links:
    - db:mysql
  expose:
    - "80"
  working_dir: /var/www/html
  volumes:
    - "/pathInVM/wp-content:/var/www/html/wp-content"

The final version of the docker-compose.yml file should look similar to the following one:

web:
  image: wordpress
  links:
    - db:mysql
  expose:
    - "80"
  working_dir: /var/www/html
  volumes:
    - "/pathInVM/wp-content:/var/www/html/wp-content"

db:
  image: mysql
  environment:
  MYSQL_ROOT_PASSWORD: YourWordPressPasswordHere

Once you have everything ready, save the changes to the file and execute docker-compose up to start both containers, then try accessing the site, which will be listening on port 80 of the WordPress container. And that's it: you have configured the WordPress Docker container.
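If you prefer to run the containers in the background, use detached mode and then check that both are up:

docker-compose up -d
docker ps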