Nginx for serving multiple sites in docker

As part of the ideas around my docker VM, I planned to have multiple sites deployed on it. For example, one of the sites is this blog and another one is my personal site. In order to serve two different sites, both running on port 80 in the same VM, you need something in front of them that distributes the traffic accordingly. That's where nginx enters the game.

[Diagram: nginx serving static files and routing requests to WordPress]

Nginx ("engine x") is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP proxy server. It's really easy to configure and use with docker.

First, you need to create the docker-compose.yml file in order to create and configure the docker container easily. In that file, add the following to have the server running and serving files from the path specified under volumes.

web:
  restart: always
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
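With the file saved, `docker-compose up -d` starts the container. As a quick sanity check that doesn't need docker at all, the following shell sketch recreates the file from this section and confirms the volume mapping is in place (the /path/in/vm/www value is still the placeholder from the post, so adjust it for your VM):

```shell
# Recreate the docker-compose.yml from this section and verify the
# volume mapping line is present before starting anything.
cat > docker-compose.yml <<'EOF'
web:
  restart: always
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
EOF

grep -q '/usr/share/nginx/html' docker-compose.yml && echo "compose file ready"
# Then start the container (requires docker on the VM):
#   docker-compose up -d
```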

Although that would be enough for most cases, the idea here is to use a custom configuration that enables a reverse proxy to the WordPress site we already have. In order to do that, you will need to link both containers. As this container and the WordPress one are defined in different compose files, you need to use the external_links configuration instead of links.
In the terminal, run docker ps to find out the name of your WordPress container (e.g. wordpress_web_1), then add the following snippet to your docker-compose.yml file.

  external_links:
    - wordpress_web_1:bloghost

Additionally, configure the container to use a custom image instead of the default one. In the custom image, you will replace the default configuration with a custom file. To do that, replace the image setting with build, using the path to the custom image's Dockerfile as its value (e.g. ./conf/).

You will end up with the docker-compose.yml file looking similar to the following one.

web:
  restart: always
  build: ./conf/
  ports:
    - "80:80"
  volumes:
    - /path/in/vm/www:/usr/share/nginx/html
  external_links:
    - wordpress_web_1:bloghost

Now, create the new Dockerfile inside the path you specified in the docker-compose.yml file. This file is really simple: it just declares the image that this new image is based on and then copies the nginx.conf file into it.

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

Create a new nginx.conf file in the same folder where the Dockerfile is located. That file should contain the configuration needed to route requests to your different sites. In the following example, you can see that I'm using the domain name to route the requests. You can find a full version of this configuration here.

The following snippet shows how to set up nginx to serve the static files. Note that it uses the server_name directive to route only the traffic coming from the specified domains (e.g. nbellocam.me and www.nbellocam.me). Additionally, the files being served are located in /usr/share/nginx/html, which is the path mapped to the VM in the docker-compose.yml file.

#...

http {

    #...

    server {
        listen       80;
        server_name  nbellocam.me www.nbellocam.me;

        root   /usr/share/nginx/html;
        index  index.html index.htm;

        error_page  404              /404.html;
    }

    #...
}
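The server block shown here lives inside a full nginx.conf, and the #... markers elide the rest. For reference, a minimal complete file that the snippets in this post slot into looks roughly like this (a sketch: the worker and mime settings below are common defaults, not taken from my actual configuration):

```nginx
user  nginx;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile      on;

    # the server and upstream blocks from this post go here
}
```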

The best part comes now: routing the traffic to the WordPress site. To do this, first configure an upstream named wordpress that maps to the bloghost external link defined before in the docker-compose.yml file. By doing this, you don't need to know the internal IP that the docker containers have. Then, configure a new server block whose server_name targets the new domain, in this case blog.nbellocam.me. In its root location block, set proxy_pass to the scheme plus the upstream name specified before. Additionally, several proxy_set_header directives make the proxying transparent, so the site works under the public domain even though the real URL is a private IP inside the docker VM.

#...

http {

    #...

    upstream wordpress {
      server bloghost:80;
    }

    #...

    server {
        listen 80;

        server_name blog.nbellocam.me;

        location / {
            proxy_pass http://wordpress/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
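Adding more sites later follows the same pattern: a new external link in docker-compose.yml, a new upstream, and a new server block. As a sketch, with purely hypothetical names (otherhost, other.nbellocam.me):

```nginx
upstream othersite {
    server otherhost:80;
}

server {
    listen 80;
    server_name other.nbellocam.me;

    location / {
        proxy_pass http://othersite/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

After any change like this, nginx needs a reload (or the container a restart) to pick up the new configuration.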

  • A.D

    Can you share a bit about how this is working out for you, in terms of updates, latency, VM load, or anything else? Thanks!

    • Ok, so first I need to clarify that I'm a developer rather than a devops guy, so my comments might not be the most useful in this case.

      I'm currently using a D1 VM instance in Azure and, basically, the CPU is never above 5% usage. Regarding RAM, it's using around 54%, which is near 1.8 GB.

      Currently, I have 2 blogs running (one of them is not publicly available) with 2 MySQL containers and the nginx server. I also ended up adding Cloudflare, just to test it.

      So, it's working fine and without issues. However, for production sites I would try to use the Azure Container Service with Swarm or Mesos; Kubernetes seems to be the other alternative. Moreover, there are alternatives that make it easier to expose new containers. With the approach I used, whenever I want to expose something new, I need to modify the nginx configuration and restart it (which is fast, but not something you want to do manually if you are providing a service, as every manual step can fail).

      I hope this is useful. Sorry for the delay

      • A.D

        Hey, thanks for the reply. How long have you been running this? What about the devops perspective? Is it a managed instance?

        • It's been running for about 6 months. I didn't perform many updates, so from the devops perspective there is not much to say. I have backups, updates and the letsencrypt certificate renewal automated. You can see more information about this in the other posts. Honestly, as a developer, I prefer using something like Azure Websites and forgetting about everything else :). However, this is really useful when you need more customization.

  • Mike Barlow

    I too would be interested to hear how it’s working out for you in terms of performance. We’re looking at docker at my work place and handling our setup plus the multiple clients on each server makes something like this a possible idea for our structure.

    • I just answered the other question: http://disq.us/p/1av9hwv
      Try to look for something less manual, as it might end up being really frustrating.

      This is a great solution for the problem. However, I think that nowadays there are alternatives that might be a bit more complex to implement at first but end up making your life easier in the future, especially if the services are always changing.