Securing Seq with Nginx and Docker


Recently I was setting up an internet-facing Seq instance for a client and wanted to add SSL to it. If you don't know about Seq, it's a server for structured logging that just works, and it's the single fastest way to get some observability into your applications. Go look at it now.

In this case I was hosting it as a Docker container on a Linux VM on Azure, so my options seemed to be:

  1. Upload a certificate and bind it to an Application Gateway or use Azure Front Door with an Azure issued cert.
  2. Use Nginx.

I decided that if I could do something with Nginx I could make the solution a bit more portable, so my next question was how best to add a certificate to it.

I could have built my own image with a certificate baked in, but again I wanted simplicity and portability, so I decided to look into Let's Encrypt.

Enter Let’s Encrypt

Let’s Encrypt is a service aimed at making the web more secure by issuing free certificates to everybody.

It works by having a small script on the target machine that registers itself by its web-accessible hostname and is issued a certificate with a 90 day expiry. By running that script regularly from a cron job, certificates get renewed before they expire and everything is happy.
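
With the standard certbot client, for example, that conventional setup might look something like this (just a sketch of the approach I was trying to avoid, assuming certbot is installed on the host; the path to the binary varies by distro):

    # crontab entry: attempt renewal twice a day; certbot only renews
    # certificates that are actually close to expiry
    0 3,15 * * * /usr/bin/certbot renew --quiet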

Problem is again, I want simple. I don’t want to register a cron job on the server.

Thankfully, there’s a really nice solution out there for Docker.

The nginx-proxy project on GitHub publishes two Docker containers.

One is the Nginx proxy container itself, which runs Nginx and automatically generates reverse proxy configuration for other containers based on a simple convention.

The second is the Let's Encrypt companion container, which handles acquiring Let's Encrypt certificates and binding them to the Nginx proxy.
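
The convention works roughly like this (a sketch based on the nginx-proxy README; the jwilder/* image names were the ones published at the time and have since moved under the nginxproxy organisation): the proxy watches the Docker socket, and any other container started with a VIRTUAL_HOST environment variable automatically gets a reverse proxy entry generated for it. The little jwilder/whoami test image stands in for a real app here.

    # Run the proxy with read-only access to the Docker socket so it can
    # watch containers start and stop.
    docker run -d -p 80:80 -p 443:443 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # Any container started with a VIRTUAL_HOST variable is picked up and
    # proxied automatically.
    docker run -d -e VIRTUAL_HOST=whoami.domain.com jwilder/whoami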

The readme page for the Let's Encrypt companion container has a step-by-step guide to creating the relevant containers, but as I wanted a one-step solution I could repeat, I brought all the components together in a single Docker Compose file.

The fun part of that file is here:

    seq:
      image: datalust/seq:latest
      restart: unless-stopped
      container_name: seq
      ports:
        - 5341:80
      expose:
        - 80
      network_mode: bridge
      environment:
        ACCEPT_EULA: Y
        BASE_URI: https://seq.domain.com/
        SEQ_CACHE_SYSTEMRAMTARGET: 0.8
        VIRTUAL_HOST: seq.domain.com
        LETSENCRYPT_HOST: seq.domain.com
        LETSENCRYPT_EMAIL: damian.maclennan@domain.com
      volumes:
        - /seqdata:/data
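
Alongside the seq service above, the same Compose file also needs the proxy and the companion themselves. The Gist linked below has the exact file I ended up with; as a rough sketch, those two services look something like this (again using the image names the projects published at the time — jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion — which have since been renamed, so check the current README for names and options):

    nginx-proxy:
      image: jwilder/nginx-proxy
      container_name: nginx-proxy
      restart: unless-stopped
      network_mode: bridge
      ports:
        - 80:80
        - 443:443
      labels:
        # lets the companion identify which container is the proxy
        - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
      volumes:
        # shared volumes for certificates, per-host config and ACME challenge files
        - /etc/nginx/certs
        - /etc/nginx/vhost.d
        - /usr/share/nginx/html
        # read-only access to the Docker socket so the proxy can watch containers
        - /var/run/docker.sock:/tmp/docker.sock:ro

    letsencrypt-companion:
      image: jrcs/letsencrypt-nginx-proxy-companion
      container_name: letsencrypt-companion
      restart: unless-stopped
      network_mode: bridge
      volumes_from:
        - nginx-proxy
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro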

This solution requires only a few changes for any environment:

  1. The volume specified for persistent Seq data. This will allow you to create, destroy, and upgrade containers without losing your logs and configuration. In my case this is /seqdata.
  2. An email address for Let's Encrypt to notify in the event of any issues with renewal.
  3. The hostname for your service. This is the URL you'd like a certificate issued for, which must also be reachable from the internet for the Let's Encrypt ACME server to check before issuing a certificate.
  4. The BASE_URI, which Seq needs so it knows how it should be reached if you're doing any OpenID Connect authentication (such as Azure AD).

The final result is in this Gist. It's fairly generic and should be simple to adapt for your domain.
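
Once DNS for the hostname points at the VM and ports 80 and 443 are reachable from the internet, bringing the whole stack up (assuming Docker and Docker Compose are installed) is just:

    docker-compose up -d

The companion should then request the certificate and reload the proxy once it's issued.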

If you want a simple and reusable solution for HTTPS on a Linux VM hosted Seq instance, this should get you up and running. I've now dropped this into a few places where I've needed to stand up logging quickly.