Deploying a Ghost blog with Packer, Terraform, AWS, Nginx and Docker

I've been using Ghost for a while now. I'd always liked its simplicity, the markdown editor and the fact that I could run it myself on an EC2 instance in AWS, behind Nginx. This proved to be a great learning experience, albeit not without its ups and downs. In the end, mostly downs.

This was not due to the tech though. I experimented with the config a lot, and the instance I created became the archetypal "snowflake" server. I was left stuck on an abandoned version of Ghost that would not start, and an Nginx config full of things I no longer fully grasped. This should have been a lesson in over-engineering; instead I chose to make it a lesson in how I could use modern infrastructure tools to build a setup that's repeatable, updatable and resistant to config drift.

I set out with two main goals: repeatability and automation. If something went wrong, I should be able to check out a working version of the config and deploy a new environment, complete with domain config and an uploaded backup, with a minimum of bash wrangling. There should be as few external artefacts as possible that I need to keep and can't put in a repo: SSH keys, environment variables and so on. Obviously, the cost of the whole thing should be minimal too.

The stack starts with Packer. The files and software the app needs are baked here into an AMI. Thanks to Docker, there are only a few things we need: Docker itself and docker-compose, as well as the config files they use. Packer builds an image based on a standard Ubuntu image, then uploads it to AWS. From this step we get a new AMI ID, the image we will actually run; Terraform needs that ID. This step is not yet automated, but it could be fairly easily, as both tools handle external inputs and outputs well.
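To give a feel for this step, here's a minimal sketch of what such a Packer template can look like in today's HCL format (the region, AMI filter, file names and install script are placeholders, not the exact files from this project):

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0"
    }
  }
}

# Unique suffix so every run produces a freshly named AMI
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "ghost" {
  region        = "eu-west-1"   # placeholder region
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "ghost-blog-${local.timestamp}"

  # Start from the latest official Ubuntu image published by Canonical
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.ghost"]

  # Ship the compose file (and any other config) onto the image
  provisioner "file" {
    source      = "docker-compose.yml"
    destination = "/home/ubuntu/docker-compose.yml"
  }

  # Install Docker and docker-compose
  provisioner "shell" {
    script = "scripts/install-docker.sh"
  }
}
```

Running `packer build` on a template like this prints the ID of the freshly created AMI, which is what gets handed to Terraform.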

Terraform does a lot of the heavy lifting: it sets up the Route53 records for the domain, the security groups in EC2, the instance and all the wiring in between. It keeps hold of the state of our infrastructure, so re-running it will undo any manual config "tweaks", and we can easily see a plan of the changes to be made. If we change the image with Packer and get a new AMI, only the instance will be replaced; everything else stays intact. Replacing the instance wipes its disk, though, which is why I have a little backup script for Ghost that should be run beforehand.
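For illustration, a trimmed-down sketch of that Terraform config might look roughly like this (resource names, region, domain and the zone variable are placeholders; the real config has more wiring):

```hcl
variable "ami_id" {}   # the AMI ID produced by Packer
variable "zone_id" {}  # the Route53 hosted zone for the domain

provider "aws" {
  region = "eu-west-1"
}

# Allow HTTP/HTTPS in, everything out
resource "aws_security_group" "blog" {
  name = "ghost-blog"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The instance built from the Packer image; a new AMI ID replaces it
resource "aws_instance" "blog" {
  ami                    = var.ami_id
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.blog.id]
}

# Point the domain at the instance
resource "aws_route53_record" "blog" {
  zone_id = var.zone_id
  name    = "example.com"
  type    = "A"
  ttl     = 300
  records = [aws_instance.blog.public_ip]
}
```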

The backup script takes advantage of the fact that you can mount the content volume of the Ghost Docker image. It uses tar to pack the directory and puts the archive into an S3 bucket with a timestamp. The upload script fetches the latest backup and un-tars it on the instance. For the moment this runs locally; it requires access to both the AWS account and the instance. Ideally this would be done by CI, using AWS IAM roles.
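The scripts themselves are only a few lines of shell. A rough sketch of the idea (bucket name, host and paths are placeholders):

```bash
#!/usr/bin/env bash
# Sketch of the backup/restore flow; bucket, host and paths are placeholders.
set -euo pipefail

BUCKET="s3://my-ghost-backups"
HOST="ubuntu@blog.example.com"
STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="ghost-content-${STAMP}.tar.gz"

# Backup: pack the mounted Ghost content directory on the instance,
# pull it down and push it to S3 under a timestamped key.
ssh "${HOST}" "tar czf /tmp/${ARCHIVE} -C /home/ubuntu/ghost content"
scp "${HOST}:/tmp/${ARCHIVE}" .
aws s3 cp "${ARCHIVE}" "${BUCKET}/${ARCHIVE}"

# Restore: find the newest archive, copy it up and unpack it in place.
LATEST=$(aws s3 ls "${BUCKET}/" | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "${BUCKET}/${LATEST}" .
scp "${LATEST}" "${HOST}:/tmp/"
ssh "${HOST}" "tar xzf /tmp/${LATEST} -C /home/ubuntu/ghost"
```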

Now that we have all the pieces in place, we can make them dance. docker-compose is launched from the home directory and builds a container from docker-nginx-certbot. This image does some heavy lifting for us: it starts Nginx and obtains a certificate from Let's Encrypt. The only downside is that it does not support dry runs, so when debugging it's easy to hit Let's Encrypt's rate limits. docker-compose also spins up our main actor: the official Ghost image. That part is pretty straightforward; the container needs to mount the correct volume but otherwise hides all the gnarly bits (like the DB) inside.
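To make that concrete, a compose file for this kind of setup can look roughly like the sketch below (image names, tags, mount points, domain and email are assumptions; check the docker-nginx-certbot docs for the exact variables your version expects):

```yaml
version: "3"

services:
  nginx:
    # Nginx with automatic Let's Encrypt certificates
    image: staticfloat/nginx-certbot   # assumed image name; use whichever docker-nginx-certbot build you prefer
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    environment:
      CERTBOT_EMAIL: admin@example.com        # Let's Encrypt account email
    volumes:
      - ./nginx/conf.d:/etc/nginx/user.conf.d # site config proxying to the ghost service
      - letsencrypt:/etc/letsencrypt          # keep certs between restarts (and rate limits at bay)

  ghost:
    image: ghost:alpine                       # the official Ghost image
    restart: unless-stopped
    environment:
      url: https://example.com
    volumes:
      - ./content:/var/lib/ghost/content      # the content directory the backup script archives

volumes:
  letsencrypt:
```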

And that's it (for now!). As you probably noticed, the automation part of this project is still missing, as everything needs to be run locally. The setup also needs some tweaks, especially moving some key pieces into environment variables. Overall, though, I think it shows how even small projects can benefit from tools like Docker, Packer or Terraform, and how declarative, modular infrastructure can be assembled at a small, low-cost scale.

Check out the code on GitHub.