Immutable Deploys with Containers
The safest, most consistent way to deploy an application. Why it matters and how Docker is a big step forward in working immutably.
At Trek10, we’re big believers in immutable infrastructure and immutable deploys. We think this is one of the most powerful, and often overlooked, aspects of the modern cloud infrastructure. Never heard of it? Here’s the story:
Configuration Management: Not the whole answer
In the past few years, the rise of configuration management solutions like Chef, Puppet, Ansible, and Salt has made code-defined configuration much more common. This is a powerful concept: you can define a configuration state for installed software and other dependencies as code, in a (usually) highly portable, cross-platform way. This makes configuring many servers far more efficient and the configuration process far more repeatable; gone are the days of one-off, ad-hoc configuration by a sysadmin.
However, these tools are often used in a way that limits their power. This is where the term “immutable” comes in. These tools can be used to apply some code-defined change to an existing state. That’s nice, but what is the existing state? If you cannot guarantee some existing state, how can you be sure that your change will have the desired effect?
Immutable configuration management is more strict: Some code defines a configuration change, but it is always applied to an initial system with some base, more or less “empty” state. Usually that is a pre-built image like a given release of CentOS or Ubuntu.
So the basic philosophy is: never change existing state. When you make any change, you make it in the code, run that code on some base "empty" system to build a new immutable system from scratch, and then replace the old with the new. Not only does this guarantee a much more consistent state, it also forces you to automate everything.
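As a minimal sketch of the replace-don't-mutate idea (the paths and version contents here are hypothetical), each deploy can build a brand-new release directory from scratch and then repoint a `current` symlink at it, rather than editing files in place:

```shell
# Build the new state from scratch in a fresh directory;
# nothing in an existing release is ever modified in place.
set -eu
release="releases/$(date +%s)"
mkdir -p "$release"
echo "app v2" > "$release/app.txt"   # stand-in for a real build step

# Repoint "current" at the new state; the old release directory
# can then simply be deleted -- it was never mutated.
ln -sfn "$PWD/$release" current
cat current/app.txt   # -> app v2
```

Rolling back is just repointing `current` at the previous release, which is only possible because old releases are never touched.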
Immutable Deploys: Replace it all, all the time
So if your OS configuration is going to be defined from an immutable state, what about your application? Well, the same principles apply. If you deploy your application by applying some changes to an existing state (e.g., copying a build into a folder, overwriting the previous build), plenty can go wrong. Dependencies might not match, all the changes might not get applied, or some assumption about existing state might be wrong. Everyone in IT has been in this situation: A production deploy happens, something is broken, and everyone says "but it worked on staging!".
So the best practice is to not only deploy your OS configuration, but also your application, with the same immutable model. Basically, any time you have a change to deploy, whether it is to the application, the OS, or dependencies, build a completely new immutable state, then test that new state in a staging environment, then deploy it.
Moving Beyond Baking AMIs
In the past few years, the most common way to implement immutable configuration management within AWS was "baking the AMI". Netflix made this popular and many others (including Trek10) adopted it. The basic idea is to use or build tooling that launches a new AWS server in some "empty state", uses configuration management tools to configure the server and deploy the application, then creates an Amazon Machine Image (a VM image) of the final state. This AMI then represents a given deploy, and it can be shipped to a staging environment and then to production.
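As a rough sketch of that bake cycle (the AMI and instance IDs below are placeholders, and real pipelines typically wrap these steps in a purpose-built tool such as Packer), the AWS CLI steps look something like:

```shell
# 1. Launch a fresh instance from a known "empty" base image
aws ec2 run-instances --image-id ami-base1234 --instance-type t3.small

# 2. Configure the instance and deploy the app onto it
#    (your Chef/Puppet/Ansible run against the new instance goes here)

# 3. Snapshot the final state as a new AMI -- this is the deploy artifact
aws ec2 create-image --instance-id i-0abc1234 --name "myapp-build-42"

# 4. Wait for the AMI to be ready, then launch staging/production from it
aws ec2 wait image-available --image-ids ami-new5678
```

Every one of these steps involves a full VM, which is exactly where the slowness described below comes from.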
This was a big improvement over the old ways of doing things, and it meets the definition of immutable deploys described above. But it has significant flaws: it requires non-trivial tooling, and it can be a slow process. Rebuilding a full VM from scratch and shipping it around, with all of its GBs of data, is a pretty inefficient model when you've just deployed a small bug fix to your app.
This is where Docker, in our opinion, has really become a compelling solution. It solves a lot of the above problems and makes immutable infrastructure and immutable deploys a much more straightforward proposition. A Docker container is an immutable representation of a set of configuration dependencies and the application, just as described above. But because containers share the host's kernel, the image does not need to carry a full OS, which makes it much faster to build and more lightweight to move around. This gives you a lot of benefits:
- Built-in configuration management: If your requirements are not overly complicated, you can just use the Dockerfile to define your dependencies, eliminating the need for an additional configuration management tool. And once you compare a Dockerfile to a Chef cookbook, you’ll appreciate the improvement in simplicity.
- Lightweight builds: Building a Docker image is usually much faster than launching a VM, configuring it, and creating an AMI.
- Easy to move: Docker Hub and private Docker registries make it easy to store and deploy Docker images. And because an image is usually MBs, not GBs, it is much quicker to ship around or to download and launch.
- Useful for local dev: Unlike baking an AMI, which can only be done in the AWS cloud, Docker is meant to be easily used for local development. So now you don’t just have the same code-defined immutable infrastructure in staging and production, but also in development.
- Portable: Docker is highly cross-platform and cross-infrastructure, meaning that you are a lot less locked in to a certain infrastructure provider or flavor of Linux.
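To make the Dockerfile comparison concrete, here is a minimal sketch (the app name and files are hypothetical): a handful of lines that pin the base "empty" state, the dependencies, and the application itself.

```dockerfile
# Start from a pinned, known base state -- never from an existing server
FROM ubuntu:14.04

# Install dependencies as part of the image definition
RUN apt-get update && apt-get install -y python

# Bake the application itself into the image
COPY app.py /opt/app/app.py

CMD ["python", "/opt/app/app.py"]
```

Every change, whether to the app or to a dependency, means rebuilding from scratch with something like `docker build -t myapp:42 .`, pushing the result to a registry with `docker push`, and running that exact same image everywhere with `docker run` -- one immutable artifact across dev, staging, and production.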
If you want consistently defined infrastructure and servers, think immutable. And if you want a lightweight way to accomplish this, think Docker.
Already using Docker yourself or thinking about giving it a spin? We’ve implemented Docker & AWS infrastructure for companies everywhere and would love to help you out! Shoot us an email at firstname.lastname@example.org.