How should you deploy your software on Amazon Web Services? There are numerous options out there. Many people stick with traditional methods, but the flexibility of cloud services (particularly the rich feature set of AWS, unmatched by rivals) opens up exciting new possibilities.
One of the more compelling options is AMI-based deployment. AMI stands for Amazon Machine Image… AWS’s proprietary virtual image format. This is a great model for a few reasons:
- It is simple & clean. Each AMI represents a distinct release.
- You can keep an easy catalog of past deployments and roll back at any time.
- For AMI (and all snapshot) storage, you pay only for the blocks that differ from the previous snapshot. Thus keeping tens or hundreds of similar AMIs can cost pennies.
- This is critical… boot-up is very fast. When you deploy an AMI into an auto-scaling group, time is of the essence. When your group responds to an increased load, you want new servers to come online as quickly as possible. An involved configuration management process on boot can easily take 10-15 minutes… this is not acceptable when your application is under heavy load.
One key piece of this model is that the AMI needs to be “environment independent”… in other words, any settings that would differ in, say, staging vs. production, need to be set at boot time. You can use user data to do this easily.
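For instance, your boot script might fetch the instance's user data from the metadata endpoint (`http://169.254.169.254/latest/user-data`) and read environment-specific settings from it. A hypothetical user-data payload (the keys and values here are purely illustrative) could look like:

```json
{
  "environment": "staging",
  "db_host": "staging-db.example.internal",
  "log_level": "debug"
}
```

The same AMI booted in production would receive a different payload, with no change to the image itself.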
A challenge with this model, in the past, was how to deploy the AMI to your autoscaling groups once it is built. Netflix’s answer to this is their open-sourced Asgard tool. However, this is not a lightweight approach… you really have to be “all-in” on using Asgard to manage your whole AWS environment.
There is a great lightweight alternative, recently enabled by the release of the CloudFormation UpdatePolicy feature. Let me step back.
CloudFormation lets you template your AWS resources with JSON… so your whole infrastructure is scripted. You can also define parameters in that template, for example to control the size of the instances launched. Take a JSON template, add parameter values, and launch it, and you now have a “stack” of AWS resources.
So here is how this deployment model works:
- Define a CloudFormation stack with all of the AWS components that make up your app, including your autoscaling group. Make sure that the AMI ID that launches your app/web servers in the autoscaling group is a parameter in your template.
- Define the UpdatePolicy attribute in your autoscaling group. We’ll talk about the details later.
- Then, when you are ready to deploy a new version of your app, build the AMI (ideally with an automated pipeline that creates the server and then creates the AMI from it), and update the AMI ID parameter in your CloudFormation stack.
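Making the AMI ID a parameter, per the first step, might look like this fragment (the parameter and resource names are illustrative):

```json
{
  "Parameters": {
    "AppAmiId": {
      "Type": "String",
      "Description": "AMI ID of the current app release"
    }
  },
  "Resources": {
    "AppLaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "ImageId": { "Ref": "AppAmiId" },
        "InstanceType": "m1.small"
      }
    }
  }
}
```

Deploying a new release then amounts to a stack update that changes only the `AppAmiId` parameter value, for example via the CloudFormation console or an `update-stack` API call.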
From there, UpdatePolicy automatically handles rolling your new AMI out. It does this with three parameters: MaxBatchSize, MinInstancesInService, and PauseTime. You can read the details in the AWS docs, but the gist is, it will slowly roll out your new AMI to your autoscaling group, at the pace you define.
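Concretely, the UpdatePolicy attribute sits alongside the autoscaling group's properties. Here is a sketch (resource names and values are illustrative, and it assumes a launch configuration resource named `AppLaunchConfig`) that replaces one instance at a time, keeps at least two in service, and waits five minutes between batches:

```json
{
  "AppServerGroup": {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "UpdatePolicy": {
      "AutoScalingRollingUpdate": {
        "MaxBatchSize": "1",
        "MinInstancesInService": "2",
        "PauseTime": "PT5M"
      }
    },
    "Properties": {
      "LaunchConfigurationName": { "Ref": "AppLaunchConfig" },
      "MinSize": "2",
      "MaxSize": "4",
      "AvailabilityZones": { "Fn::GetAZs": "" }
    }
  }
}
```

Note that `PauseTime` uses an ISO 8601 duration (`PT5M` is five minutes).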
The result is a robust and lightweight deployment pipeline. Pretty slick.