Updated (again) for Nov-Dec 2019 announcements
Since we originally published this popular article in 2018, we have tried to keep it updated as announcements have changed the landscape and conclusions a bit. Below you can see the update we made a year ago, in January 2019. More recently, in November & December 2019, there have been another series of announcements that affect this analysis:
- Compute Savings Plans, which apply equally to EC2 AND Fargate. (Check out this deeper dive from our fellow traveler in AWS obsession Corey Quinn.) Our analysis previously relied heavily on the fact that EC2 became a lot more compelling when you applied reserved instances, which you couldn’t do with Fargate. Now that part of the analysis is irrelevant… with Compute Savings Plans you just commit to a total compute spend and save equally on Fargate or EC2. So we’ve taken that part of the analysis out and just compare on-demand pricing. There are also “EC2 Savings Plans,” which give you a deeper discount on specific instance families, but given the huge increase in simplicity we expect most people to use Compute Savings Plans in the future.
- ECS Cluster Autoscaling: Savings Plans have made Fargate far more competitive, but on the flip side, ECS Cluster Autoscaling has given EC2 a small edge back. A key piece of our analysis is the idea of cluster reservation rate… how much of the cluster’s host CPU & RAM is reserved by containers. Before ECS Cluster Autoscaling was introduced, stitching together Lambda functions and alarms to scale your cluster up & down was fairly painful, and as a result it was often quite hard to get consistently high reservation rates. Now it is much more realistic for all ECS users to run cluster autoscaling with little effort and maintain high reservation rates.
Of course, there are even more factors… Fargate Spot, EKS Cluster Autoscaler, ECS Spot Auto-draining, new instance types… but we’ll save those for another day!
Updated for the Jan 2019 Fargate price reduction
On Jan 7, 2019 AWS released a major price reduction for Fargate, reducing prices 35-50%. This is great news for a service whose relatively high cost was one of its only downsides. If you have a legacy app that isn’t feasible to rearchitect into serverless, there are very few good excuses for not moving it to Fargate.
So of course, we’ve updated the below analysis to reflect these new prices (as well as recent changes to EC2 instance types and pricing, to keep a fair fight). Here was our conclusion with the old pricing:
Perhaps most important are the upper and lower bounds. On the lower end, it is unlikely that you will find material savings on infrastructure cost alone when switching to Fargate: break-even does not happen until under a 30-50% reservation rate in most cases. On the upper end, if your cluster is fully utilized, Fargate will at least double your current compute costs, perhaps triple them if you have a very high container reservation rate and reserve all of your EC2 instances.
The story is dramatically improved with these new price reductions… price savings with Fargate are now a very realistic possibility! Break-even between Fargate & EC2 now happens in the 60-80% reservation rate range, so if your cluster is only 50% utilized you might see a 10-20% cost reduction with Fargate! At the high end, Fargate will increase your costs by 50-100% for a very tightly packed cluster with heavy EC2 Reserved Instance usage. The case for Fargate is much harder to ignore now: keeping a reservation rate above 60-80% is challenging in an environment with dynamic load, and even if you can accomplish it, does the management overhead warrant it?
In a recent post I discussed how AWS’s newer container management service, Fargate, while a compelling alternative to container cluster management, doesn’t deliver on most of the benefits that are the hallmark of “Serverless”.
That said, as much as we are serverless cheerleaders at Trek10, we recognize that not all workloads can immediately move to this paradigm.
If you are in this situation and looking at containers, you may be weighing Fargate against other container management options on AWS like ECS, EKS, or a DIY cluster. Of course, Fargate isn’t for everyone: you may have very specific requirements that demand host-level customization. Or maybe you are at a scale where compute costs dwarf the TCO of cluster management. But for the vast majority of companies the lower management overhead of Fargate can be compelling; however, it needs to be carefully weighed against the added cost of Fargate relative to EC2.
In this post we’ll try to provide some context to that pricing comparison. In other words, what will Fargate cost you, and will that (likely extra) cost be worth doing away with cluster management?
Each time a container is deployed on the cluster, the cluster manager is reserving the specified RAM & CPU for that container. Reservation rate is the sum of the reserved RAM or CPU of deployed containers divided by total available in the cluster.
If you have 24 vCPUs in your cluster and Docker containers have 16 of those reserved, your CPU reservation rate is 67%. (Soft limits for RAM make this analysis more complicated so we will ignore that for now, feel free to try to factor it in if your analysis doesn't show a clear winner.)
Like we did in our popular Lambda pricing post, we'll try to put this pricing comparison in the context of some real-world scenarios. Consider an ECS cluster of 10 m5a.xlarge instances, which gives you 40 total vCPU and 160 GB of RAM. If you have placed 30 containers on your cluster, each with 1 vCPU and 4 GB of RAM, your CPU and RAM reservation rates are both 75%. In the case of Fargate, your cost is calculated from those 30 vCPU and 120 GB of RAM of deployed containers. If you place 5 more containers, in Fargate your cost will go up by 1/6th, while in that EC2 cluster your costs stay flat and your reservation rate goes to 87.5%.
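That arithmetic can be sketched in a few lines of Python. The Fargate and m5a.xlarge rates below are us-east-1 on-demand prices as of this writing; check the current pricing pages before relying on them:

```python
# Illustrative cost math for the 10x m5a.xlarge scenario (us-east-1
# on-demand prices at the time of writing -- verify against current pricing).
M5A_XLARGE_HOURLY = 0.172      # $/hr per instance, 4 vCPU / 16 GB each
FARGATE_VCPU_HOURLY = 0.04048  # $/vCPU-hr (post Jan 2019 reduction)
FARGATE_GB_HOURLY = 0.004445   # $/GB-hr

def fargate_cost(vcpus, gb_ram):
    # Fargate bills on the reserved vCPU & RAM of running containers.
    return vcpus * FARGATE_VCPU_HOURLY + gb_ram * FARGATE_GB_HOURLY

cluster_vcpu, cluster_ram = 10 * 4, 10 * 16        # 40 vCPU, 160 GB total
containers = 30                                    # 1 vCPU / 4 GB each
cpu_rate = containers * 1 / cluster_vcpu           # 0.75
ram_rate = containers * 4 / cluster_ram            # 0.75

ec2 = 10 * M5A_XLARGE_HOURLY                       # $1.72/hr, flat
fg = fargate_cost(containers * 1, containers * 4)  # ~$1.75/hr
# At 75% reservation, Fargate is roughly at parity with on-demand EC2;
# 5 more containers raise the Fargate bill by 1/6th while EC2 stays flat.
print(f"EC2 ${ec2:.2f}/hr  Fargate ${fg:.2f}/hr  ({fg / ec2 - 1:+.1%})")
```

Note how the two cost curves differ in shape: the EC2 cluster is a step function of instance count, while Fargate is linear in container reservations.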
Remember that we're talking about average reservation rate: the dynamic nature of reservation rate matters. This line of thought harkens back to the classic analysis of EC2 autoscaling vs on-premise. Because cluster auto-scaling can never perfectly match container auto-scaling, you will always be paying for some extra EC2 as you try to keep your cluster near, but just above, your provisioned container capacity. This pushes average reservation rate down further in highly dynamic environments.
Using us-east-1 pricing and ignoring ELBs & storage, this chart shows the percent cost of Fargate below or (usually) above the cost of the EC2 cluster for the m5a.xlarge scenario described above, given various CPU & RAM reservation rates along the X & Y axes.
As you can see, around the 70-80% reservation rate, Fargate costs are about even to EC2. At the high end of 90-100% reservation, Fargate will start to cost about 35% more.
This point is worth re-emphasizing: in the above comparison, EC2 will cost more unless you can keep your reservation rate above 70-80%, and if your ECS cluster is perfectly packed, Fargate will cost you about 35% more.
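Under the same assumed us-east-1 on-demand prices, you can derive that break-even point directly: the EC2 cluster's hourly cost is fixed, while Fargate's scales linearly with reservation rate. A minimal sketch:

```python
# Break-even reservation rate: where Fargate's linear cost curve crosses
# the fixed cost of the 10x m5a.xlarge cluster (illustrative prices).
FARGATE_VCPU_HOURLY = 0.04048
FARGATE_GB_HOURLY = 0.004445
EC2_HOURLY = 10 * 0.172                          # $1.72/hr for the cluster

# Fargate cost if 100% of the cluster's 40 vCPU / 160 GB were reserved.
fargate_full = 40 * FARGATE_VCPU_HOURLY + 160 * FARGATE_GB_HOURLY
break_even = EC2_HOURLY / fargate_full           # ~0.74
premium_at_full = fargate_full / EC2_HOURLY - 1  # ~+35%
print(f"break-even at {break_even:.0%} reservation; "
      f"Fargate {premium_at_full:+.0%} at 100% reservation")
```

The ~74% break-even and ~35% premium at full packing match the ranges quoted above for this on-demand scenario.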
How do EC2 reserved instances change this? Here is how it looks if 100% of instances are reserved:
Here you can see that the lower end is now about a 30% savings with Fargate, and the upper end is about 120% increase (a little more than double the cost).
Because companies rarely can forecast accurately enough to reserve every instance, looking at the numbers with 50% of EC2 instances reserved may be the most realistic:
The lower bound is perhaps a 50% savings with Fargate, and the upper bound about a 70% cost increase. The break-even point is about a 50-70% reservation rate... if you can't consistently keep your ECS cluster above 70% reserved, you are probably going to save money with Fargate.
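To see how Reserved Instances move the break-even point, you can fold an RI discount into the EC2 side of the same arithmetic. The 35% discount below is an illustrative assumption (actual discounts vary by instance family, term, and payment option), and the rates are the same us-east-1 on-demand prices assumed above:

```python
# How Reserved Instances shift the break-even reservation rate.
# The 35% RI discount is an illustrative assumption; real discounts
# vary by instance family, term, and payment option.
FARGATE_FULL = 40 * 0.04048 + 160 * 0.004445  # Fargate $/hr at 100% reservation
ON_DEMAND = 10 * 0.172                        # 10x m5a.xlarge, $/hr
RI_DISCOUNT = 0.35                            # assumed

for reserved_frac in (0.0, 0.5, 1.0):
    ec2 = ON_DEMAND * (1 - reserved_frac * RI_DISCOUNT)
    print(f"{reserved_frac:.0%} of instances reserved -> "
          f"break-even at {ec2 / FARGATE_FULL:.0%} reservation rate")
```

With half the cluster reserved, break-even lands around 60% under this assumed discount, in line with the 50-70% range above.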
For the c5 instance family, the trends are very similar but the break-even points slightly different. This chart shows a cluster of c5.2xlarge instances compared to Fargate, again with 50% of EC2 instances reserved.
So What Can You Take Away From This?
Perhaps most important are the upper and lower bounds. On the lower end, if you can't keep your ECS cluster reservation rate above 50%, you will almost certainly save money moving to Fargate; and that break-even rate can be as high as an 80% container reservation rate if you don't use Reserved Instances.
On the upper end, if your cluster is highly utilized, Fargate will cost between about 35% and 120% more, only hitting the high end if you have a near-100% container reservation rate and reserve all of your EC2 instances. A likely cost increase for a well-managed ECS cluster might be about 60%.
We can tell you from experience that you should not underestimate cluster management effort. While container auto-scaling is straightforward, AWS does not give you any magic button for scaling clusters up & down effectively. Challenges like detecting out-of-capacity clusters and elegantly packing containers during scale-down require custom coding. Many a Lambda function has been written to glue together a smoothly scaling cluster.
Avoiding this additional complexity is one of the most compelling aspects of Fargate. Hopefully with this pricing analysis in hand, you can now weigh those intangible and personnel costs against the hard infrastructure cost and make the decision for your environment.