Andy Warzon

Fargate Pricing in Context

In a recent post I discussed how AWS’s newer container management service, Fargate, while a compelling alternative to container cluster management, doesn’t deliver on most of the benefits that are the hallmark of “Serverless”.

That said, as much as we are serverless cheerleaders at Trek10, we recognize that not all workloads can immediately move to this paradigm. Check out one of our recent Think FaaS podcasts, where we dive deeper into some of these cases.

If you are in this situation and looking at containers, you may be weighing Fargate against other container management options on AWS like EC2-backed ECS, EKS, or a DIY cluster. Of course, Fargate isn’t for everyone: you may have very specific requirements that force you into host-level customization, or you may be at a scale where compute costs dwarf the TCO of cluster management. But for the vast majority of companies, the lower management overhead of Fargate can be compelling; it just needs to be weighed carefully against Fargate’s added cost relative to EC2.

In this post we’ll try to provide some context for that pricing comparison. In other words, what will Fargate cost you, and will that (likely higher) cost be worth it to do away with cluster management?

Reservation Rate

Fargate costs more per GB of RAM and per vCPU; however, costs are metered directly off of provisioned container RAM & CPU (each dimension is metered independently), and you never pay for unused cluster capacity. The key variable in comparing Fargate pricing to EC2, therefore, is the cluster reservation rate.

Each time a container is deployed on the cluster, the cluster manager reserves the specified RAM & CPU for that container. The reservation rate is the sum of the reserved RAM or CPU of all deployed containers divided by the total available in the cluster.

If you have 24 vCPUs in your cluster and Docker containers have 16 of those reserved, your CPU reservation rate is 67%, as the sketch below shows. (Soft limits for RAM make this analysis more complicated, so we will ignore them for now; feel free to factor them in if your analysis doesn’t show a clear winner.)
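To make that arithmetic concrete, here is a minimal Python sketch of the calculation (the function is ours, purely for illustration, not part of any AWS SDK):

```python
def reservation_rate(reserved, total):
    """Fraction of cluster capacity reserved by deployed containers."""
    return reserved / total

# The example above: a 24-vCPU cluster with 16 vCPUs reserved by containers
cpu_rate = reservation_rate(reserved=16, total=24)
print(f"CPU reservation rate: {cpu_rate:.0%}")  # -> 67%
```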

Like we did in our popular Lambda pricing post, we’ll try to put this pricing comparison in the context of some real-world scenarios. Consider an ECS cluster of 10 m5.xlarge instances, which gives you 40 total vCPUs and 160 GB of RAM. If you have placed 30 containers on your cluster, each with 1 vCPU and 4 GB of RAM, your CPU and RAM reservation rates are both 75%. In the case of Fargate, your cost is calculated from those 30 vCPUs and 120 GB of RAM of deployed containers. If you place 5 more containers, in Fargate your cost will go up by one-sixth, while in that EC2 cluster your costs stay flat and your reservation rate goes to 87.5%.
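Here is a rough sketch of that comparison in Python. The rates below are the us-east-1 on-demand prices at the time of writing ($0.192/hr for an m5.xlarge, $0.0506 per vCPU-hour and $0.0127 per GB-hour for Fargate); treat them as assumptions and check the AWS pricing pages for current numbers:

```python
# Assumed us-east-1 on-demand rates at the time of writing; verify before relying on them.
M5_XLARGE_HOURLY = 0.192      # $/hr per m5.xlarge (4 vCPUs, 16 GB RAM)
FARGATE_VCPU_HOURLY = 0.0506  # $ per vCPU-hour
FARGATE_GB_HOURLY = 0.0127    # $ per GB-hour

def ec2_cluster_hourly(instances, rate=M5_XLARGE_HOURLY):
    """EC2 cost is flat: you pay for the whole cluster regardless of reservation rate."""
    return instances * rate

def fargate_hourly(vcpus, ram_gb):
    """Fargate meters provisioned vCPU and RAM independently."""
    return vcpus * FARGATE_VCPU_HOURLY + ram_gb * FARGATE_GB_HOURLY

# 30 containers x (1 vCPU, 4 GB) on a 10-node m5.xlarge cluster (40 vCPUs, 160 GB)
print(f"EC2:     ${ec2_cluster_hourly(10):.3f}/hr")   # $1.920/hr, flat
print(f"Fargate: ${fargate_hourly(30, 120):.3f}/hr")  # $3.042/hr
# Five more containers: Fargate grows linearly, EC2 stays flat
print(f"Fargate: ${fargate_hourly(35, 140):.3f}/hr")  # $3.549/hr (+1/6)
```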

Remember that we’re talking about the average reservation rate: because container counts change over time, the dynamics of the reservation rate matter. This line of thought harkens back to the classic analysis of EC2 autoscaling vs. on-premise capacity. Because cluster auto-scaling can never perfectly match container auto-scaling, you will always be paying for some extra EC2 as you try to keep your cluster near, but just above, your provisioned container capacity. This pushes the average reservation rate down further in highly dynamic environments.
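As a rough illustration of why that headroom caps the reservation rate (the 25% headroom figure here is just an assumption for the example):

```python
# If cluster auto-scaling keeps capacity 25% above the reserved container
# capacity (an assumed figure), the reservation rate can never exceed:
headroom = 0.25
max_rate = 1 / (1 + headroom)
print(f"Best-case reservation rate with {headroom:.0%} headroom: {max_rate:.0%}")  # -> 80%
```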

The Data

Using us-east-1 pricing and ignoring ELBs & storage, this chart shows the percentage by which Fargate’s cost falls below or (usually) above the cost of the EC2 cluster for the m5.xlarge scenario described above, given various CPU & RAM reservation rates along the X & Y axes.


As you can see, at around a 40-50% reservation rate, Fargate’s cost is roughly even with EC2’s. At the high end of 90-100% reservation, Fargate costs roughly 100-120% more.
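You can reproduce that break-even point with the same assumed rates as above. At equal CPU and RAM reservation rates, Fargate’s cost is simply the reservation rate times the Fargate price of the whole cluster’s capacity:

```python
# Same assumed us-east-1 rates as above
cluster_fargate = 40 * 0.0506 + 160 * 0.0127  # $4.056/hr at 100% reservation
cluster_ec2 = 10 * 0.192                      # $1.920/hr, flat

for rate in (0.4, 0.5, 0.75, 1.0):
    premium = rate * cluster_fargate / cluster_ec2 - 1
    print(f"{rate:.0%} reserved -> Fargate is {premium:+.0%} vs. EC2")
# Break-even lands around 47% reservation; at 100% Fargate is ~+111%
```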

How do EC2 Reserved Instances change this? Here is how it looks if 100% of instances are reserved: