Monitoring, Ops & DevOps

DynamoDB Auto Scaling - The Serverless Way

Andy Warzon | May 02 2016

In June 2017, Trek10 got spacklepunched by AWS when DynamoDB launched Auto Scaling natively. See here. This wasn't the first time and it won't be the last, which is why we love working in such a fast-paced ecosystem.

The original post is below, but we highly recommend you ignore it and use native autoscaling!



So you’re building your serverless app on AWS for unlimited scalability and true pay-per-use. Your web server is Amazon API Gateway - unlimited scale with zero scaling effort. Your app server is AWS Lambda - ditto, unlimited scale with zero scaling effort. And your database is Amazon DynamoDB - unlimited scale… but not quite zero scaling effort.

With DynamoDB you have to set provisioned read and write capacity for every table and global secondary index. Set something too low, and your app will crash just like any old server-based app. Set something too high and leave it on 24/7, and your bill might be pretty hefty. So how do we make DynamoDB maintenance-free, perfectly scalable, and with costs that track usage, just like the other components of our system?

Auto Scaling DynamoDB the Serverless Way

The principle is pretty simple… since provisioned read & write throughput can be set via the AWS API, a little bit of code should be able to automate the decision about scaling your capacity up and down in response to your consumed read & write throughput. This idea has actually been around for a while. But that older solution used a server — eww! We obviously need a serverless solution here to keep our overall maintenance footprint low and availability high.
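Setting provisioned throughput via the API is a one-call operation. Here is a minimal sketch, assuming boto3 in a Lambda runtime; the table name, step size, and bounds are illustrative, not part of the original design:

```python
# Sketch: step a DynamoDB table's provisioned throughput via the AWS API.

def next_capacity(current, step, minimum, maximum):
    """Step capacity up (or down, with a negative step), clamped to [minimum, maximum]."""
    return max(minimum, min(maximum, current + step))

def set_throughput(table_name, read_units, write_units):
    import boto3  # imported lazily so next_capacity stays testable offline
    boto3.client("dynamodb").update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

# e.g. step reads up by 50 units, never exceeding 1000:
# set_throughput("orders", next_capacity(200, 50, 10, 1000), 100)
```

Clamping to a configured minimum and maximum is what keeps an alarm loop from running away in either direction.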

There are two other small issues to tackle:

  • While you can scale your capacity up any time you want, you can only scale your capacity down four times per day per table. This means we should scale up immediately in response to high usage, but we should only check for scale-down four times a day. This is reasonable because, as with all AWS Auto Scaling, you never want to scale down too aggressively. The mantra is scale up fast, scale down slow.
  • Ultimately you want to scale up and down based on a percentage: {consumed read or write capacity} / {provisioned read or write capacity} … however this metric is not available in CloudWatch. Thus we need to store our percentage thresholds somewhere else (a Dynamo table, naturally!) by table and index. This is also a good place to store our other scaling parameters: minimum & maximum throughput and throughput units to step up or down when scaling. Then a Lambda function will: 1) read these configurations and set the actual CloudWatch alarm values for read or write capacity and 2) dynamically update these values when the provisioned throughput is changed: {throughput scale-up alarm value} = {provisioned throughput} * {percent threshold for scale-up}, and so on.
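The alarm-value arithmetic above is simple enough to sketch directly; the percent thresholds and provisioned values here are illustrative stand-ins for what would live in the config table:

```python
# Sketch of the alarm-threshold math: since CloudWatch has no
# consumed/provisioned percentage metric, the Lambda computes absolute
# alarm values from config-table percentages.

def alarm_threshold(provisioned_units, percent_threshold):
    """{alarm value} = {provisioned throughput} * {percent threshold}."""
    return provisioned_units * percent_threshold

# e.g. with 200 provisioned read units, alarm for scale-up at 80% consumed:
up = alarm_threshold(200, 0.80)    # 160.0
# ...and consider scale-down when consumption stays under 30%:
down = alarm_threshold(200, 0.30)  # 60.0
```

Whenever the provisioned throughput changes, these thresholds go stale, which is exactly why the Lambda has to recompute and rewrite the alarms as part of every scaling action.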

So with that in mind, here is the basic layout of a serverless DynamoDB Auto Scaling solution:

As you can see, scale-up activity is triggered by a CloudWatch Alarm -> SNS Topic -> Lambda function, which updates DynamoDB’s provisioned throughput. Scale-downs are checked every 6 hours (because of the four-times-per-day limit), but the update process is otherwise the same.
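The scale-up Lambda in that chain could look roughly like this. It assumes the function is subscribed to the alarm's SNS topic; the alarm naming convention (`<table>-read-high` / `<table>-write-high`) and the hard-coded step and ceiling are assumptions for illustration — the post keeps these in a config table:

```python
# Sketch of the SNS-triggered scale-up Lambda.
import json

STEP_UNITS = 50    # assumed step size (config-table value in the real design)
MAX_UNITS = 1000   # assumed ceiling (likewise)

def table_from_alarm(alarm_name):
    """Assumes alarms are named '<table>-read-high' or '<table>-write-high'."""
    table, dimension, _ = alarm_name.rsplit("-", 2)
    return table, dimension

def handler(event, context):
    import boto3  # lazy import keeps the parsing helper testable offline
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    table, dimension = table_from_alarm(message["AlarmName"])

    ddb = boto3.client("dynamodb")
    current = ddb.describe_table(TableName=table)["Table"]["ProvisionedThroughput"]
    read, write = current["ReadCapacityUnits"], current["WriteCapacityUnits"]
    if dimension == "read":
        read = min(MAX_UNITS, read + STEP_UNITS)
    else:
        write = min(MAX_UNITS, write + STEP_UNITS)

    ddb.update_table(
        TableName=table,
        ProvisionedThroughput={"ReadCapacityUnits": read, "WriteCapacityUnits": write},
    )
```

The scale-down path would be the same handler shape, just triggered on a 6-hour schedule instead of an alarm and stepping capacity down toward the configured minimum.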

One missing piece in these diagrams is the system for managing this: The config tables mentioned earlier and a “configurator” Lambda function that would check the settings in the config table and update CloudWatch alarms appropriately.
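A minimal sketch of that configurator pass follows. The config-table schema (`tableName`, `scaleUpPercent`, `snsTopicArn`), the alarm naming, and the 5-minute period are assumptions; note that CloudWatch reports consumed capacity as a Sum over the alarm period, so the per-second threshold is scaled by the period length:

```python
# Sketch: scan the config table and (re)set each table's scale-up alarm
# from its current provisioned throughput.

def sum_threshold(provisioned_per_sec, percent, period_seconds):
    """Consumed-capacity metrics are Sums per period, so scale by the period."""
    return provisioned_per_sec * percent * period_seconds

def handler(event, context):
    import boto3
    ddb = boto3.client("dynamodb")
    cw = boto3.client("cloudwatch")
    for cfg in ddb.scan(TableName="autoscaling-config")["Items"]:  # assumed schema
        table = cfg["tableName"]["S"]
        pct_up = float(cfg["scaleUpPercent"]["N"])
        provisioned = ddb.describe_table(TableName=table)["Table"]["ProvisionedThroughput"]
        cw.put_metric_alarm(
            AlarmName=f"{table}-read-high",  # assumed naming convention
            Namespace="AWS/DynamoDB",
            MetricName="ConsumedReadCapacityUnits",
            Dimensions=[{"Name": "TableName", "Value": table}],
            Statistic="Sum",
            Period=300,
            EvaluationPeriods=1,
            Threshold=sum_threshold(provisioned["ReadCapacityUnits"], pct_up, 300),
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=[cfg["snsTopicArn"]["S"]],
        )
```

A real implementation would repeat this for write capacity, for each global secondary index, and for the low-water scale-down alarms.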

So there’s a lot here, but once built, all of it can be encapsulated in a single CloudFormation template and a single config table. At the end of the day, from the operator’s perspective, it’s fairly simple and incredibly powerful: truly unlimited scale with almost no day-to-day effort.

Do you have any ideas about how to Auto Scale DynamoDB? We’d love to hear about them on Twitter, LinkedIn, or email us directly.

Andy Warzon

Founder & CTO, Andy has been building on AWS for over a decade and is an AWS Certified Solutions Architect - Professional.