Building the Architecture for Pokémon Go in AWS
If you live anywhere near a park, you may have suddenly started to see people standing around like this in the past few weeks.
It isn’t as much of an issue now, but when I took the picture, they were all waiting for the game to load. Or some of them might have been waiting for the PokéStop to work!
Really though, Pokémon Go has accomplished what almost everyone who has ever played Pokémon has wanted to do: go out into the real world and catch Pokémon.
As you may have noticed, Pokémon Go has had quite a few performance issues due to its insane popularity. It’s so popular right now, even Werner Vogels, the CTO of Amazon, is playing the game.
Let’s think about how we can use AWS to make an architecture that would be able to scale easily. AWS Lambda could be a great help for this type of architecture. If you haven’t heard of it yet, Lambda is a serverless computing platform. That means that you, as the developer, don’t have to worry about the servers in any way except for:
- How long you want the function to run before the call times out
- How much memory you want to allocate to the function
You pay for how long the call runs (in 100 millisecond increments), the amount of memory allocated, and the number of requests made. What’s great is that if no one is hitting Lambda, you aren’t paying for it!
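To see why this pricing model is so attractive, here's a back-of-the-envelope estimate in Python. The price constants are Lambda's published rates at the time of writing and ignore the free tier, so treat the result as illustrative rather than a quote:

```python
import math

# Rough Lambda cost estimate. The default prices ($0.00001667 per
# GB-second, $0.20 per million requests) are Lambda's published rates
# at the time of writing and may change; the free tier is ignored.
def lambda_monthly_cost(requests, avg_ms, memory_mb,
                        price_per_gb_s=0.00001667,
                        price_per_million_req=0.20):
    # Each invocation is billed in 100 ms increments, rounded up.
    billed_seconds = math.ceil(avg_ms / 100.0) * 100 / 1000.0
    gb_seconds = requests * billed_seconds * (memory_mb / 1024.0)
    return (gb_seconds * price_per_gb_s
            + (requests / 1_000_000) * price_per_million_req)

# One million 200 ms calls at 128 MB comes to well under a dollar:
print(round(lambda_monthly_cost(1_000_000, 200, 128), 2))  # → 0.62
```

And if no one plays that month, `requests` is zero and so is the bill.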
Lambda is built as a “platform” for the serverless compute paradigm rather than a full service for building applications. There are frameworks to help, with more being created every day. We have been using Serverless as our go-to framework for Lambda projects.
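As an illustration of that workflow, a minimal Serverless service definition might look like the sketch below. The service name, handler, and endpoint are invented for this example, and the exact keys depend on your framework version:

```yaml
# Hypothetical serverless.yml -- names and paths are made up for illustration.
service: pokemon-like-game

provider:
  name: aws
  runtime: python2.7
  memorySize: 128   # the two knobs Lambda actually asks you for:
  timeout: 10       # memory allocation and call timeout

functions:
  catchPokemon:
    handler: handler.catch_pokemon
    events:
      - http:
          path: catch
          method: post
```

Deploying this wires up the API Gateway endpoint and the Lambda function in one step, which is most of the infrastructure described below.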
How can Lambda help with a game like Pokémon Go? Well, because of the nature of Lambda, you don’t need to worry about whether your instance size is large enough or about random spikes of users (assuming any APIs your Lambda functions call can handle it). Let’s lay out a plan for a game like Pokémon Go.
Above is a Cloudcraft diagram showing what the basic infrastructure could look like.
You would set up endpoints on API Gateway and have them route to different Lambda functions based on the request being made. Those requests would hit DynamoDB to get and set state for the player or the world around the player. You could use geospatial indexing in DynamoDB to find collectible objects in the real world. The requests could then be secured by either Auth0 or Amazon Cognito. If there is any static content that you don’t want stored in the app (news about the game and company come to mind), you can have it delivered with CloudFront and S3.
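DynamoDB has no built-in geospatial query type, so a common trick is to store a geohash of each object's location and query on its prefix, since nearby objects share a prefix. Below is a minimal sketch of the standard geohash encoding; the precision and how you would split the hash into DynamoDB hash/range keys are design choices for this example, not anything confirmed about the real game:

```python
# Minimal geohash encoder (standard algorithm). A short prefix of the
# hash could serve as a DynamoDB hash key, with the full hash as the
# range key -- nearby locations then sort together under one partition.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=9):
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    bits = []
    even = True  # geohash interleaves bits, starting with longitude
    while len(bits) < precision * 5:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    # Pack each group of 5 bits into one base32 character.
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

print(geohash_encode(57.64911, 10.40744, 11))  # → u4pruydqqvj
```

To find collectibles near a player, a Lambda function would query DynamoDB for items whose geohash begins with the player's prefix, then do a precise distance filter on the handful of results.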
The best part about this infrastructure is that Lambda automatically scales up as more users are hitting it. You will need to take measures to scale DynamoDB, and you will need to be aware of any other dependencies on APIs or infrastructure that can’t scale per request the way Lambda does. We have built architectures supporting hundreds to thousands of requests per second without worrying about the “will it scale?” or “what about spiky loads?” questions. It’s truly freeing!
Based on what I, as a player, have seen of Pokémon Go, it is almost certainly built on the same codebase and stack that Ingress (another Niantic game) is built on. That stack predates Lambda by a few years, but new games coming out could consider this framework. It will keep costs down as you are only paying when people are playing and your servers (or lack thereof) will be able to handle any spikes when you have the next Pokémon Go on your hands.