AWS App Runner: The AWS Service of Our Dreams?
Learn how AWS App Runner takes the container abstraction to a new level of simplicity for web applications.
App Runner is an intriguing one from our view. It’s based on widely adopted container standards, but slim and straightforward, and it’s squarely and narrowly focused on the most common use-case: web apps. Deployments and (optionally) builds are built right in, making it easy to get started the right way. Is App Runner the AWS service of our dreams? Maybe :) Read on...
Ultimately, the point of cloud abstraction is business impact: As compared to “the traditional approach” (whatever that means to you), how can you deliver more of your unique value, faster? With less effort, how can you create something that is more secure, more scalable, more highly available, and lower cost? Cloud-based virtual machines were one answer to this question (over a decade ago!). So were containers—a better abstraction for many use-cases that has helped deliver on those goals. Fargate has built on the container model by abstracting away more infrastructure, simplifying container deployments even further. Lambda and event-driven architectures have been game-changing as well if you can design or re-architect your application to use the beautiful Lego box of serverless services offered by AWS.
But the search for the perfect abstraction never ends. All of those approaches have their downsides, and there is always room for more simplicity and more focus on business value. In particular, we see three glaring issues with some of the best AWS abstractions, and App Runner, for the narrow web app use-case, might just tackle those problems head-on. First, let’s start with a quick primer on AWS App Runner.
You can read all the details of App Runner in the AWS documentation, but the gist is that App Runner takes the container abstraction to a new level of simplicity for the web app use-case. Instead of managing a container cluster and host (ECS/EKS/Fargate) or exposing a function primitive (Lambda) but requiring you to wire up all the other related services (API Gateway, etc.), App Runner uses the familiar container abstraction and hides everything else… no host or load balancer to configure, no container placement options, and health checking and auto-scaling by concurrent requests are built in. Really, you just focus on your app container, and when you deploy you get a running web app with a load balanced endpoint. Everything else is managed by AWS behind the scenes.
Now that you have a 10,000-foot view of App Runner… how does this address the problems of other AWS abstractions? Let’s dive deeper...
A service like Fargate sounds straightforward: skip the virtual machine and just deploy your container. But because AWS does AWS things and builds its services to be very powerful and flexible, the reality is that there are still many details under the hood to tackle. If you’re doing your AWS right, you’re using something like CloudFormation or Terraform to define your infrastructure as code. And when you see the actual configuration required to get something like Fargate running, the complexity that has to be handled can be striking. You might think to yourself: potentially hundreds of lines of YAML to deploy a single web app. Understand ECS clusters, services, and tasks. Figure out where the Fargate CapacityProvider fits in. What PlatformVersion do I need? What DeploymentStrategy?
Once you have all of that figured out, go tackle Application Load Balancers and learn about listeners vs target groups. And don’t forget your Service auto-scaling configuration and your VPC!
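To make that complexity concrete, here’s a heavily abbreviated sketch of just some of the CloudFormation resources a Fargate web service typically needs. This is illustrative only: names and images are placeholders, and most required properties (IAM roles, security groups, logging, health checks, listener rules) are omitted for brevity.

```yaml
# Abbreviated sketch only -- a real template needs many more properties
# and resources (IAM roles, security groups, log groups, listener rules, etc.)
Resources:
  Cluster:
    Type: AWS::ECS::Cluster

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "256"
      Memory: "512"
      ContainerDefinitions:
        - Name: web
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest  # placeholder
          PortMappings:
            - ContainerPort: 8080

  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      LaunchType: FARGATE
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 2
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets: [subnet-placeholder]   # plus SecurityGroups, AssignPublicIp...

  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
  # ...plus auto-scaling targets and policies, and the VPC itself
```

Even this stripped-down skeleton already spans four AWS service concepts before a single request is served.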
By narrowing down the use-case, AWS App Runner can greatly streamline this. When you use the source code option, your configuration file only needs to define a few things around the runtime, build commands, and deploy commands… things that will look very familiar to any DevOps engineer and most developers. A simple config file can be as small as 8 lines of YAML. Even a more complex use-case is only tens of lines… and critically, that extra complexity is relevant to your application. It’s not AWS-side machinery that makes the service more powerful without being relevant to your use-case.
And what do you get for that minimal configuration? Everything you should want in a standard web application! A load balanced endpoint, high availability and security, and auto-scaling on concurrent requests, all built in. Here’s a simple configuration example of a Python web application running with the gunicorn server.
```yaml
version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt
run:
  command: gunicorn app:app
  env:
    - name: STAGE
      value: "DEV"
```
Time and again, we think that one of the biggest mistakes AWS makes with new services is that the DevOps feels like an afterthought. Sure, you can click through the console and create a Lambda function or deploy an EKS service… but you shouldn’t create your solutions in the console! DevOps best practices, namely infrastructure as code and automated build & deployment, are critical to enabling your team to build and deploy their applications quickly, securely, and reliably.
But with most AWS services, you have to piece together the DevOps yourself: configuring your build and deployment services, managing cross-account access, writing scripts to glue it all together, etc. A CodePipeline to build and deploy your web app can easily be 500+ lines of CloudFormation YAML! This is often a significant barrier to true organizational and enterprise adoption of cloud native. Solving this does not differentiate your business, but you have to do it.
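For a sense of what “piecing together the DevOps yourself” looks like, here’s a heavily trimmed skeleton of a CodePipeline definition in CloudFormation. This is a sketch, not a working template: the IAM roles, artifact bucket, CodeBuild project, source connection, and every configuration block are omitted, and each of those is substantial on its own.

```yaml
Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn   # role and policies omitted
      ArtifactStore:
        Type: S3
        Location: my-artifact-bucket       # bucket resource omitted
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              ActionTypeId: { Category: Source, Owner: AWS, Provider: CodeStarSourceConnection, Version: "1" }
              OutputArtifacts: [{ Name: SourceOutput }]
              # Configuration (connection ARN, repo, branch) omitted
        - Name: Build
          Actions:
            - Name: Build
              ActionTypeId: { Category: Build, Owner: AWS, Provider: CodeBuild, Version: "1" }
              InputArtifacts: [{ Name: SourceOutput }]
              # CodeBuild project, buildspec, and ECR push logic omitted
        - Name: Deploy
          Actions:
            - Name: Deploy
              ActionTypeId: { Category: Deploy, Owner: AWS, Provider: ECS, Version: "1" }
              # Cluster, service, and image definitions file omitted
  # ...plus the CodeBuild project, IAM roles and policies, the source
  # connection, notification rules, and any cross-account plumbing
```

Every “omitted” comment above is real work you have to do before the pipeline runs, which is how these templates balloon into hundreds of lines.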
Finally, a service that starts with GitOps!
App Runner really represents a new model for AWS, one that we’re very happy to see. Put your configuration file in your code repo, hook up your App Runner service to your repo, and go. AWS App Runner handles the builds (if you’re using the source code option) and deployment. No additional services to configure, as best practices are included out of the box: configuration as code, commit-triggered deploys, and automated build and deploy. It’s all there.
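As a sketch of what that hookup looks like, here’s the kind of input you’d pass to `aws apprunner create-service` for the source code option (e.g. via `--cli-input-json`). Field names follow the CreateService API shape at launch; the service name, repository URL, and connection ARN are placeholders, and `"ConfigurationSource": "REPOSITORY"` tells App Runner to read the apprunner.yaml from your repo.

```json
{
  "ServiceName": "my-web-app",
  "SourceConfiguration": {
    "AuthenticationConfiguration": {
      "ConnectionArn": "arn:aws:apprunner:us-east-1:123456789012:connection/example"
    },
    "CodeRepository": {
      "RepositoryUrl": "https://github.com/example/my-web-app",
      "SourceCodeVersion": { "Type": "BRANCH", "Value": "main" },
      "CodeConfiguration": { "ConfigurationSource": "REPOSITORY" }
    },
    "AutoDeploymentsEnabled": true
  }
}
```

With `AutoDeploymentsEnabled` set, pushes to the tracked branch trigger a build and deploy automatically, which is the GitOps loop described above.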
Related to the first two problems, many new AWS abstractions bring with them a substantial learning curve. It’s important that your team really wraps their head around the services that are involved and gets hands-on and comfortable with how to use them correctly and securely. This takes time… time that could otherwise be spent building differentiated business value!
AWS App Runner is one of the lowest learning curve services that we’ve seen. It uses familiar container concepts, build & deployment commands, and a simple configuration file format. We predict that most teams will be up and running with this service much faster than with the typical AWS service offering. This is great for beginners: an entry point into AWS that is more DevOps-centric than Lightsail and simpler than Elastic Beanstalk or other options. But the advanced configuration capabilities and bring-your-own-container option make it suitable for advanced use-cases (within the narrow web app use-case) as well.
Again, it’s great that AWS has prioritized simplicity and low barrier to entry for a narrow use-case without sacrificing scalability, availability, security, or usage-based pricing.
App Runner is just launching in GA, so like most new AWS services it is bound to have rough edges. It also may have some initial limitations, so you’ll want to look closely before using it right away on a production system. But we’ll be watching it closely and seeing how it comes together in the next few months. If you have a straightforward web app use-case, AWS App Runner has the potential to be the default approach in the future.
Bigger picture, it’s great to see AWS producing new services with simplicity as a focus and with the DevOps built in. We hope this is just the beginning of a new trend in AWS service releases that will help builders like Trek10 get building faster!