Serverless in the Brownfield - Think FaaS Podcast

The reality is that most of the world's problems aren't brand new. They're legacy.
Forrest Brazeal | Apr 18 2019




Hi, I’m Forrest Brazeal at Trek10, and you’re listening to “Think FaaS”, where we learn about the world of serverless computing in less time than it takes to run a Lambda function. So put fifteen minutes on the clock - it’s time to Think FaaS.

Quick housekeeping item as we get started today. ServerlessConf is now open for early bird registrations - you can sign up for that conference now. This is, in my opinion, the premier conference for people that are actually building and doing stuff in the serverless space. It’s in New York City this year, October 7th through the 9th, put on by our friends at A Cloud Guru. This will be the third time I’ve attended, I’m actually co-chairing this year with Linda Nichols, and we’ve seen a lot of great talk submissions come in already, but if you have a serverless story to tell, there’s still time for you to submit to the CFP. I hope you’ll come, learn, speak, whatever, it’ll be a great time. And hopefully, just like last year, we’ll do Think FaaS live on the main stage with some more great lightning talks from you, the community. We’ll see you there.

Serverless in the Brownfield

OK, on to our main topic today. I want to talk about serverless in the brownfield. That’s as opposed to these beautiful greenfield serverless projects that you can find so many Hello World tutorials and even real-world success stories about. I’m not diminishing the impact of those projects. Creating a serverless app from scratch is an awesome experience and I highly recommend it. We do plenty of that at Trek10.

But the reality is that most of the world’s problems aren’t brand new. They’re legacy, they’re incremental improvements to existing systems, they’re heavily constrained by time and resources and existing choices that have you locked in on some older technology. And the larger the enterprise, the bigger the challenge. Can serverless actually provide value in that kind of situation?

A few weeks ago I had the opportunity to speak at the Leading Edge Forum Study Tour out in Palo Alto, which is a pretty nifty event put on by Simon Wardley and his team of Wardley Mapping fame. They come over from the UK and they get a bunch of technology leaders together, CIOs and CTOs in industry and government, and they try to get a handle on the state of technology by talking to AWS and Microsoft and various other people on the cutting edge. At this particular event they were focused a lot on serverless, hence why I was there, and right away it became clear that we were not going to be having that sunshine and rainbows “Hello World” conversation.

These are Fortune 500 CIOs who are responsible for massive infrastructure, some dating back thirty or more years, that generates big-time revenue. They don’t need to be convinced that these systems need an architectural overhaul in many cases. They get the TCO and time to market arguments for serverless that we like to make on this show. That makes sense to them. But the path from where they are now, to that idealized serverless future, is so fraught with challenges that it kind of boggles the mind.

The Challenges of Legacy Systems

Let’s break down some of the challenges, and we’ll do this by inventing a legacy system for the sake of example. This is not a real use case, but it is based on a number of real interactions that I’ve had with enterprise folks over the last few months.

Let’s say you are a large manufacturing company that builds products for both the public and private sector. You’re doing defense contracts, you’ve got consumer-facing stuff, you name it. A lot of these applications and systems are twenty to thirty years old. They’re running on mainframes or in legacy datacenters, or they’re running on premises at customer sites. Here are some of the problems that you have:

New feature development has slowed to a crawl, or outright stopped in some cases, because the systems are poorly documented and running on languages and hardware that are not well-understood. The motivating impulse is to “keep the lights on”, not to rock the boat by trying to refactor.

Because of this, you’ve got significant cultural challenges. This company has a lot of trouble attracting and retaining high-achieving technical talent, particularly people earlier in their careers, because the idea of contending with these legacy systems, and all the challenging political dynamics that go along with them, is simply not appealing. So not only do you have systems that are gradually losing their utility, you have a decreasing ability to hire the kind of people who can execute meaningful change. Plus, the longer you sit there, as your team dwindles by attrition, the more overwhelming any kind of migration or significant refactor starts to look. You wouldn’t even know where to start.

Here’s the reality though, you have to evolve to survive. I was talking to a friend recently who works for a finance company that’s still running on mainframes. Their business rules are built around specific bugs in the floating-point implementations on those systems. Nobody has any idea what would happen if they migrated, or if their numbers would even add up anymore. Talk about lock-in! And smart leadership gets that. Unless your business model is to be a graveyard for enterprise apps, you need some kind of game plan to keep your tech stack in fighting shape.

Serverless Migration Strategies

And the good news is that you can turn this around. It’s not easy, but it does happen. The serverless mindset, as Ben Kehoe has eloquently said, is a ladder. You want to move progressively towards owning and operating less of this legacy stuff. But that doesn’t mean bam, overnight everything is functions. There are some tactics you can take to start teasing these old monoliths apart.

Strangling Legacy Systems With Serverless

Serverless systems are event-driven by nature. Things happen in the world, and they trigger some piece of compute or service integration. So if you can figure out how to get events out of these legacy systems, you can start to make progress. There are kind of two directions this can go. Number one, you can write some job that ships changes out of a legacy database and sends them to the cloud. Depending on the volume and your ordering requirements, you could post these events to a streaming service like Kinesis or Kafka, or you could go SNS to SQS like the Event Fork Pipelines pattern that AWS has recently introduced. I’m personally a fan of CloudWatch Events here, and I’ll have a whole post coming soon about that.
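To make that first direction concrete, here’s a minimal sketch of what a change-shipping job might look like in Python. Everything here is illustrative: the `order_id` key field and stream name are made-up examples, and the Kinesis client is passed in as a parameter (in practice it would be a boto3 `kinesis` client), so the change-capture logic stays testable on its own.

```python
import json

def rows_to_kinesis_records(rows, key_field="order_id"):
    """Convert changed rows from the legacy database into Kinesis records.
    Partitioning on the business key keeps events for the same entity in order."""
    return [
        {
            "PartitionKey": str(row[key_field]),
            "Data": json.dumps(row, default=str),
        }
        for row in rows
    ]

def ship_changes(kinesis_client, stream_name, rows):
    """Send a batch of change events to a Kinesis stream.
    kinesis_client would be a boto3 Kinesis client; a stub works for testing."""
    records = rows_to_kinesis_records(rows)
    if records:
        kinesis_client.put_records(StreamName=stream_name, Records=records)
    return len(records)
```

The same shape works if you swap the destination for SNS or CloudWatch Events; only the client call changes, not the change-capture step.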

Once you’ve got those events streaming into the cloud, now you can start to do interesting things with them. You can ship them off into a data warehouse for reporting. You can aggregate them into DynamoDB or Elasticsearch and build read-only apps in front of them. The key, though, is that your initial source of truth is still that legacy database. You’re probably not going to be writing a lot of data back to it from these cloud systems, or you’ll get yourself all tangled up. But what you’ve done is reduced the load on that system, gotten more comfortable with cloud workloads, and maybe now you’ve opened up a path to think about moving other parts of the system as well.
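As a sketch of that aggregation step, here’s one way you might fold a stream of order events into per-customer summary items ready to write to a DynamoDB read model. The event shape (`customer_id`, `amount`) is a hypothetical example, and the actual write would happen separately, for instance with a boto3 table’s `batch_writer`.

```python
def apply_event(summary, event):
    """Fold one order event into a per-customer summary item."""
    cust = event["customer_id"]
    item = summary.setdefault(
        cust, {"customer_id": cust, "order_count": 0, "total_spend": 0}
    )
    item["order_count"] += 1
    item["total_spend"] += event["amount"]
    return summary

def build_read_model(events):
    """Aggregate a batch of events into DynamoDB-ready items."""
    summary = {}
    for event in events:
        apply_event(summary, event)
    return list(summary.values())
```

Because the source of truth stays in the legacy database, this read model is disposable: if the aggregation logic changes, you can replay the events and rebuild it.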

I said there were two patterns, and you can also flip this strategy around the other way. You can sort of strangle your legacy database by placing a new API in front of it. This could be AWS AppSync or API Gateway. You build an API that writes to a new canonical datastore like DynamoDB, and then you ship events out of that table using DynamoDB Streams, consume them into a Lambda function that syncs up your legacy database. That’s a good pattern if you eventually plan to cut over your entire user base to the serverless app.
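A sketch of that sync function might look like the following. DynamoDB Streams deliver typed attribute images (`{"S": ...}`, `{"N": ...}`), so the handler first flattens the new image into a plain row, then hands it to whatever writes the upsert into the legacy database. The `write_row` hook is a stand-in for that legacy write, which would be specific to your system.

```python
def deserialize_image(image):
    """Flatten a DynamoDB Streams attribute image into a plain dict."""
    out = {}
    for key, typed in image.items():
        (attr_type, raw), = typed.items()
        if attr_type == "N":
            out[key] = float(raw) if "." in raw else int(raw)
        else:  # "S", "BOOL", etc. pass through as-is for this sketch
            out[key] = raw
    return out

def handler(event, context, write_row=None):
    """Lambda handler: replay inserts/updates from the stream into the legacy DB."""
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            row = deserialize_image(record["dynamodb"]["NewImage"])
            if write_row:
                write_row(row)  # upsert into the legacy database here
    return {"processed": len(event["Records"])}
```

Note the one-way flow: the serverless app owns the canonical table, and the legacy database becomes a downstream replica until you’re ready to retire it.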

Detangling The Database

You can see that a lot of these ideas center on the database, and I do find that that’s often a good place to start when you’re thinking about taking advantage of more cloud-native services. So many of these legacy systems are built on a huge, monolithic database, an Oracle or SQL Server that the devs just keep throwing everything into because it’s convenient and they know how to access it. This is how you end up with nasty stuff in your database like a bunch of PDF files, or a hundred gigs of application logs. There’s no reason any of that should live in a relational database, and it turns out it’s often pretty easy to move it out to a cloud-native data store like DynamoDB or S3 that scales much better and takes load off the system.
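A one-off migration of those blobs can be surprisingly small. Here’s a sketch, assuming a table of PDF documents keyed by ID: each blob is written to S3 and replaced in the database by a pointer. The bucket and key scheme are invented for the example, and the S3 client is a parameter (a boto3 `s3` client in practice).

```python
def blob_key(prefix, doc_id):
    """Build a predictable S3 key for an offloaded document."""
    return f"{prefix}/{doc_id}.pdf"

def offload_blobs(s3_client, bucket, rows):
    """Move (doc_id, blob) pairs out of the relational database into S3.
    Returns a mapping of doc_id -> S3 URI to store back as a pointer column."""
    pointers = {}
    for doc_id, data in rows:
        key = blob_key("invoices", doc_id)
        s3_client.put_object(Bucket=bucket, Key=key, Body=data)
        pointers[doc_id] = f"s3://{bucket}/{key}"
    return pointers
```

After the backfill, the application writes new documents straight to S3 and the relational database shrinks back down to the structured data it’s actually good at.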

Choosing The Right POC

It’s important to remember, too, that your team is not going to be full of serverless experts at first. Remember, we said there are some challenging cultural dynamics at play here, maybe even more difficult than the technical ones. So you have to balance risk and reward with your first forays into serverless. Maybe the critical core of your system shouldn’t be uprooted and placed onto Lambda without the operational understanding needed to make that successful. But you don’t want your POC to be irrelevant, either.

So you start looking for these kind of quick wins that add value while building confidence on your team. Is your database full of stored procedures? Start moving those to Lambda functions. Obviously, background jobs, crons, these things that are spiky in load, put those on functions if you can. If you’ve got homegrown support systems that could be replaced with a managed platform service like AWS Glue, see what that migration would look like. Those are the easier things. The big, gnarly, core application, that may take longer. And it may not look very “serverless” at first.
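Those cron-style jobs are often the easiest starting point because the Lambda handler can stay tiny. Here’s a sketch of a nightly cleanup job, the kind of thing that might have lived in a stored procedure, wired up to a CloudWatch Events (EventBridge) schedule. The 90-day retention window and the commented-out delete call are placeholders for your own rules.

```python
from datetime import datetime, timedelta, timezone

def retention_cutoff(now=None, days=90):
    """Compute the timestamp before which records are considered stale."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=days)

def handler(event, context):
    """Lambda handler invoked by a scheduled CloudWatch Events rule."""
    cutoff = retention_cutoff()
    # delete_stale_rows(cutoff)  # hypothetical call into the legacy database
    return {"cutoff": cutoff.isoformat()}
```

Keeping the date math in its own function means you can unit-test the retention logic without touching the database, which is exactly the kind of confidence-building win these early migrations should deliver.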

Add Value, Not Technology

As part of the new Developer Acceleration program that we’re rolling out here at Trek10, I recently spent a week onsite with a large, established enterprise. We spent two days evaluating the benefits of serverless in the context of their organization, and we also spent two days diving into Kubernetes. Yes, I said the k-word. For one of their big apps, we eventually concluded that it made sense for their backend API to go to Lambda and API Gateway, and their Angular JS web app to run on AWS EKS for now - that’s AWS’s managed Kubernetes service. That moves them up the serverless ladder, gets them off their on-premises IIS servers, and sets their team up to make bigger changes in future.

With serverless in the brownfield, then, it’s more about the direction you’re headed than the exact place you are. Keep moving forward. It’s definitely less pretty than the greenfield stuff somebody else might be working on, but if you keep turning the soil over you’ll find there’s a lot of value to be harvested.

And with that very labored analogy, we’re at the end of our time for today. We’ve barely scratched the surface of this topic, obviously. I hope to have some more content coming out soon about legacy migrations to serverless, and in the meantime, if you want to dive deeper you can always contact Trek10 on Twitter @Trek10inc. I’m there as well @forrestbrazeal, and we’ll see you on the next episode of Think FaaS.
