‘We Can't Contain Ourselves’ - Think FaaS Podcast
Howdy, I’m Jared Short at Trek10, and this is ‘Think FaaS’, where we learn about the world of serverless computing in less time than it takes to run an AWS Lambda function. So put five minutes on the clock - it’s time to ‘Think FaaS’.
Now, I’ve dealt with these questions and implementation considerations a lot. When evaluating projects and deciding on infrastructure, there are a few guiding thoughts I like to consider when deciding whether serverless, containers, or some other technology is the right fit.
Before we get to those principles, let’s be clear that nearly all of the FaaS platforms are themselves abstractions of containers.
Your code is shipped into managed container runtimes, and requests or events are shuttled to those containers. Here’s the great part: with FaaS, you don’t care! You get to ship features, not OS patches. The underlying infrastructure is containers and runtimes managed entirely by the FaaS provider, whether that’s Amazon, Microsoft, or your own internal team on a self-run FaaS.
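To make that concrete, here’s a minimal sketch of the kind of code you actually ship to a FaaS platform. It follows the AWS Lambda Python handler shape; the event payload and its fields are made up for illustration.

```python
# A minimal Lambda-style handler: this function is the only thing you ship.
# The provider owns the container runtime, the OS patches, and the scaling.
def handler(event, context):
    # 'event' carries the request or event payload shuttled into the
    # managed container; 'context' exposes runtime metadata (unused here).
    name = event.get("name", "world")  # hypothetical field for illustration
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Everything below that function signature, from the runtime up, is the provider’s problem, which is exactly the point.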
Okay, so what about these container things?
Now, I must admit, I still interact with containers pretty frequently. They absolutely have their place, and they have improved a lot, even in the cloud-native world. When it comes to shipping a dev environment to your developers, or using them as consistent and ephemeral pipeline builders, it’s a no-brainer to go with containers.
Considering further, I would say there are a couple more natural fits for containers that I wouldn’t immediately try to force-fit into serverless. The first is firing off long-running tasks and batch jobs with heavy compute or memory requirements (particularly those that are not embarrassingly parallel; embarrassingly parallel workloads are in fact a good fit for FaaS). The second is addressing existing and legacy apps that don’t make sense to rebuild right away: use containers to bundle them up and run them on Kubernetes, ECS, or some other container management platform. This can serve as a great gateway drug to cloud-native operations, and a stepping stone to proper serverless if you are still trying to convince your team or company to leave a data center.
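To illustrate why embarrassingly parallel work maps so well onto FaaS: each chunk of work is independent, so each one could be a separate function invocation. The sketch below fakes the fan-out locally with a thread pool as a stand-in for concurrent function invocations; the chunking scheme and `process_chunk` work are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for one function invocation: each chunk is processed with
    # no shared state, which is what makes the workload FaaS-friendly.
    return sum(x * x for x in chunk)

def fan_out(data, chunk_size=3):
    # Split the work into independent chunks, process them concurrently,
    # and combine the results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(process_chunk, chunks))
```

On a real platform, each `process_chunk` call would become its own invocation, triggered by a queue or a direct invoke, and the platform handles the scaling.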
I will happily give containers the credit they deserve, but they aren’t the best abstraction available to us now.
When it comes to containers versus serverless, I have stated in several interviews that I think containers are the wrong abstraction for most application developers. Your developers should care about their application: its code, its direct dependencies, and how it interacts with the platform services you leverage. Effectively, we want developers focused as directly as possible on business value, with as little distraction as possible.
Furthermore, and we talked about this in last week’s episode, in the serverless world developers and operations have a chance to focus on things they might not have previously, such as more granular performance metrics and optimizations. A final note in this space: the serverless approach makes it much easier to reach equilibrium between compute consumption and availability, making sure you are paying only for what you need. With containers, you still run into the problem of paying for idle resources.
Okay great, FaaS is great, but it’s a fad, right?
So this one is interesting, and I think there’s actually an important distinction to make here between FaaS and serverless. Functions as a Service specifically references a construct and a contract: you hand a FaaS provider your code, and they guarantee they will run it when you need it, at the scale you need it.
Serverless is more of an idea: wanting to get away from undifferentiated heavy lifting, aiming for higher-value tasks, and leveraging managed platforms and services wherever possible. This is so important to communicate that many of us in the space have started calling it “servicefull”. I don’t think we’ve seen the final iteration of FaaS, nor unlocked the full abstraction it could provide; there will be a lot of innovation in this space in the next five years. I do, however, think that “servicefull” is here to stay. We are going to see many service providers offering native event-driven abstractions, designs, and streams. Think “cloud events” for everything.
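As a sketch of what “cloud events for everything” might look like, here’s a minimal event envelope in the style of the CloudEvents specification, a common shape any service could emit and any function could consume. The event type, source, and payload here are hypothetical.

```python
# A minimal CloudEvents-style envelope. The required attributes
# (specversion, type, source, id) come from the CloudEvents spec;
# the concrete values and the data payload are made up.
order_created = {
    "specversion": "1.0",
    "type": "com.example.order.created",  # hypothetical event type
    "source": "/orders",                  # hypothetical source URI
    "id": "a1b2c3",
    "data": {"orderId": 42},
}

def route(event):
    # A consumer only needs the envelope to decide what to do,
    # regardless of which service produced the event.
    return event["type"].rsplit(".", 1)[-1]
```

A uniform envelope like this is what lets heterogeneous services wire into the same event-driven fabric.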
Now, on a final note, I do want to address DIY Containers (that run functions) as a Service.
My best response is an anecdote, but a powerful one. In the recent spectacular meltdown that was Spectre and, err… Meltdown, Trek10 had clients, even at the enterprise level, with heavily trafficked serverless apps that were patched with zero downtime, zero failovers, and zero effort. All container runtimes and underlying infrastructure patches and updates were handled by the platform. It was announced that all AWS Lambda infrastructure and functions were patched within 24 hours of the public release of the vulnerabilities.
We got to leverage one of the best security and ops teams in the world to handle it while we slept. Who knows how many cloud instances, let alone private data centers, haven’t even gotten the full scope of the issue handled yet. Certainly there are valid cases where something like OpenWhisk (an open source FaaS platform you can run on Kubernetes) makes sense, but consider the trade-offs diligently.
Time to package up and deliver this episode. So remember: containers are a great bit of tech, but think higher up your value chain and consider serverless and servicefull first. You can follow Trek10 on Twitter @trek10inc, or myself @shortjared. See ya next week!