Reasons Not To Serverless - Think FaaS Podcast

Forrest explains several edge cases where serverless computing doesn't (yet) make sense, and introduces the principle of 'least compute possible'.
Forrest Brazeal | Jun 21 2018



Transcript

Hello again, I’m Forrest Brazeal at Trek10, and this is ‘Think FaaS’, where we learn about the world of serverless computing in less time than it takes to run a Lambda function. So put five minutes on the clock - it’s time to ‘Think FaaS’.

So in a recent episode, I touched on some serverless fallacies, basically bad excuses for avoiding serverless. I want to turn the tables today and talk about some use cases where serverless doesn’t make sense, at least not yet. These cases certainly do exist and we need to be clear about them.

Now, for the purposes of this discussion I’ll be defining serverless kind of the way Mike Roberts does, which is to say either functions as a service, like AWS Lambda, or managed backends as a service like the gaggle of services that plug into AWS AppSync. So a managed container service like EKS, or a PaaS like Heroku where you have to provision the underlying instance sizes, would obviously not be serverless. With that said, let’s look at reasons not to serverless at this point in time.

First, if you have a tight timeline to deliver a product and a lot of expertise in a different technology.

The education barrier for serverless is not a joke. Yes, we make a big deal about serverless technologies being simple, but that does not mean they are easy. It takes legitimate time and effort to design a serverless architecture that is maintainable and performant. Just as importantly, it takes time to recalibrate your development workflows, to get both developers and your ops team comfortable with the radically different approach that serverless requires. At the organizational level, you have to be willing and able to take a step back now in order to sprint forward later.

If you are under the gun to ship something, and you have a team and a workflow that is experienced with a different technology, now may not be the time to dive headfirst into serverless. That said, there’s no long term benefit to sticking your head in the sand here. Make sure you understand when you can afford to wait on serverless and when you are losing out.

Second, your execution time and space requirements may legitimately rule out serverless.

I remember talking to someone a while back who had a workflow that had to download a huge number of files from S3, do some processing on each file, zip them up, and upload the zip to S3 again. The files didn’t fit in Lambda’s disk space, the time to upload and download the zip was longer than five minutes, and all in all this was simply a job that didn’t parallelize well and didn’t map onto the constraints of a Lambda invocation. But this would be easy to do with a longer-running process that had more resources. The trick here is to use the least compute possible. Something like AWS Batch will give you the advantage of variable-length runtimes without the overhead of managing a full server.
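As a rough illustration of that fit check, here's a quick Python sketch. The limits reflect Lambda as of this episode (a 5-minute timeout and 512 MB of /tmp disk), and the job sizes are made-up examples, not numbers from the story above:

```python
# Rough check of whether a job fits Lambda's constraints
# (circa this episode: 5-minute timeout, 512 MB of /tmp disk).
LAMBDA_MAX_SECONDS = 300
LAMBDA_TMP_BYTES = 512 * 1024 * 1024

def fits_in_lambda(est_runtime_s: float, working_set_bytes: int) -> bool:
    """True only if the job fits both the timeout and the disk limit."""
    return (est_runtime_s <= LAMBDA_MAX_SECONDS
            and working_set_bytes <= LAMBDA_TMP_BYTES)

# Hypothetical zip job: ~2 GB of files, ~20 minutes of work.
print(fits_in_lambda(20 * 60, 2 * 1024**3))   # False: blows both limits
# Hypothetical small job: 100 MB, 30 seconds.
print(fits_in_lambda(30, 100 * 1024**2))      # True: comfortably fits
```

When a job fails both checks, that's the signal to reach for something like Batch or Fargate instead of contorting the workflow to fit Lambda.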

As a side note here, I’ve fallen into the trap in the past of using Lambda so much that every problem starts to look like a Lambda problem. A few months ago I was working on a data ingestion architecture. I spent a lot of time juggling state between parallel Step Functions invocations to try to keep my workflow serverless. But honestly, that code would have been much less complex and easier to coordinate if I had just run it on Batch or Fargate. Use the right tool for the job.

Third, and I’ll mention this quickly, you may have very specific edge cases that don’t fit the current state of serverless technology.

For example, your business may have ultra-low latency requirements between functions that would be better served by colocating them on the same hardware. My Think FaaS partner Jared Short recently encountered someone who needed a specific encoding over UDP packets with no HTTPS for remote IoT devices. If you’re really dealing with something that idiosyncratic, you’re out of the serverless happy path and you’re undoubtedly going to need more control over your infrastructure.

The key takeaway here is that as serverless services continue to mature and add features at a rapid rate, the number of edge cases does decrease. So be sure you know where you stand relative to today’s technology.

Finally, there’s a small chance that the cost model of serverless doesn’t make sense for your application.

On a millisecond-by-millisecond basis, Lambda is more expensive than a comparable EC2 instance. So if you’re handling thousands of requests per second and you’re running hundreds of concurrent Lambda invocations all the time, you may end up paying more for infrastructure than you would for a server fleet. Now remember, server prices aren’t the only thing to consider when looking at the costs of serverless. You also have to think about maintenance overhead, technical debt, and all the other things I went into in our episode on this subject a while back. But when all’s said and done, you may come to the conclusion that your application will do better on dedicated infrastructure.
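To make that sustained-load math concrete, here's a hypothetical back-of-envelope comparison. The rates, request volume, memory size, and fleet size below are all illustrative assumptions, not quoted prices:

```python
# Back-of-envelope monthly cost at sustained high load.
# All numbers are illustrative assumptions, not published rates.
REQS_PER_SEC = 1000
AVG_DURATION_S = 0.1              # 100 ms per invocation
MEM_GB = 0.5                      # 512 MB of Lambda memory
SECONDS_PER_MONTH = 30 * 24 * 3600

LAMBDA_PER_GB_S = 0.0000166667    # assumed GB-second rate
LAMBDA_PER_REQ = 0.0000002        # assumed $0.20 per million requests

requests = REQS_PER_SEC * SECONDS_PER_MONTH
lambda_cost = requests * (AVG_DURATION_S * MEM_GB * LAMBDA_PER_GB_S
                          + LAMBDA_PER_REQ)

EC2_HOURLY = 0.096                # assumed rate for one mid-size instance
FLEET_SIZE = 4                    # hypothetical fleet handling the same load
ec2_cost = EC2_HOURLY * FLEET_SIZE * 24 * 30

print(f"Lambda: ${lambda_cost:,.0f}/mo vs EC2 fleet: ${ec2_cost:,.0f}/mo")
```

Under these assumptions the always-on Lambda bill comes out several times higher than the fleet, which is exactly the scenario where dedicated infrastructure can win. Flip the request rate down to occasional traffic and the comparison inverts, since the fleet cost is fixed and the Lambda cost scales to zero.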

If you’re not sure, I highly recommend the website servers.lol. You can enter your scale requirements and get a pretty reasonable ballpark figure on costs between EC2 and Lambda. But the reality is that before serverless, most of us were running massively overprovisioned for most things. So even if you need to keep certain high-volume systems on traditional servers, you can almost certainly find some savings by using the least compute possible and switching to managed services where you can.

If you’re not sure how serverless fits into your environment, Trek10 would be glad to help. You can find us on Twitter @Trek10inc, or hit me up @forrestbrazeal, and I’ll see you on the next episode of Think FaaS.

Author
Forrest Brazeal