Jared Short · think-faas · 5 minutes to read

‘We Need to Talk About Multicloud’ - Think FaaS Podcast


Transcript

Howdy once again, I’m Jared Short at Trek10, and this is ‘Think FaaS’, where we learn about the world of serverless computing in less time than it takes to run a Lambda function. So put five minutes on the clock - it’s time to ‘Think FaaS’.

This episode ventures into some of the most controversial waters we have yet, and while I am cautious about doing so, I think it’s an important topic to foster discussion around. Today’s content also isn’t strictly applicable to serverless; it’s more a general insight into public cloud.

The multicloud vision, since the early days of public cloud, has been built up as a promised land at the intersection of scale economics and vendor agnosticism. Simply put, I don’t think this vision has been realized, and I have strong doubts that, given the current landscape, we will ever see an effective play at the full vision. However, that doesn’t preclude other plays on multicloud that I think are genuinely interesting.

Let’s look at multicloud from the economic perspective. The dream is that we could ship compute, storage, networking, and so on to the cheapest vendor at any given time. We may be running hordes of spot instances for pennies on the dollar in AWS one day, and the next day that workload has shifted to Azure or GCP to save us a few bucks over the course of the day. I want to step back and take a look at the hidden costs that may lurk here.

First off, truly understanding a cloud provider’s offerings and building resilient infrastructure can take months or years. If you want to do that multicloud, you need to understand many disparate and sometimes arcane incantations; the engineering overhead and training is probably a 2x or 3x investment. Even once you understand what it takes to re-platform, you are going to be dealing with the raw capability of portability. Each provider offers different storage layers, networking layers, and so on. There is no pragmatic approach to creating a “write once, run anywhere” code-defined infrastructure that will leverage anything but the most basic offerings of each cloud provider. On top of all that, monitoring complexity increases, and you’ll spend an unreasonable amount of time ironing out monitoring and alerting to make sure it stays sane and actionable. Even then, once you have all this in place, you still need to have written the glue between the platforms to allow yourself to make those migrations, and you need to test them regularly. If you think DR within a single cloud vendor is bad, this would be a categorical nightmare.

How about the benefit of vendor agnosticism? An enterprise may consider itself no longer beholden to the will and feature roadmap of a single provider; if one doesn’t give you what you want, you go to the one that will! But if we are being honest, in nearly every case a truly cloud-native approach means leveraging a particular platform, with all its features, as much as you can. The deeper you integrate, the greater a force-multiplier each feature becomes. For instance, AWS Lambda is at its best when you are directly tying into events from S3 when new objects are created, or subscribing to Kinesis Streams with managed iterators. Using Lambda as nothing but small chunks of compute gives up so much underlying power that I would say you might as well just go run containers on Kubernetes (preferably managed, of course)! Just remember this: operating with strict vendor-agnosticism requirements limits you to only the least-common-denominator services of the cloud providers you have chosen, leaving you holding the reins on all sorts of complexity that would otherwise have been handled for you.
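To make the S3-to-Lambda integration concrete, here is a minimal sketch of a Python Lambda handler reacting to S3 "ObjectCreated" notifications. The event shape follows the standard S3 event notification format; the `handler` name and the returned `processed` structure are illustrative choices, not anything prescribed by the episode.

```python
import urllib.parse

def handler(event, context):
    """Minimal AWS Lambda handler for S3 "ObjectCreated" notifications.

    Walks the records in the S3 event payload and pulls out the bucket
    and key of each newly created object.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the notification payload.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append({"bucket": bucket, "key": key})
        # Real work (resize an image, index a document, etc.) would go
        # here, typically via boto3 calls to other AWS services.
    return {"processed": processed}
```

The point of the example is the integration itself: the handler never polls anything; S3 invokes it directly on each upload, which is exactly the kind of deep, platform-specific leverage a strict vendor-agnostic posture gives up.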

I understand this took a negative path, but I want to bring us home with a brighter picture and discuss where I think there are some interesting plays. Instead of thinking about multicloud from the conceptual cage of vendor lock-in, let’s think about it with an opportunistic “clouds as composable features” mindset. If you are willing to allow different teams or service operators to buy into a particular public cloud, and build using all of its features and deep integrations, I see a future where each service you operate can leverage the absolute best of whichever cloud it chooses. Maybe you run AI and data science on GCP, an analytics streaming service on AWS, and your distributed ticketing system on Azure. Each exposes well-thought-out interfaces to the others, but no single team or service is built for more than one cloud vendor. You’d still make some sacrifices, but you would be set up for success if new features or cloud players arrive on the scene that make sense for you to leverage.

The next few years are going to be interesting, and I would love to hear your thoughts on Twitter @shortjared. While you are there, you can follow @Trek10inc for more serverless goodness. See you next time on Think FaaS!