Cloud Native

How and When to Use Amazon EventBridge Pipes

Amazon EventBridge Pipes: Useful, but not magical.
Matt Skillman | Aug 28 2023
4 min read

Amazon EventBridge Pipes were announced in December of 2022, yet we still haven't seen much usage of them in serverless architectures; I feel there is still uncertainty about the utility of this feature. Specifically, there is not yet widespread awareness of the feature's existence or of the typical scenarios in which you would want to use a Pipe. I hope to clear up some of this confusion by sharing thoughts I've gathered while using this service. In particular, as the title of this post suggests, I'll identify both how and when this new feature is relevant.

A brief summary of the purpose of EventBridge Pipes: it serves as an AWS-native way to "glue" two AWS resources together. This entails simply moving information from one "Source," such as an SQS queue, into some "Target," such as a Kinesis stream. It is important to note that technically any system that exposes an HTTP API, whether it runs inside AWS or not, can be used as a Target for an EventBridge Pipe by virtue of the EventBridge API Destinations feature. However, only a handful of services may serve as a Source: SQS queues, Kinesis streams, DynamoDB streams, Amazon MSK topics, self-managed Apache Kafka topics, and Amazon MQ brokers.

First off, as one might expect, the new Pipes feature is not "magic": you will often need to define an InputTemplate, at a minimum, to transform the Pipe's input into a format your Target accepts. The service will not automatically detect your specific intentions, and Pipes will not automatically translate the output of one system into the appropriate input for another AWS service.
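To make that transformation concrete, here is a minimal JavaScript sketch of what an InputTemplate does (applyInputTemplate is a hypothetical helper for illustration, not part of any AWS SDK): placeholders such as <$.body> are replaced with values pulled from the source event, and Pipes implicitly parse string fields that contain valid JSON.

```javascript
// Hypothetical illustration of InputTemplate substitution.
// Placeholders look like <$.fieldName> and reference the source event.
function applyInputTemplate(template, record) {
  return template.replace(/<\$\.(\w+)>/g, (_, key) => {
    const value = record[key];
    // Pipes implicitly parse string fields containing valid JSON;
    // non-JSON strings are inserted as quoted strings.
    try {
      return JSON.stringify(JSON.parse(value));
    } catch {
      return JSON.stringify(value);
    }
  });
}

const sqsRecord = { body: '{"orderId":"123"}' };
applyInputTemplate('{"originalEvent":<$.body>}', sqsRecord);
// → '{"originalEvent":{"orderId":"123"}}'
```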

As an example, we might consider placing an EventBridge Pipe between an SQS queue serving as a DLQ and a Lambda consumer, in order to allow failed events to be "redriven" back into the Lambda consumer. One could ask, "Why not use the existing SQS->Lambda integration that Lambda already supports?" This is a fair point, but the Lambda would then need to be adjusted to support invocations from both SQS and EventBridge, since the events it receives from the two services differ in format. You might expect me to say here, "This is where EventBridge Pipes come in and allow for integration without code changes." Unfortunately, the Pipes feature is not quite that magical: some minimal code changes will still be required. The one convenience is that Pipes support implicit body parsing, which saves you a single JSON.parse call in your handler. If we are committed to supporting redrive functionality from the DLQ that mimics existing EventBridge Rule invocations as closely as possible, we might use something like this:

LambdaPipe:
    Type: AWS::Pipes::Pipe
    Properties:
        Name: 'my-service-lambda-pipe'
        RoleArn: !GetAtt PipeRole.Arn
        Source: !GetAtt LambdaSQSDLQueue.Arn
        SourceParameters:
            SqsQueueParameters:
                BatchSize: 1
        Target: !GetAtt MyLambdaFunction.Arn
        TargetParameters:
            InputTemplate: '{"originalEvent":<$.body>}'
            LambdaFunctionParameters:
                InvocationType: FIRE_AND_FORGET # async
        DesiredState: STOPPED # in actual usage, we will flip this to RUNNING


The approach seen above will still require code changes: when "redriving" from the DLQ, the event will now be an array rather than an object, and we'll need to read the "originalEvent" property of the array's single element to recover what the EventBridge event originally looked like.
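One way to absorb that code change is a small normalization step in the handler. The sketch below assumes the "originalEvent" key from the InputTemplate in the Pipe definition above; normalizeEvents is a hypothetical helper name.

```javascript
// Normalize either invocation shape to a flat list of EventBridge events:
// - a direct EventBridge Rule invocation delivers a single event object;
// - the redrive Pipe delivers an array of records, each carrying the
//   "originalEvent" key set by the Pipe's InputTemplate.
function normalizeEvents(event) {
  return Array.isArray(event)
    ? event.map((record) => record.originalEvent)
    : [event];
}

// In the Lambda handler, iterate over normalizeEvents(event) instead of
// assuming a single event shape.
```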

The previously described situation serves as an example of where not to use Pipes, so I think it would be useful to describe an example of where Pipes can truly shine: simple replacements of existing Lambda functions that serve only as a connector between two systems. "Simple" here could still include some filtering or enrichment, as Pipes support both, but the ideal case is replacing a Lambda function that is about three lines of code with a Pipe, such as:

export const handler = async (event, context) => {
    logger.info({ event, context, message: 'myMessage' });
    return sendPayloadIntoMyBus(event);
};


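Assuming hypothetical resource names, a Pipe replacing that glue function could be as small as the following; no InputTemplate is needed when the Target accepts the Source's payload as-is.

```yaml
GluePipe:
    Type: AWS::Pipes::Pipe
    Properties:
        Name: 'my-service-glue-pipe'
        RoleArn: !GetAtt PipeRole.Arn
        Source: !GetAtt MySourceQueue.Arn
        Target: !GetAtt MyEventBus.Arn
        TargetParameters:
            EventBridgeEventBusParameters:
                DetailType: 'my-detail-type'
                Source: 'my.service'
```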
Any Lambda function that serves purely as the "glue" between systems may easily be replaced with a Pipe, giving you less code to manage and one less Lambda to worry about. Costs can drop as well. Pipes are currently billed at a flat rate of 40 cents per million requests, while Lambda's request cost alone is 20 cents per million requests. If your function is configured with 128 MB of memory and takes 100ms to complete, the associated "GB-seconds" cost of those million requests adds roughly another 21 cents,1 bringing Lambda to about $0.41 per million, already slightly above the Pipes rate. The gap widens with heavier functions: at 512 MB and 500ms, the same million requests cost about $4.37 with Lambda, so moving to Pipes is roughly a 91% cost reduction, which is clearly significant if you happen to be processing billions of requests.
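The arithmetic behind these figures can be sketched as follows, using the public per-million rates quoted above (the function name is mine, for illustration):

```javascript
// Back-of-the-envelope Lambda cost per million invocations:
// $0.20 per 1M requests, plus $0.0000166667 per GB-second of compute.
function lambdaCostPerMillion(memoryGb, durationSeconds) {
  const requestCost = 0.20;
  const gbSeconds = 1e6 * memoryGb * durationSeconds;
  return requestCost + gbSeconds * 0.0000166667;
}

const pipesCostPerMillion = 0.40; // flat rate per 1M requests

lambdaCostPerMillion(0.125, 0.1); // ≈ $0.41 — close to the Pipes rate
lambdaCostPerMillion(0.5, 0.5);   // ≈ $4.37 — Pipes are ~91% cheaper
```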

To get started with EventBridge Pipes, ideally it is as simple as adjusting existing IaC definitions to include EventBridge Pipe resources instead of Lambda functions; all commonly used IaC tooling now supports this feature. If you need help getting the InputTemplate working correctly, the Amazon EventBridge Transformer tool helps tremendously. Additionally, the AWS console itself conveys the process of creating and configuring EventBridge Pipes nicely, so I would suggest using the console as a visual aid while still defining the Pipes within IaC.

1 (0.125 GB × $0.0000166667 per GB-second) × (1,000,000 requests × 0.1 seconds per request) ≈ $0.21