
Deploying Multi-Service Architectures with Lambda Layers, SAM, and GitLab CI

Struggling to organize and deploy your SAM services and share code? Lambda Layers and GitLab CI can help! Let's dive in.
Jessica Ribeiro | Jul 19 2021
4 min read

Ever since AWS Lambda became generally available in 2015, engineers at the forefront of serverless application development have been trying to find ways to improve how serverless applications are organized, built, and deployed. In this post, we'll cover two orthogonal-but-compatible AWS tools, AWS SAM and Lambda Layers, and show how they can be combined to improve organization and code reuse while also laying out a clear delivery process. We'll walk through a step-by-step guide for using SAM and Lambda Layers to build multiple Node.js services with shared common code, with builds and deploys automated via GitLab CI. All the code for this example is available here.

What is SAM?

AWS describes SAM, short for Serverless Application Model, as an "open-source framework for building serverless applications". SAM is not a single tool or service; it is a system that combines a CLI, extended CloudFormation, and supporting AWS services. The AWS::Serverless CloudFormation transform allows users to define higher-level constructs for Lambda functions, API Gateways, and more, making it easier and quicker to configure and deploy complex services without third-party dependencies. The CLI wraps the core functionality of the AWS CLI's CloudFormation commands, while also providing ways to generate new projects and run parts of them locally. From the point of view of deploying a project or application, there are three main CLI commands: build, package, and deploy. sam build runs package installation for npm and pip. sam package uploads code artifacts and creates a modified SAM template with the correct URLs filled in. Finally, sam deploy deploys a packaged template via CloudFormation.
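
For a single application, that flow is just three commands. A minimal sketch of the sequence; the bucket and stack names here are placeholders rather than values from the example project:

sam build
sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
sam deploy --template-file packaged.yaml --stack-name my-sam-app --capabilities CAPABILITY_IAM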

What are Lambda Layers?

Lambda Layers are a way to package various files for reuse between Lambda functions. These may be consumed by functions in the same deployment, in different applications, or even in different AWS accounts. Many examples of Lambda Layers focus on things like bundling precompiled binaries or common external packages, which is useful, but only to a degree. With AWS-supported runtimes, Layers can be built in a way that makes their contents available on the runtime's dependency paths. For example, if a Lambda Layer is intended to be consumed by a Node.js Lambda function, placing code such as a node_modules directory in the Layer's nodejs directory results in that code being made available as part of NODE_PATH. Layers are mounted inside of /opt, so a Layer with logger.js inside of its nodejs directory could be referenced by code in the consuming Lambda function like this:

const logger = require('/opt/nodejs/logger');
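
For illustration, the shared module required above could be as simple as the following sketch (this exact logger is an assumption, not code from the example repository), placed at shared/logger.js so the Layer build copies it under the nodejs directory:

// shared/logger.js (hypothetical)
module.exports = {
  info: (...args) => console.log(new Date().toISOString(), 'INFO', ...args),
  error: (...args) => console.error(new Date().toISOString(), 'ERROR', ...args),
};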

Why SAM and Layers Work Well Together

SAM requires a certain kind of code organization that makes it essentially impossible to directly access logic common to multiple applications in the same repository. However, armed with knowledge of how to build arbitrary Layers that integrate with your runtime, you can build a Layer containing the common logic and deploy it alongside each SAM application in a repository. This does work in practice, with some upfront complexity. Getting tools like Jest and ESLint to work requires a few config changes and extra packages. In addition, both to keep the configuration tidy and to interface well with the Lambda Layer build mechanics, Make works well for constructing workflows for testing, building, deploying, and so on. The rest of this post will focus on practically organizing a Node.js project of this kind, patching the gaps in Jest and ESLint, and deploying it with Make and the SAM CLI.

Project organization changes

We're going to adopt a service-oriented repository structure. The top-level files relate to project-level functions such as testing, linting, building, and deployment, all of which should be standardized across the code in a single repo as much as possible.

The first level of subdirectories will generally be the individual services. Within each service, we'll organize more like a regular Node.js package, with a src/ directory next to things like Makefile, package.json, and template.yaml. This promotes some level of intra-service code sharing.

Inter-service code sharing is then accomplished with the Lambda Layer, which is constructed from the shared/ directory. This is structured similarly to the services, except that code does not need to live in a src/ subdirectory, which keeps module paths shorter if you prefer.

Example project structure

repo-root/
  serviceX/
    src/
      code-files.js
      code-files.test.js
    .npmignore
    Makefile
    package.json
    package-lock.json
    README.md
    template.yaml
  (repeat for more services)
  misc/
    artifact-bucket.yaml
    Dockerfile
  shared/
    (node_modules/)
    shared-code.js
    shared-code.test.js
    .npmignore
    Makefile
    package.json
    package-lock.json
    README.md
    template.yaml
  .eslintrc.js
  .gitignore
  Makefile
  package.json
  package-lock.json
  README.md
  template.yaml

Making ESLint and Jest play nice with absolute pathing

Since the Lambda code consuming the shared files in the Lambda Layer needs to reference the shared custom code by its absolute path (our logger was at /opt/nodejs/logger), this creates problems for testing and linting that code, as the path on any system used for development will almost certainly differ. Fortunately, ESLint and Jest have ways of playing along. Both require configuration for the shared npm packages and for the custom modules.

To make our tests play nice with Jest, we need the modulePaths option for the shared npm packages and the moduleNameMapper option for mapping the shared custom modules. modulePaths takes a list of directories to be treated as if they were part of NODE_PATH, and moduleNameMapper takes key-value pairs where the key is a pattern matching the runtime path of the module and the value is its local path. The following Jest config (placed in package.json) sets this up, assuming tests are named like app.test.js:

{
  "jest": {
    "testMatch": [
      "**/?(*.)+(spec|test|e2e).js?(x)"
    ],
    "testPathIgnorePatterns": [
      "/node_modules/",
      "/__tests__/"
    ],
    "moduleNameMapper": {
      "/opt/nodejs/(.*)": "<rootDir>/shared/$1"
    },
    "modulePaths": [
      "<rootDir>/shared/node_modules/"
    ]
  }
}
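
With that mapping in place, a test can require a shared module by its runtime path and Jest will resolve it locally. A minimal sketch, assuming a hypothetical shared logger module that exposes an info function:

// serviceX/src/code-files.test.js (illustrative)
const logger = require('/opt/nodejs/logger'); // rewritten to <rootDir>/shared/logger by moduleNameMapper

test('shared logger exposes an info method', () => {
  expect(typeof logger.info).toBe('function');
});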

To get ESLint to work, we need two npm packages: eslint-plugin-import and eslint-import-resolver-alias. These add support for the import/resolver configuration and for custom path aliases. Under import/resolver in the ESLint config, the node setting acts like a NODE_PATH shim for the shared npm packages, and the alias setting maps /opt/nodejs to ./shared for custom shared code, which is what our build and deployment process effectively does. The following .eslintrc.js example should add this functionality to an existing configuration.

module.exports = {
  settings: {
    'import/resolver': {
      node: {
        moduleDirectory: [
          'node_modules',
          'shared/node_modules',
        ],
      },
      alias: {
        map: [
          ['/opt/nodejs', './shared'],
        ],
      },
    },
  },
};

Makefiles and build processes with recursive make

Make review

Make is one of the oldest, if not the oldest, dependency tracking build utilities. If you were ever a CS student working with C/C++, there's a solid chance that the name at least rings a bell. It comes from a time long before CI/CD, git, DevOps, and all of this fancy cloud stuff. While its age shows in some of its quirks, it is still used today throughout the industry, including as an option for building Lambda Layers.

Our goal with Make here was to achieve something similar to a modern config-as-code CI/CD system (like GitLab CI!). Most of those systems use a single file to contain all of the config, but some allow for multiple files. In order to simplify the build process and allow custom tuning for each SAM application, the build process was split among multiple Makefiles, with the top-level Makefile recursively calling the ones in the shared and application folders. With a few interesting patterns, you can end up with a fairly clean implementation.

Note that the following details apply to Make 3.82. Certain options may not apply to older or newer versions.

Make has a number of special targets and features that become very useful in making this feel closer to modern tooling:

1) SHELL and .SHELLFLAGS: Setting these allows you to control the shell used to execute recipes. SHELL sets the actual shell executable command used, e.g.: SHELL := bash, and .SHELLFLAGS sets the options passed to the shell executable, e.g.: .SHELLFLAGS := -euo pipefail -c. The -c flag is needed for Make to properly execute commands with bash and is usually supplied by default by Make; since the default is being overridden, it still needs to be included in the new value.

2) .EXPORT_ALL_VARIABLES: Declaring this target exports all Make variables to the environment of the commands your recipes run, so they are available as ordinary environment variables without being passed explicitly.

3) Macros: In order to get something close enough to variable blocks in modern CI/CD systems, we can combine macros with the .EXPORT_ALL_VARIABLES target. Macros allow for reusable chunks of commands. Invoke the macro named "your-macro" with $(eval $(call your-macro)). If done at the top level of the Makefile, this should always execute the macro. If done inside of a recipe, the macro gets executed just like any other command when that recipe is run. Here is a sample macro declaration:

define env-vars
APPLICATION_NAME=sam-layers
REGION=us-east-1
endef

4) Wildcard % and variable $*: Targets may contain a single use of the wildcard %, which matches any string. If % is used in the target, the matched string is substituted into any prerequisite names that also contain %. For example, a target deploy-% with a prerequisite of test-% would run test-dev and then deploy-dev if deploy-dev was the initial target. The matched string can also be referenced in a recipe's commands with the automatic variable $*, which makes it easy to call the macro holding the current target environment's variables (even macro invocations can include $*). Here is an example of the start of a deploy-% recipe:

deploy-%: build-%
  $(eval $(call env-vars-$*))
  echo "doing the deployment stuff!!!"

This toolkit is enough to write straightforward scripts and use wildcard targets and variables to drive the build and deployment of the various applications while making future development as easy as possible.
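
To make that concrete, a top-level Makefile in the spirit of this project might look roughly like the following. This is a sketch under assumptions: the component directory names, bucket naming, and per-environment macro are illustrative, not the exact contents of the example repository's Makefile.

SHELL := bash
.SHELLFLAGS := -euo pipefail -c
.EXPORT_ALL_VARIABLES:

# Hypothetical per-environment variable block (one per environment: env-vars-dev, env-vars-prod, ...)
define env-vars-dev
APPLICATION_NAME=sam-layers
REGION=us-east-1
endef

# Directories that contain their own Makefiles
COMPONENTS := shared app1 app2

# Recursively run the matching build target in each component directory
build-%:
	for dir in $(COMPONENTS); do $(MAKE) -C $$dir build-$*; done

# Load the environment's variables, then package and deploy the root template
deploy-%: build-%
	$(eval $(call env-vars-$*))
	sam package --s3-bucket $(APPLICATION_NAME)-artifacts --output-template-file packaged.yaml
	sam deploy --template-file packaged.yaml --stack-name $(APPLICATION_NAME)-$* --region $(REGION) --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND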

CloudFormation and the build/deploy process

On the other side of all of this is the SAM/CloudFormation template. It is worth taking a peek at how the main stack nests additional SAM applications and how individual Lambda Functions are described. The main stack's role is to orchestrate the deployment of the Lambda Layer and SAM applications.

The following bit of CloudFormation is a bare bones SAM template that contains two SAM applications and a Lambda Layer. If you are familiar with nested CloudFormation stacks (AWS::CloudFormation::Stack), understand that AWS::Serverless::Application is essentially the same thing.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  App1:
    Type: AWS::Serverless::Application
    Properties:
      Location: ./app1/packaged.yaml
      Parameters:
        SharedLayerArn: !GetAtt
          - SharedLayer
          - Outputs.SharedLayerArn

  App2:
    Type: AWS::Serverless::Application
    Properties:
      Location: ./app2/packaged.yaml
      Parameters: 
        SharedLayerArn: !GetAtt
          - SharedLayer
          - Outputs.SharedLayerArn

  SharedLayer:
    Type: AWS::Serverless::Application
    Properties:
      Location: ./shared/packaged.yaml

The most interesting thing to note here is that Location references a packaged.yaml file inside of the service and layer directories, but this file does not exist in the hierarchy established previously. That is because it is generated during the SAM/CloudFormation packaging process (e.g., sam package), which notably replaces local paths with the S3 locations of the just-uploaded code artifacts. Because the SAM CLI (and likewise the AWS CLI) is not capable of recursing through nested applications or stacks, they must be packaged before the main stack is packaged.
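
In practice that means packaging from the inside out. The following shell sketch shows the ordering; the bucket and stack names are placeholders, and the example project drives this through the Makefiles described earlier:

# Package the shared layer and each application first
for dir in shared app1 app2; do
  (cd "$dir" && sam build && sam package \
      --s3-bucket my-artifact-bucket \
      --output-template-file packaged.yaml)
done

# Only then can the root template, whose Location values point at those
# packaged.yaml files, be packaged and deployed
sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
sam deploy --template-file packaged.yaml --stack-name sam-layers-example-dev --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND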

Packaging the nested applications requires the use of an .npmignore file to determine what should be excluded. In addition, functions in the nested applications need a reference to the shared layer and to have their CodeUri and Handler paths set correctly. This configuration works for the examples discussed here:

HelloWorldFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Sub ${RootStackName}-app1-hello-world-${EnvironmentName}
    Handler: src/app.lambdaHandler
    Timeout: 3
    Runtime: nodejs12.x
    Tracing: PassThrough
    CodeUri: src
    Layers:
      - !Ref SharedLayerArn
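
For completeness, a minimal app.js exporting lambdaHandler and using the shared Layer might look like the sketch below; the file location follows the repository layout shown earlier, and the function body is an illustrative assumption:

// app1/src/app.js (illustrative)
const logger = require('/opt/nodejs/logger'); // provided by the shared Layer at runtime

exports.lambdaHandler = async (event) => {
  logger.info('received event', event);
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'hello world' }),
  };
};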

Finally, there is one last special build target, which is used to build the Lambda Layer. SAM supports building either via a default configuration for the target runtime or via a Makefile. Since our structure does not match what works for the default configuration, the Makefile path is the only option. In the shared layer's Makefile, add a build target named after the logical ID of the AWS::Serverless::LayerVersion in the shared layer's CloudFormation template, in the format build-LogicalIdHere. The one in this example is called SharedLayer, making the correct build target build-SharedLayer. In addition, in the CloudFormation template, add a Metadata section that sets BuildMethod: makefile. With this configured, write the commands for the build target. It needs to create the target runtime directory (the one that ends up under /opt) and put any shared packages or custom code in there. AWS supplies the ARTIFACTS_DIR environment variable for this target, containing the path where the runtime directory should go, so the /opt path does not need to be hardcoded. The following CloudFormation snippet and Makefile target work for our example:

SharedLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    CompatibleRuntimes:
      - nodejs12.x
    ContentUri: ./
    Description: test
    LayerName: !Sub sam-layers-example-shared-code-${EnvironmentName}
    LicenseInfo: MIT
  Metadata:
    BuildMethod: makefile
build-SharedLayer:
	mkdir -p "$(ARTIFACTS_DIR)/nodejs"
	npm install --loglevel=error
	cp -R node_modules $(ARTIFACTS_DIR)/nodejs/node_modules
	find . -name node_modules -prune -o -name "*.test.js" -prune -o -name "*.js" -exec rsync -R '{}' $(ARTIFACTS_DIR)/nodejs \;

Deployment with GitLab CI

In order to integrate all of this into a real production workflow, we need to drop it into a CI/CD system to automate testing and deployment. GitLab CI is a feature-rich CI/CD system that is heavily used at Trek10. Of course, since this setup leans heavily on shell scripts and Docker images, you could relatively easily adapt it to the CI system of your choice. We configure .gitlab-ci.yml with jobs to run tests, linting, SAM validation, and deployment. Fundamental to this CI system is a Docker image that serves as the context for running the various jobs. This image needs to be loaded with any required packages and tools. Starting from the Node.js 12.x build version of the lambci/lambda image, we create a custom image that adds the few extra tools we need:

FROM lambci/lambda:build-nodejs12.x

RUN yum update -y \
    && yum clean all

RUN pip3 install awsume cfn-lint

Because of all of the work done inside of the Makefile, these job scripts become exceptionally simple to write, as illustrated by the script for the SAM validation job:

.validate_sam: &validate_sam
  stage: test
  script: |
    echo "===== Stage => ${STAGE_NAME//-/}, Region => ${REGION} ====="

    echo "===== Assuming permissions ====="
    . awsume --role-arn ${DEPLOYMENT_ROLE} --region ${REGION}

    echo "===== Validating SAM templates ====="
    make validate-components-${STAGE_NAME}

This snippet is then reused in the jobs for each stage. One element of note here is the awsume command, which we included in our Docker image. Awsume is a utility produced by Trek10 to simplify cross-account role assumption; sourcing its output loads the temporary AWS credentials into the environment. Deployment is slightly more complex, but only in order to handle stacks that get stuck after a failed first deployment:

.deploy_sam: &deploy_sam
  stage: deploy
  script: |
    echo "===== Stage => ${STAGE_NAME//-/}, Region => ${REGION} ====="

    echo "===== Assuming permissions ====="
    . awsume --role-arn ${DEPLOYMENT_ROLE} --region ${REGION}

    STACK_NAME=sam-layers-example-${STAGE_NAME}

    echo "===== Checking for stuck stack ${STACK_NAME} ====="
    export STATUS=$(aws cloudformation describe-stacks --stack-name ${STACK_NAME} --output text --query "Stacks[0].StackStatus" --region ${REGION})
    if [ ${STATUS} == "ROLLBACK_COMPLETE" ]; then
      echo "===== Found existing stack stuck in ROLLBACK_COMPLETE. Deleting... ====="
      aws cloudformation delete-stack --stack-name ${STACK_NAME} --region ${REGION}
      aws cloudformation wait stack-delete-complete --stack-name ${STACK_NAME} --region ${REGION}
    fi

    echo "===== Deploying ${APP_DIR} as stack ${STACK_NAME} ====="
    make deploy-${STAGE_NAME}

Combining these snippets with the rest of a proper job configuration for each stage yields a working CI configuration that can take one of these complex projects from development all the way to production.
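
As an illustration of that reuse, a stage-specific job can extend the YAML anchor and supply its own variables; the values below are placeholders rather than the configuration in the example repository:

validate sam dev:
  <<: *validate_sam
  variables:
    STAGE_NAME: dev
    REGION: us-east-1
    DEPLOYMENT_ROLE: arn:aws:iam::111111111111:role/example-deployment-role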

Summary

While the process needed to create these well-organized projects with nested applications and shared code is not trivial, it is a solid structure that leaves plenty of room to build out large-scale sets of services. It should now be clear what SAM and Layers are and why they work well together. In addition, the hurdles to getting multiple Node.js services deployed to production environments with GitLab CI should be greatly reduced by following the steps outlined above. Happy building!
