Using Serverless Framework to build and deploy Docker images for AWS Lambdas

AWS Lambdas can be packaged and deployed using a Docker image, as described in the AWS docs here.

Serverless Framework makes building and deploying a Docker-based Lambda incredibly simple. If you have a simple Dockerfile like this (from the AWS docs here):

FROM public.ecr.aws/lambda/nodejs:14

# Assumes your function is named "app.js", and there is a package.json file in the app directory 
COPY app.js package.json ${LAMBDA_TASK_ROOT}/

# Install NPM dependencies for function
RUN npm install

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ] 

The handler to be packaged in this image is this simple hello world function:

exports.handler = async function (event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return "hello!";
};
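
Before wiring this into Serverless, you can test the image locally: the AWS Lambda Node.js base image includes the Runtime Interface Emulator, so you can build and run the container and invoke the function over HTTP. A quick sketch (the image tag here is arbitrary):

docker build -t lambda-container-example .
docker run --rm -p 9000:8080 lambda-container-example

# in another terminal, invoke the handler through the emulator
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'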

To define a Lambda using this image, add an ecr section like this to your serverless.yml to define your image, which will get built using the above Dockerfile in the same folder:

service: lambda-container-1

provider:
  name: aws
  ecr:
    images:
      lambda-container-example:
        path: ./

functions:
  hello:
    image:
      name: lambda-container-example

Run ‘serverless deploy’ and it builds the image, pushes it to ECR, and deploys the Lambda, all for you.

After deploying, on first test I got this error:

  "errorMessage": "RequestId: 58fe500f-26ee-44ba-b6a9-6079b6ff2896 Error: fork/exec /lambda-entrypoint.sh: exec format error",
  "errorType": "Runtime.InvalidEntrypoint"

The key part of this error is the “exec format error”. I’m building and deploying this from my M1 MacBook Pro, which is Apple’s arm64 architecture, not x86_64.
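
You can confirm the mismatch by inspecting the image that was built locally (the tag below is a placeholder; check docker images for the tag your build produced):

docker image inspect <image-tag> --format '{{.Os}}/{{.Architecture}}'
# prints linux/arm64 for an image built on an M1 Mac, linux/amd64 on an Intel machine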

If we look at this Lambda in the AWS Console, on the first page under Image you’ll see the Lambda runtime architecture: it defaults to x86_64, which doesn’t match the arm64 image built on the M1 MacBook Pro.
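
The same information is available from the CLI; assuming the default service-stage-function naming for the deployed function, something like:

aws lambda get-function-configuration \
  --function-name lambda-container-1-dev-hello \
  --query 'Architectures'
# -> ["x86_64"] before the fix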

Updating the serverless.yml to include ‘architecture: arm64’ and redeploying, the architecture is now arm64.
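
Serverless Framework supports setting the architecture at the provider level (applying to all functions) or per function. A minimal sketch of the provider-level change:

service: lambda-container-1

provider:
  name: aws
  architecture: arm64   # match the architecture the image was built on
  ecr:
    images:
      lambda-container-example:
        path: ./

functions:
  hello:
    image:
      name: lambda-container-example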

Invoking with ‘serverless invoke --function hello’ now runs successfully!
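
Since the handler just returns a string, the invoke output is simply the serialized return value:

$ serverless invoke --function hello
"hello!"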

Deploying changes to individual Lambdas using Serverless Framework

I have a serverless project that deploys 2 Lambdas in the same stack:

service: example-apis2

provider:
  name: aws
  memorySize: 512
  region: us-west-1
  apiGateway:
    restApiId: ${env:APIGWID} #  API Gateway to add this api to
    restApiRootResourceId: ${env:RESOURCEID}
functions:
  example2:
    handler: index2.handler
    layers:
      - arn:aws:lambda:us-west-1:[myaccountid]:layer:example-layer:1
    events:
      - http:
          path: api2
          method: get
  example3:
    handler: index3.handler
    layers:
      - arn:aws:lambda:us-west-1:[myaccountid]:layer:example-layer:1
    events:
      - http:
          path: api3
          method: get
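
The handler code itself doesn’t matter for this test; something minimal like this hypothetical index3.js (index2.js is the same shape) is enough:

// index3.js - minimal handler, just enough to exercise redeploying one function
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "hello from example3" }),
  };
};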

After the first deploy, if I do

$ aws lambda get-function --function-name example-apis2-dev-example2
...
"LastModified": "2021-10-27T06:30:31.002+0000",

and

$ aws lambda get-function --function-name example-apis2-dev-example3
...
"LastModified": "2021-10-27T06:30:31.987+0000",

Now if I make a code change only to the example3 Lambda and redeploy just that function with:

serverless deploy function -f example3

… example2 has not been modified since the first deploy (same timestamp as the original deploy):

$ aws lambda get-function --function-name example-apis2-dev-example2
...
"LastModified": "2021-10-27T06:30:31.002+0000",,

and only example3 shows it was updated/redeployed:

$ aws lambda get-function --function-name example-apis2-dev-example3
...
"LastModified": "2021-10-27T06:33:17.736+0000",

serverless deploy: deploys the whole stack (but if nothing has changed there is no update)

serverless deploy function -f functionname: updates just the code for that one Lambda (and completes in a couple of seconds, vs several seconds to update the whole stack).

This is described in this article here.

Deploying multiple Serverless Framework apis to the same AWS API Gateway

By default, each Serverless project you deploy will create a new API Gateway. In most cases this works fine, but for larger projects you may need to split your apis across multiple smaller Serverless projects, each with its own serverless.yml that can be deployed independently.

The Serverless docs describe how to do this here. In each additional Serverless project where you want to add apis to an existing API Gateway, you need to specify two additional properties under apiGateway in your serverless.yml: restApiId and restApiRootResourceId:

provider:
  name: aws
  apiGateway:
    restApiId: xxxxxxxxxx # REST API resource ID. Default is generated by the framework
    restApiRootResourceId: xxxxxxxxxx # Root resource id, i.e. the id of the / path

restApiId – this is the id of the API Gateway you want to add resources to. You can get it from the console, and it’s also the prefix in your API Gateway url, e.g. https://aaaaaaaaaaa.execute-api.us-west-1.amazonaws.com/dev
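
You can also look this up with the CLI; get-rest-apis lists each API Gateway in the region with its id and name:

aws apigateway get-rest-apis --region us-west-1 --query 'items[].{id:id,name:name}'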

restApiRootResourceId – this is the point in your api path structure where you want to add your new resource: either the id of the root / resource, or the id of one of the existing paths beneath it.

I don’t think this id is visible in the console, but you can get a list of all the resources in your API Gateway, including the id of each existing resource, with:

aws apigateway get-resources --rest-api-id aaaaaaaaaaa --region us-west-2

It will give a response that looks like:

{
    "items": [
        {
            "id": "bbbbbb",
            "parentId": "aaaaaaaaaa",
            "pathPart": "example1",
            "path": "/example1",
            "resourceMethods": {
                "GET": {}
            }
        },
        {
            "id": "aaaaaaaaaa",
            "path": "/"
        }
    ]
}

In this example I have a root / with id aaaaaaaaaa and a resource bbbbbb for /example1.

In this case, if I pass aaaaaaaaaa as the value for restApiRootResourceId, my new resource will be added under /; if I pass bbbbbb instead, it will be added as a resource under /example1.
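
Putting that together, a sketch of the provider section for a project that adds its new apis under the existing /example1 path, using the hypothetical ids from the get-resources output above:

provider:
  name: aws
  apiGateway:
    restApiId: aaaaaaaaaaa            # id of the existing API Gateway
    restApiRootResourceId: bbbbbb     # add the new resources under /example1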