AWS Lambda Docker container runtime error: Runtime exited with error: exit status 127

While testing my Lambda packaged as a Docker container, I’m getting this runtime error when the Lambda is invoked:

Process 17(bootstrap) exited: Runtime exited with error: exit status 127

Exit status 127 from a shell usually means “command not found”. Going back through the Lambda logs, luckily there’s a line pinpointing the problem in my shell script:

/var/task/test.sh: line 5: output: command not found

Quick and easy fix.
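The actual test.sh isn’t shown here, but as a hypothetical example of the kind of mistake that produces this message, putting spaces around a variable assignment makes the shell try to run ‘output’ as a command:

#!/bin/sh
# Wrong: the shell parses this as running a command named "output"
output = "some value"

# Right: no spaces around the assignment
output="some value"
echo "$output"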

Debugging Docker container builds

When running ‘docker build . -t imagename’ to build a new image, each step in your Dockerfile outputs a one-line status. If you need to see the actual output of each step, you need to pass the --progress=plain option, and if your build is stopping at a particular step and you need to see the output of previous steps that are now cached, you can add the --no-cache option:

docker build --progress=plain --no-cache . -t imagename
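Note that --progress is a BuildKit option; if your Docker version doesn’t enable BuildKit by default, you can turn it on for a single build with an environment variable:

DOCKER_BUILDKIT=1 docker build --progress=plain --no-cache . -t imagename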

Using Serverless Framework to build and deploy Docker images for AWS Lambdas

AWS Lambdas can be packaged and deployed using a Docker image, described in the docs here.

Serverless Framework makes building and deploying a Docker-based Lambda incredibly simple. If you have a simple Dockerfile like this (from the docs here):

FROM public.ecr.aws/lambda/nodejs:14

# Assumes your function is named "app.js", and there is a package.json file in the app directory 
COPY app.js package.json  ${LAMBDA_TASK_ROOT}

# Install NPM dependencies for function
RUN npm install

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ] 

The handler packaged in this image is this simple hello world function:

exports.handler = async function(event, context) {
    console.log("EVENT: \n" + JSON.stringify(event, null, 2))
    return "hello!"
}
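Before wiring this up to Serverless Framework, you can sanity-check the image locally: the AWS Lambda base images bundle the Runtime Interface Emulator, so you can run the container and POST a test event to it (the ‘imagename’ tag here is just an example):

docker build -t imagename .
docker run -p 9000:8080 imagename

# in a second terminal, send an empty test event to the emulator
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'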

To define a Lambda using this image, add an ecr section like this to your serverless.yml; the image gets built using the above Dockerfile in the same folder:

service: lambda-container-1

provider:
  name: aws
  ecr:
    images:
      lambda-container-example:
        path: ./

functions:
  hello:
    image:
      name: lambda-container-example

Run ‘serverless deploy’ and it builds the image, uploads it to ECR, and deploys the Lambda, all for you.

After deploying, on first test I got this error:

  "errorMessage": "RequestId: 58fe500f-26ee-44ba-b6a9-6079b6ff2896 Error: fork/exec /lambda-entrypoint.sh: exec format error",
  "errorType": "Runtime.InvalidEntrypoint"

The key part of this error is “exec format error”. I’m building and deploying this from my M1 MacBook Pro, which is Apple’s arm64 architecture, not the x86_64 that Lambda uses by default, so the image doesn’t match the Lambda runtime architecture.
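As an aside, if you’re running the ‘docker build’ yourself, buildx can cross-build an x86_64 image from an arm64 machine, but since Serverless Framework is building the image here, the simpler fix is to tell Lambda to use arm64 instead:

docker buildx build --platform linux/amd64 -t imagename .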

If we look at this Lambda in the AWS Console, the first page shows the Lambda runtime architecture under Image, confirming it was deployed as x86_64.

Updating the serverless.yml to include ‘architecture: arm64’ and redeploying fixes this; the architecture now shows as arm64.
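In current Serverless Framework versions the architecture can be set at the provider level (applying to all functions) or per function; a minimal sketch extending the serverless.yml above:

provider:
  name: aws
  architecture: arm64
  ecr:
    images:
      lambda-container-example:
        path: ./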

Invoking with ‘serverless invoke --function hello’, and now it successfully runs!
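The hello world handler’s return value shows up as the invoke output:

"hello!"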