Debugging Docker container builds

When running a ‘docker build . -t imagename’ to build a new image, each of the steps in your Dockerfile outputs a one-line status, but if you need to see the actual output of each step, you need pass the –progress-plain option. If your build is stopping at a particular step and you need to see the output of previous steps that are now cached, you can use the –no-cache option:

docker build --progress=plain --no-cache . -t imagename

Using Serverless Framework to build and deploy Docker images for AWS Lambdas

AWS Lambdas can be packaged and deployed using a Docker image, as described in the AWS docs.

Serverless Framework makes building and deploying a Docker based Lambda incredibly simple. If you have a simple Dockerfile like this (from the AWS docs):

FROM public.ecr.aws/lambda/nodejs:14

# Assumes your function is named "app.js", and there is a package.json file in the app directory 
COPY app.js package.json ${LAMBDA_TASK_ROOT}/

# Install NPM dependencies for function
RUN npm install

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ] 

The handler packaged in this image is a simple hello world function:

exports.handler = async function(event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2))
  return "hello!"
}

To define a Lambda using this image, add an ecr section like this to your serverless.yml; the image will be built using the above Dockerfile in the same folder:

service: lambda-container-1

provider:
  name: aws
  ecr:
    images:
      lambda-container-example:
        path: ./

functions:
  hello:
    image:
      name: lambda-container-example

Run ‘serverless deploy’ and it builds the image, uploads it to ECR, and deploys the Lambda, all for you.

After deploying, on first test I got this error:

  "errorMessage": "RequestId: 58fe500f-26ee-44ba-b6a9-6079b6ff2896 Error: fork/exec /lambda-entrypoint.sh: exec format error",
  "errorType": "Runtime.InvalidEntrypoint"

The key part of this error is “exec format error”. I’m building and deploying this from my M1 MacBook Pro, which is Apple’s arm64 architecture, not x86_64.
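One quick way to confirm the mismatch is to inspect the image locally (imagename here is a stand-in for whatever tag your build produced; docker inspect reports the OS and architecture the image was built for):

docker image inspect imagename --format '{{.Os}}/{{.Architecture}}'

On an M1 Mac this prints linux/arm64, while a new Lambda defaults to x86_64.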

If we look at this Lambda in the AWS Console, on the first page under Image you’ll see the Lambda runtime architecture, which in this case was the default, x86_64.

Updating the serverless.yml to include ‘architecture: arm64’ and redeploying, the architecture now shows as arm64.
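For reference, the change sits at the provider level, which applies it to every function in the service (Serverless also accepts architecture on individual functions):

provider:
  name: aws
  architecture: arm64
  ecr:
    images:
      lambda-container-example:
        path: ./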

Invoking with ‘serverless invoke --function hello’ now runs successfully!
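serverless invoke prints the Lambda’s response serialized as JSON, so for the hello world handler above the output should just be the returned string:

serverless invoke --function hello

"hello!"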

Running aitextgen model training in a Docker container

I’m setting up a way to run text generation model training jobs on demand with aitextgen, and the first approach I’m looking at is running the training in a Docker container. Later I may move this to an AWS service like ECS, but this is my first step.

I’ve built a Docker image with the following Dockerfile:

FROM amazonlinux
# Install Python 3 and aitextgen
RUN yum update -y && yum install -y python3
RUN pip3 install aitextgen
# Copy in the training source text and the train/generate scripts
COPY source-file-for-fine-tuning.txt .
COPY generate.py .
COPY train.py .

.. and then built my image with:

docker build -t aitextgen .

I then run a container, passing in the command I want to run, in this case ‘python3 train.py’:

docker run --volume /data/trained_model:/trained_model:rw -d aitextgen sh -c "cd / && python3 train.py && mv aitextgen.tokenizer.json /trained_model"

I’m also attaching a bind mount where the model output is written during the run, and -d to run the container in the background. The last step in the run command moves the tokenizer file to the mounted EBS volume so it can be reused by the generation step.
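For reference, train.py is roughly along these lines (a minimal sketch following aitextgen’s documented train-from-scratch flow; the block size, batch size, and step count are assumptions). train_tokenizer is what produces the aitextgen.tokenizer.json file moved above, and ai.train writes its checkpoints to a trained_model folder by default, which is the folder bind-mounted in the run command:

from aitextgen import aitextgen
from aitextgen.TokenDataset import TokenDataset
from aitextgen.tokenizers import train_tokenizer
from aitextgen.utils import GPT2ConfigCPU

# Train a tokenizer on the source text; this writes aitextgen.tokenizer.json
# to the working directory
train_tokenizer("source-file-for-fine-tuning.txt")

# A small GPT-2 config sized for CPU-only training
config = GPT2ConfigCPU()
ai = aitextgen(tokenizer_file="aitextgen.tokenizer.json", config=config)

# Encode the source text and train; checkpoints are saved to ./trained_model
data = TokenDataset("source-file-for-fine-tuning.txt",
                    tokenizer_file="aitextgen.tokenizer.json", block_size=64)
ai.train(data, batch_size=8, num_steps=5000)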

To generate text from the model, run:

docker run --volume /data/trained_model:/trained_model:rw -d aitextgen sh -c "cd / && python3 generate.py"
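Since this also runs detached with -d, the generated text ends up in ‘docker logs <container-id>’. generate.py would be something like this (again a sketch; it loads the model and tokenizer from the mounted trained_model folder, where the training run and the mv above left them):

from aitextgen import aitextgen

# Load the trained model and tokenizer from the bind-mounted folder
ai = aitextgen(model_folder="/trained_model",
               tokenizer_file="/trained_model/aitextgen.tokenizer.json")

# Print a few generated samples to stdout
ai.generate(n=5, max_length=100)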