When running a 'docker build . -t imagename' to build a new image, each of the steps in your Dockerfile outputs a one-line status, but if you need to see the actual output of each step, you need to pass the --progress=plain option. If your build is stopping at a particular step and you need to see the output of previous steps that are now cached, you can also add the --no-cache option:
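For example, to see the full output of every step and force each step to run again rather than using the cache (the image name here is just a placeholder):

docker build . -t imagename --progress=plain --no-cache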
I'm configuring an AWS Lambda with a custom runtime using the Serverless framework, and I've run into an error.
By default, the architecture config on Lambda shows x86_64:
If I try to create it with arm64 instead, it gives:
An error occurred: HelloLambdaFunction - Resource handler returned message: "Runtime provided does not support the following architectures [arm64]. Please select different architectures from [x86_64] or select a different runtime."
This is slightly obscure, and it's a result of the 'provided' runtime coming in two flavors, Amazon Linux 1 (provided) and Amazon Linux 2 (provided.al2), and only provided.al2 supports arm64.
If you change your serverless.yaml to include the provided.al2 runtime, then it deploys as expected.
This just means replacing this:
provider:
  name: aws
  runtime: provided
with:
provider:
  name: aws
  runtime: provided.al2
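If you also want the function deployed as arm64 (the scenario that triggered the error above), the architecture is set alongside the runtime. A minimal sketch, assuming a recent version of the Serverless framework that supports the architecture property:

provider:
  name: aws
  runtime: provided.al2
  architecture: arm64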
Note how the runtime now shows Amazon Linux 2 and arm64:
The odd thing about personal bot projects is that after you've deployed them and they're up and running, there's not much needed to keep them running, if anything, unless APIs change and need to be updated. Some of the first bots I deployed as AWS Lambdas have been running several times a day for 5 years. In that time, AWS Lambda supported runtimes have come and gone out of support, so the Node 6 runtime I was originally using has now definitely passed its official support.
This is mostly a consolidated todo list of the bots I need to look at as part of my migration from Twitter to Mastodon, but if you search you can find my previous posts that describe how these were built.
Mostly migrated to @kevinhookebot@botsin.space on Mastodon, but currently running on Twitter and Mastodon at the same time. It sends the same generated text to both at the same time, but replying to the bot on either Twitter or Mastodon will interact with just the bot on that account.
My first Twitterbot project, and it has now tweeted over 11k times since it went live in 2018. It comprises multiple Lambdas to provide different features:
a trained RNN text generation model generates random text, tweeted every ~3 hours. One scheduled AWS Lambda generates the text and inserts it into a DynamoDB table. Another scheduled Lambda reads the next tweet from the table and posts it using Twitter's APIs (see the sketch after this list)
A scheduled Lambda periodically calls a Twitter API to check for replies and tweets at this account. It replies with one of a number of canned replies
If you tweet at this bot with 'go north|south|east|west' it replies with a generated response typical of a text-based adventure game. The replies are generated from a template with randomly inserted words (it isn't actually a game)
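As a rough illustration of the generate-then-send pattern used by the first two Lambdas above, here's a minimal Node.js sketch of the sending half. The table name, attribute names, and the postTweet helper are all hypothetical placeholders, not the actual implementation:

// Scheduled Lambda: read the next unsent item from DynamoDB and post it.
// 'generated-tweets', 'sentAt', and postTweet() are hypothetical placeholders.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function postTweet(text) {
  // Placeholder: call the Twitter (or Mastodon) API via a client library here
}

exports.handler = async () => {
  // Find items that haven't been sent yet (a scan is fine at this small scale)
  const { Items } = await docClient.scan({
    TableName: 'generated-tweets',
    FilterExpression: 'attribute_not_exists(sentAt)'
  }).promise();

  if (!Items || Items.length === 0) return;

  const next = Items[0];
  await postTweet(next.text);

  // Mark the item as sent so it isn't picked up again on the next run
  await docClient.update({
    TableName: 'generated-tweets',
    Key: { id: next.id },
    UpdateExpression: 'SET sentAt = :now',
    ExpressionAttributeValues: { ':now': new Date().toISOString() }
  }).promise();
};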
A Blackjack card game bot, not migrated to Mastodon yet. @ the bot with 'deal' to start a game. It tracks game state per player in DynamoDB, and uses Twitter APIs to check for replies to the game bot every 5 minutes.