Debugging Docker container builds

When running a ‘docker build . -t imagename’ to build a new image, each of the steps in your Dockerfile outputs a one-line status, but if you need to see the actual output of each step, you need pass the –progress-plain option. If your build is stopping at a particular step and you need to see the output of previous steps that are now cached, you can use the –no-cache option:

docker build --progress=plain --no-cache . -t imagename
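As a sketch of what this looks like in practice (this Dockerfile is a made-up example, not from any real project), imagine a build where one step fails:

```dockerfile
# Hypothetical Dockerfile: if the pip install step fails, the default build
# output collapses it to a one-line status. Rebuilding with
# --progress=plain --no-cache shows pip's full error output for the step.
FROM python:3.11-slim
COPY requirements.txt .
RUN pip install -r requirements.txt
```

Rebuilding with the command above then shows the complete output of the failing RUN step, rather than the collapsed one-line status.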

AWS Lambda Custom Runtime for executing arm64 binaries

I’m configuring an AWS Lambda with a custom runtime using the Serverless framework, and I’ve run into this error:

By default, the architecture config on Lambda shows x86_64:

If I try to create it with arm64 instead, it gives:

An error occurred: HelloLambdaFunction - Resource handler returned message: "Runtime provided does not support the following architectures [arm64]. Please select different architectures from [x86_64] or select a different runtime

This is slightly obscure, and is a result of the ‘provided’ runtime coming in 2 flavors, Amazon Linux 1 (provided) and Amazon Linux 2 (provided.al2), and only provided.al2 supports arm64.

If you change your serverless.yaml to include the provided.al2 runtime, then it deploys as expected.

This just means replacing this:

provider:
  name: aws
  runtime: provided

with:

provider:
  name: aws
  runtime: provided.al2
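For completeness, here’s a sketch of the provider section with the architecture set explicitly (the Serverless framework accepts an architecture property at the provider level; treat this as an illustrative fragment rather than a complete serverless.yaml):

```yaml
provider:
  name: aws
  # arm64 is only supported by provided.al2, not the original provided runtime
  runtime: provided.al2
  architecture: arm64
```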

Note how the runtime now shows Amazon Linux 2 and arm64.

Planning Twitter bot to Mastodon migration / updates – what do I have running right now?

The odd thing about personal bot projects is that after you’ve deployed them and they’re up and running, there’s not much needed to keep them running, if anything, unless APIs change and need to be updated. Some of my first bots, deployed as AWS Lambdas, have been running several times a day for 5 years. In that time AWS Lambda’s supported runtimes have come and gone out of support, so the Node 6 runtime I was originally using has now definitely passed its official support.

This post is mostly a todo list to consolidate the bots I need to look at as part of my migration from Twitter to Mastodon, but if you search you can find my previous posts that describe how these were built.

@kevinhookebot

Mostly migrated to @kevinhookebot@botsin.space on Mastodon, but currently running on Twitter and Mastodon in parallel. It sends the same generated text to both, but replying to the bot on either Twitter or Mastodon will interact with just the bot on that account.

My first Twitter bot project, and it has now tweeted over 11k times since it went live in 2018. It comprises multiple Lambdas to provide different features:

  • A trained RNN text generation model generates random text to tweet every ~3 hours. One scheduled AWS Lambda generates the text and inserts it into a DynamoDB table. Another scheduled Lambda reads the next tweet from the table and tweets it using Twitter’s APIs.
  • A scheduled Lambda runs every few minutes, calling a Twitter API to check for replies and tweets at this account. It replies with one of a number of canned replies.
  • If you tweet at this bot with ‘go north|south|east|west’ it replies with a generated response typical of a text-based adventure game. The replies are generated from a template with randomly inserted words (it isn’t actually a game).
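To illustrate the template-plus-random-words approach for the ‘go north|south|east|west’ replies, here’s a hypothetical sketch (none of these names, templates, or word lists are from the bot’s actual code):

```python
import random

# Hypothetical sketch: pick a reply template and fill it with randomly
# chosen words, like a text adventure that isn't really a game.
TEMPLATES = [
    "You walk {direction} and enter a {adjective} {place}.",
    "Heading {direction}, you find a {adjective} {place}.",
]
ADJECTIVES = ["dark", "dusty", "echoing"]
PLACES = ["cavern", "hallway", "forest clearing"]

def generate_reply(direction: str) -> str:
    """Build a reply for a 'go <direction>' tweet from a random template."""
    template = random.choice(TEMPLATES)
    return template.format(
        direction=direction,
        adjective=random.choice(ADJECTIVES),
        place=random.choice(PLACES),
    )

print(generate_reply("north"))
```

In the real bot this kind of function would run inside the scheduled Lambda and the result would be posted via the Twitter API.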

@productnamebot

Tweets randomly generated product names using lists of key words. Not yet migrated to Mastodon. Has tweeted 7k times since 2018.
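Generating names from lists of key words can be sketched like this (a hypothetical illustration; the word lists here are made up, not the bot’s actual lists):

```python
import random

# Hypothetical sketch: build a product name by picking one word from
# each key word list and joining them together.
PREFIXES = ["Hyper", "Cloud", "Nano"]
NOUNS = ["Widget", "Hub", "Matrix"]
SUFFIXES = ["Pro", "X", "2000"]

def generate_product_name() -> str:
    """Pick one word from each list and concatenate them into a name."""
    return "".join(random.choice(words) for words in (PREFIXES, NOUNS, SUFFIXES))

print(generate_product_name())
```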

@blackjackcard

A BlackJack card game bot. Not migrated to Mastodon yet. @ the bot with ‘deal’ to start a game. Tracks game state per player in DynamoDB. Uses Twitter APIs to check for replies to the game bot every 5 minutes.
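The per-player game state could look something like this (a hypothetical sketch shaped like an item you might persist in DynamoDB; the field names are made up, not the bot’s actual schema):

```python
import random

# Hypothetical sketch of handling a 'deal' command: shuffle a deck,
# deal two cards, and return a state dict that could be persisted
# per player between replies.
RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["hearts", "diamonds", "clubs", "spades"]

def deal(player_id: str) -> dict:
    """Start a new game for a player and return the game state."""
    deck = [f"{rank} of {suit}" for rank in RANKS for suit in SUITS]
    random.shuffle(deck)
    hand = [deck.pop(), deck.pop()]
    # This dict is the state that would be stored (e.g. in DynamoDB)
    # and reloaded when the player replies again.
    return {"player_id": player_id, "hand": hand, "deck": deck, "status": "playing"}

state = deal("@someplayer")
print(state["hand"])
```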

Why Google and others are ‘freaking out’ about ChatGPT right now

The recent articles about Google’s concern around ChatGPT (‘Google is freaking out about ChatGPT’), and the reason why others like Microsoft have just announced Bing integration with ChatGPT, are not about what you might think at first. There’s a deeper concern about how this tech is going to change everything from this point onwards.

Yes, the tech is impressive, even if it doesn’t always generate factually correct responses. The weird thing in online software development communities, especially groups focused on supporting new developers, is that the examples being shared are of new developers using the tech to answer ‘how do I…?’ or ‘show me an example of…?’ type questions. The generated responses can be mostly correct, with text generated from the source material the model was trained on, but the shocking realization from these examples shared online is that you could have found exactly the same content if you had Googled for it.

This is why Google is worried. It’s not that they don’t have a comparable product readily available right now. They’re worried that traditional search traffic, and therefore ad revenue, suddenly has an alternative, one that is gaining a lot of interest and hype, and maybe for the first time in years there is a threat that search traffic that would previously have gone to Google is now going to go somewhere else.

Microsoft’s announcement yesterday that they are adding ChatGPT integration into their Bing search engine hits the nail on the head. They didn’t announce a page where you can go and ask weird questions, they’re building it in to their search engine.

There’s something fundamentally game changing to the search (and ad revenue) industry about this. Instead of searching for key words and phrases like we’ve all got used to for years, you can now ask a vague question on a topic and get what you’re looking for. Instead of searching for links to content on other websites that have been indexed, you can search knowledge and ask questions in a conversational style to find the information you’re looking for. That is a game changer.