Planning to migrate my AWS Lambda Twitter bots to Mastodon

Like everyone else right now, I’m mulling options to migrate away from Twitter, likely to Mastodon (follow me at @kevinhooke@mastodon.social!). Moving my personal usage is relatively simple, other than rebuilding the list of people and tags that I like to follow. I also have a number of Twitter bots running on AWS Lambdas that I’ve built over the years, and I should move those at some point too.

The easy part is that the code that’s running as an AWS Lambda doesn’t need to physically move anywhere; it can continue to run where it is. The part that needs to change is the integration: the calls to the Twitter APIs need to be replaced with calls to the Mastodon APIs so the bots post there instead.
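The Mastodon side of that swap looks simple enough: posting a status is a single authenticated call to the instance’s REST API. As a rough illustration (not the final code), here’s a minimal TypeScript sketch of the replacement call, assuming a Node 18+ Lambda runtime for the global fetch API; the instance URL and environment variable names are placeholders:

```typescript
// Minimal sketch: post a status to a Mastodon instance from a Lambda.
// Assumes Node 18+ (global fetch) and a bot access token in an env var;
// the instance URL and env var names below are placeholders.
const INSTANCE = process.env.MASTODON_INSTANCE ?? "https://botsin.space";
const TOKEN = process.env.MASTODON_ACCESS_TOKEN ?? "";

export async function postStatus(text: string): Promise<void> {
  const response = await fetch(`${INSTANCE}/api/v1/statuses`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ status: text }),
  });
  if (!response.ok) {
    throw new Error(`Mastodon API returned ${response.status}`);
  }
}
```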

I’m still in the early stages of looking at options. I’ve discovered there’s a Mastodon instance, BotsInSpace (botsin.space), that’s specifically for running bots, so that addresses the first question of where they need to run. I’ve also been reading through a few articles on developing bots for Mastodon, such as this one. So far it looks like it shouldn’t be too big a deal to move them across.

Restructuring / refactoring my Mastodon ZorkBot

Now that my ZorkBot is working over at @zorkbot@botsin.space, I’ve restructured the 3 Lambdas so it’s easier to follow what calls what.

In its current/initial working state, I built it with 3 AWS Lambdas:

mastodon-zorkbot

This Lambda runs every 5 minutes, checking for players messaging the bot (a rough sketch of the flow is included below):

  • checks the last replied-to messages, using a DynamoDB table, so it only replies to each message once
  • if a new reply is needed, sends the game move command from the player to the next Lambda, mastodonbot-aws-sdk2
  • replies to the player using the Mastodon APIs

The Lambda is deployed as: mastodonbot-zorkbot-dev-mastodon-reply
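As a rough illustration of that flow (a sketch under assumptions, not the bot’s actual code): the DynamoDB table name, the payload field names passed between the Lambdas, and the HTML stripping below are all placeholders based on the description above.

```typescript
// Sketch of the mastodon-zorkbot flow; table, payload and field names are assumptions.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const lambda = new LambdaClient({});
const INSTANCE = "https://botsin.space";
const TOKEN = process.env.MASTODON_ACCESS_TOKEN ?? "";
const TABLE = "zorkbot-replied-statuses";  // assumed table name, keyed on statusId
const PARSER_FN = "mastodonbot-aws-sdk2";  // the next Lambda in the chain

export async function handler(): Promise<void> {
  // 1. Fetch recent notifications and keep only mentions of the bot
  const resp = await fetch(`${INSTANCE}/api/v1/notifications`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  const notifications: any[] = await resp.json();
  const mentions = notifications.filter((n) => n.type === "mention");

  for (const mention of mentions) {
    const statusId = mention.status.id;

    // 2. Skip anything already replied to (tracked in the DynamoDB table)
    const seen = await ddb.send(new GetCommand({ TableName: TABLE, Key: { statusId } }));
    if (seen.Item) continue;

    // 3. Pass the player's move to the parser Lambda, which calls the frotz runtime next
    //    (strip the HTML from the toot; the real bot also strips its own @ handle)
    const move = mention.status.content.replace(/<[^>]+>/g, "").trim();
    const result = await lambda.send(new InvokeCommand({
      FunctionName: PARSER_FN,
      Payload: Buffer.from(JSON.stringify({ command: move })),
    }));
    const gameText = JSON.parse(Buffer.from(result.Payload!).toString()).response;

    // 4. Reply to the player, then record the status id so it's only answered once
    await fetch(`${INSTANCE}/api/v1/statuses`, {
      method: "POST",
      headers: { Authorization: `Bearer ${TOKEN}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        status: `@${mention.account.acct} ${gameText}`,
        in_reply_to_id: statusId,
      }),
    });
    await ddb.send(new PutCommand({ TableName: TABLE, Item: { statusId } }));
  }
}
```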

mastodonbot-aws-sdk2

This Lambda is called by mastodon-zorkbot and parses the text responses from the frotz Custom Runtime Lambda, which it calls next.
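As a sketch only (the payload field names and the clean-up rules are assumptions; the real parser deals with whatever frotz actually prints), the parsing step could look something like this:

```typescript
// Sketch of the response parser; function name and payload fields are assumptions.
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});
const FROTZ_FN = "custom-lambda-zork"; // the Custom Runtime Lambda wrapping frotz

export async function handler(event: { command: string }): Promise<{ response: string }> {
  // Forward the player's move to the frotz Custom Runtime Lambda
  const result = await lambda.send(new InvokeCommand({
    FunctionName: FROTZ_FN,
    Payload: Buffer.from(JSON.stringify({ command: event.command })),
  }));
  const raw: string = JSON.parse(Buffer.from(result.Payload!).toString()).output;

  // Tidy up frotz's raw output: drop prompt characters and blank lines, and
  // collapse the text into a single string that fits in a reply toot
  const cleaned = raw
    .split("\n")
    .map((line) => line.replace(/^\s*>\s*/, "").trim())
    .filter((line) => line.length > 0)
    .join(" ");

  return { response: cleaned };
}
```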

custom-lambda-zork

This is a Docker image for a Custom Lambda Runtime that invokes the frotz binary, the z-machine game runtime that runs the Zork game (it could be updated to run any game supported by frotz). The Custom Runtime has a shell script that pipes echo commands into frotz, captures the response, and returns it to mastodonbot-aws-sdk2.
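The actual piping is done by that shell script, but the idea is easy to show in a Node/TypeScript sketch using child_process instead; the binary and game file paths are placeholders, and dfrotz (the ‘dumb terminal’ build of frotz) is assumed since it’s the variant that plays nicely with pipes:

```typescript
// Illustration of the piping idea only: the real Custom Runtime does this
// with a shell script. Paths to the dfrotz binary and the game file are
// placeholders, not the actual layout of the Docker image.
import { execFileSync } from "node:child_process";

export function runMoves(moves: string[]): string {
  // dfrotz reads commands from stdin, one per line, and prints the game text;
  // ending with "quit" / "y" makes it exit cleanly so the call returns
  const input = [...moves, "quit", "y"].join("\n") + "\n";
  return execFileSync("/opt/frotz/dfrotz", ["/opt/games/zork1.z5"], {
    input,
    encoding: "utf8",
  });
}
```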

The Refactoring

As a first step I’m just going to rename the folders for each of the Lambdas and the serverless.yml service names, so it’s easier to track what calls what in sequence:

mastodon-zorkbot -> zorkbot-1-mastodonreplies

mastodonbot-aws-sdk2 -> zorkbot-2-responseparser

custom-lambda-zork -> zorkbot-3-frotzcustomruntime

Will update later with progress.

Building bots on Twitter with AWS Lambdas (and other stuff)

I’ve built a few different bots on Twitter and written several articles describing how I built them. Some of these were built a few months back, and once they’re up and running it’s easy to forget about them (thanks to the free tier on AWS Lambda, you can run scheduled Tweets well within the free tier limits). This is a summary of the bots I’ve developed so far.

Looking at where I got started, my first bot was an integration between Amateur Radio 2m packet and Twitter, retweeting packets received locally to Twitter. This was my first experience working with the Twitter REST APIs and OAuth authentication, so a lot of what I learned here I reapplied to the following bots too:

For my next project, I was inspired by articles by researcher Janelle Shane, who has been training ML models to produce some hilarious results, such as weird recipes, college course names and many others. I was curious what content an ML model would generate if I extracted all of my past 4000+ Tweets from Twitter and trained a model on that content. I had many questions, such as: would the content be similar in style, and is 4000+ Tweets enough text to train a model? You can follow my progress in these posts:

This then led to repeating the experiment with over 10 years of my blog articles and posts collected here, which you can follow in these posts:

Next, what would it take to train my model in the cloud using AWS SageMaker and run it using AWS Lambdas?

You can follow this bot on Twitter here: @kevinhookebot

I had fun developing @kevinhookebot – it evolved over time to support a few features, not just to retweet content from the trained ML model. Additional features added:

  • an additional Lambda that consumes the Twitter API ‘mentions’ timeline and replies with one of a number of canned responses (not generated, they’re just hard-coded phrases). If you reply to any of its tweets or Tweet @ the bot, it will reply when it next sees the new tweet in the mentions timeline, since the Lambda runs every 5 minutes (a rough sketch of this polling approach follows the list)
  • another Lambda that responds to @ mentions to the bot as if it were a text-based adventure game. Tweet ‘@kevinhookebot go north’ (or east/west/south) and the bot will respond with some generated text in the style of an adventure game. There’s no actual game to play and it doesn’t track your state, but each response is generated using @GalaxyKate’s Tracery library, with a simple grammar that defines the structure of each reply.
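For a rough idea of the shape of that mentions-polling Lambda (a sketch under assumptions, not the bot’s actual code), something like this works with the twit npm package; the canned phrases are placeholders, and the ‘last seen id’ would really be persisted between runs (for example in DynamoDB) rather than held in a variable:

```typescript
// Sketch of the mentions-reply Lambda, assuming the twit npm package.
// The canned phrases are placeholders, and lastSeenId would be persisted
// between runs in the real bot rather than kept in a module variable.
import Twit from "twit";

const T = new Twit({
  consumer_key: process.env.TWITTER_CONSUMER_KEY!,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET!,
  access_token: process.env.TWITTER_ACCESS_TOKEN!,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET!,
});

const CANNED_REPLIES = [
  "Thanks for the mention!",
  "Beep boop, message received.",
  "I'm just a bot, but I appreciate it.",
];

let lastSeenId: string | undefined; // persisted externally in the real bot

export async function handler(): Promise<void> {
  // Fetch anything new in the mentions timeline since the last run
  const params: { count: number; since_id?: string } = { count: 20 };
  if (lastSeenId) params.since_id = lastSeenId;
  const { data } = await T.get("statuses/mentions_timeline", params);
  const mentions = data as any[];

  for (const mention of mentions) {
    // Reply to each new mention with one of the canned phrases
    const reply = CANNED_REPLIES[Math.floor(Math.random() * CANNED_REPLIES.length)];
    await T.post("statuses/update", {
      status: `@${mention.user.screen_name} ${reply}`,
      in_reply_to_status_id: mention.id_str,
    });
  }

  // The timeline is returned newest first, so remember the newest id
  if (mentions.length > 0) lastSeenId = mentions[0].id_str;
}
```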

After having fun with the adventure text reply generator, I also used the Tracery library for another AWS Lambda bot that generates product/project names and tweets one every 6 hours. I think it’s rather amusing; you can check it out here: @ProductNameBot
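If you haven’t seen Tracery before, the core of it is just a set of expansion rules plus a flatten call. Here’s a tiny sketch using the tracery-grammar npm package; the rules below are made-up placeholders rather than @ProductNameBot’s actual grammar:

```typescript
// A tiny Tracery grammar in the spirit of @ProductNameBot; these rules are
// made-up placeholders, not the bot's actual grammar.
// tracery-grammar ships without TypeScript types, so require it loosely.
const tracery = require("tracery-grammar");

const grammar = tracery.createGrammar({
  origin: ["#adjective# #noun# #suffix#"],
  adjective: ["Quantum", "Hyper", "Cloud", "Turbo", "Synergy"],
  noun: ["Widget", "Platform", "Fabric", "Engine", "Portal"],
  suffix: ["Pro", "360", "X", "as-a-Service", "Enterprise Edition"],
});
grammar.addModifiers(tracery.baseEngModifiers);

// Each call expands the #origin# rule into a new name, e.g. "Turbo Fabric 360"
console.log(grammar.flatten("#origin#"));
```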

For my most recent creation I upped the ante slightly and wondered what it would take to develop a Twitter bot that played a card game. This introduced some interesting problems that I hadn’t thought about before, like how to track the game state for each player (one possible approach is sketched after this paragraph). I captured the development in these posts here:
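One way to handle that per-player state problem (illustrative only, not necessarily the schema the bot ended up with) is a DynamoDB item keyed by the player’s Twitter handle:

```typescript
// Illustrative only: per-player game state keyed by Twitter handle.
// Table and attribute names are assumptions, not the bot's actual schema.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "cardgame-player-state"; // assumed table name

interface PlayerState {
  playerHandle: string; // partition key
  hand: string[];       // cards currently held by this player
  lastStatusId: string; // last tweet from this player that was processed
}

export async function loadState(playerHandle: string): Promise<PlayerState | undefined> {
  const result = await ddb.send(new GetCommand({ TableName: TABLE, Key: { playerHandle } }));
  return result.Item as PlayerState | undefined;
}

export async function saveState(state: PlayerState): Promise<void> {
  await ddb.send(new PutCommand({ TableName: TABLE, Item: state }));
}
```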

I have some other ideas for something I might put together soon. Stay posted for more details 🙂