Building bots on Twitter with AWS Lambdas (and other stuff)

I’ve built a few different bots on Twitter and written several articles describing how I built them. Some of these were built a few months back – once they’re up and running it’s easy to forget about them (thanks to the AWS Lambda free tier, a bot posting scheduled Tweets runs well within the free tier limits). This is a summary of the bots I’ve developed so far.

Looking at where I got started, my first bot was an integration with Amateur Radio 2m packet, retweeting packets received locally to Twitter. This was my first experience working with the Twitter REST APIs and OAuth authentication, so a lot of what I learned here I reapplied to the following bots too:

For my next project, I was inspired by articles by researcher Janelle Shane, who has been training ML models to produce some hilarious results, such as weird recipes, college course names and many others. I was curious what content an ML model would generate if I extracted all of my past 4000+ Tweets from Twitter and trained a model on the content. I had many questions, such as: would the content be similar in style, and are 4000 Tweets enough text to train a model? You can follow my progress in these posts:

This then led to repeating the experiment with over 10 years of my blog articles and posts collected here, which you can follow in these posts:

Next, what would it take to train my model in the cloud using AWS SageMaker, and run it using AWS Lambdas?

You can follow this bot on Twitter here: @kevinhookebot

I had fun developing @kevinhookebot – it evolved over time to support a few features beyond just tweeting content from the trained ML model. Additional features added:

  • an additional Lambda that consumes the Twitter API ‘mentions’ timeline and replies with one of a number of canned responses (not generated, they’re just hard-coded phrases). If you reply to any of its tweets or Tweet @ the bot, it will reply to you every 5 minutes when it sees a new tweet in the mentions timeline
  • another Lambda that responds to @ mentions to the bot as if it were a text-based adventure game. Tweet ‘@kevinhookebot go north’ (or east/west/south) and the bot will respond with some generated text in the style of an adventure game. There’s no actual game to play and it doesn’t track your state, but each response is generated using @GalaxyKate’s Tracery library, using a simple grammar that defines the structure of each reply (see the sketch after this list)
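To give an idea of how Tracery-style generation works, here’s a minimal sketch using pytracery, the Python port of Tracery – the grammar rules below are made-up examples rather than the bot’s actual grammar:

import tracery
from tracery.modifiers import base_english

# A tiny example grammar - each '#symbol#' expands to a random option from its rule
rules = {
    "origin": "You #move# #place#. #event#",
    "move": ["walk into", "stumble into", "creep towards"],
    "place": ["a dark cave", "an abandoned shack", "a misty forest"],
    "event": ["A #creature# watches you silently.", "You hear a #creature# in the distance."],
    "creature": ["dragon", "wolf", "goblin"],
}

grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)

# Each call produces a new randomly assembled reply
print(grammar.flatten("#origin#"))

Because the grammar only defines the structure of each reply, every response reads like it belongs to the same adventure game without needing any stored game state.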

After having fun with the adventure text reply generator, I also used the Tracery library for another AWS Lambda bot that generates product/project names and tweets every 6 hours. I think it’s rather amusing – you can check it out here: @ProductNameBot

For my most recent creation I upped the ante slightly and wondered what it would take to develop a Twitter bot that played a card game. This introduced some interesting problems that I hadn’t thought about before, like how to track the game state for each player (one possible approach is sketched below). I captured the development in these posts here:
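The posts cover what I actually built, but as a rough illustration of the state-tracking problem, here’s a minimal sketch of keeping per-player game state in DynamoDB keyed by Twitter handle – the table name and state fields are just placeholders for the example:

import boto3

# Hypothetical table with partition key 'player_handle' (the player's Twitter @name)
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("card-game-state")

def save_state(player_handle, state):
    # Persist the player's current hand/score between @mention replies
    table.put_item(Item={"player_handle": player_handle, **state})

def load_state(player_handle):
    # Returns None if this player hasn't started a game yet
    response = table.get_item(Key={"player_handle": player_handle})
    return response.get("Item")

# Example usage from a Lambda handler processing a new @mention
save_state("someplayer", {"hand": ["7H", "QS"], "score": 0})
print(load_state("someplayer"))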

I have some other ideas for something I might put together soon. Stay posted for more details 🙂

AWS VPCs and CIDR blocks (possibly more than you’ve ever wanted to know about IP addresses, networks, subnet masks and all that stuff)

If you’re interested in firing up some AWS EC2 instances, at some point you’re going to need to know about VPCs (Virtual Private Clouds) and CIDR blocks. If you’re studying for the AWS Solutions Architect certification this is also an important topic covered in the exam. There’s a whole section in the AWS docs here that covers the topic of CIDR blocks.

Unless you’re an experienced network engineer or someone who works with configuring network topologies, this might be the first time you’ve come across this concept (I had seen the CIDR notation before but didn’t know what it meant). At first it might be easy to memorize a few of the common examples and remember which represents a smaller network and which is larger, but to understand why a /24 network has fewer available addresses than a /8 network you’ll need to dig a little deeper.

First, let’s look at typical IPv4 IP addresses, networks and subnets:

IPv4 addresses are made up of 4 numbers separated by ‘.’s. Each number is 8 bits, and they are referred to as ‘octets’.

A typical address on a home network like 192.168.1.16 has a subnet mask of 255.255.255.0. This means the first 3 octets are used to represent the network, in this case 192.168.1.0. Computers with this same IP prefix are therefore part of the same network, so

  • 192.168.1.1
  • 192.168.1.2
  • up to
  • 192.168.1.254

… are all on the same network, and assuming your router and each individual computer are set up correctly, each of the computers with these addresses on this same network can also see each other (without any additional routing between networks).

In this range there are also some reserved IPs for special purposes:

  • 192.168.1.0 is referred to as the network itself
  • 192.168.1.255 is a broadcast address for this network
  • this leaves 192.168.1.1 through 192.168.1.254 as usable addresses in the network
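As a quick way to check these values, Python’s ipaddress module (in the standard library for Python 3) reports the network address, broadcast address and usable host range for a given network:

import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")

print(network.network_address)    # 192.168.1.0 - the network itself
print(network.broadcast_address)  # 192.168.1.255 - the broadcast address

hosts = list(network.hosts())     # usable addresses: 192.168.1.1 .. 192.168.1.254
print(hosts[0], hosts[-1], len(hosts))

# An address with the same prefix is part of this network
print(ipaddress.ip_address("192.168.1.16") in network)  # True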

Originally networks were divided and categorized by 8 bit boundaries and were referred to as ‘class’ based networks:

Class C network:

  • subnet mask 255.255.255.0
  • first 3 octets are the network
  • the smallest network Class, with 256 available addresses
  • only the 4th octet is available for your host addresses: 0-255 gives 256 addresses (or 254 usable, 1-254, ignoring 0 and 255)
  • 24 bits used for network, 8 for hosts

Class B network:

  • subnet mask 255.255.0.0
  • first 2 octets are the network
  • 3rd and 4th octets available for host addresses
  • 16 bits used for network, 16 bits for hosts

Class A network:

  • subnet mask 255.0.0.0
  • first octet is the network
  • the largest IPv4 network Class
  • 2nd, 3rd and 4th octets available for hosts
  • 8 bits used for network, 24 bits for hosts

Ok, so this summarizes Class based networks which are divided by 8 bit boundaries, of which we only have 3 options, A (largest), B and C (smallest). Now let’s look at Classless networks.

Instead of restricting to 8 bit boundaries, Classless networks can use any number of the bits to represent the network and the remaining bits for the hosts. First, let’s look at the Classless Inter-Domain Routing (CIDR) notation for the same Class based networks as the first examples:

  • /24 CIDR block is the same as Class C, with subnet mask 255.255.255.0
  • /16 CIDR block is the same as Class B, with subnet mask 255.255.0.0
  • /8 CIDR block is the same as Class A, with subnet mask 255.0.0.0

You might have noticed that the number in the /xx CIDR block notation is the number of bits used for the network, which in turn implies the number of bits remaining for host addresses (out of the 4 octets, or 32 bits, in total). This approach is not restricted to 8 bit boundaries though: any number of bits can be used for the combination of network and hosts, so /24, /23, /22 and any value from /32 to /1 are all valid (although /32, with all bits used for the network, and /31, with only 1 bit remaining for host addresses, are of little general use – they’re reserved for special purposes such as a single host route or point-to-point network links).
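To make the relationship concrete, here’s a quick calculation of the host bits and total address count for a few example prefix lengths (a sketch using Python’s ipaddress module):

import ipaddress

for prefix in (8, 16, 22, 23, 24):
    network = ipaddress.ip_network("10.0.0.0/%d" % prefix)
    host_bits = 32 - prefix
    # num_addresses is 2 ** host_bits, and includes the network and broadcast addresses
    print("/%d: %d host bits, %d total addresses" % (prefix, host_bits, network.num_addresses))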

Ok, so now let’s apply this to look at AWS VPCs and CIDR blocks.

For a /24 block, we already looked at the x.x.x.0 address referring to the network itself and x.x.x.255 as the broadcast address. AWS VPC subnets reserve a further 3 IP addresses (described here) for AWS usage, x.x.x.1, x.x.x.2, and x.x.x.3, so for each subnet there are 5 IP addresses unavailable for your own hosts.

Now we’ve looked at how the address blocks are comprised, it’s easy to calculate how many addresses are available in a VPC for any CIDR block. Taking /24 as an example:

  • 32 – 24 = 8 bits for the host addresses
  • 2^8 = 256
  • 256 – 5 = 251

Knowing how IP addresses are structured, how CIDR blocks define the range of possible IPs for your hosts, the purpose of the .0 and .255 addresses, and the additional 3 AWS-reserved IPs, you can now calculate how many IP addresses are available in your VPC for any CIDR block.
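Putting that together, here’s the same calculation as a small Python function, applying the 5 reserved addresses per subnet:

def usable_aws_subnet_addresses(prefix_length):
    # Total addresses in the block, minus the network address, the broadcast address
    # and the 3 addresses AWS reserves in each subnet
    host_bits = 32 - prefix_length
    return 2 ** host_bits - 5

print(usable_aws_subnet_addresses(24))  # 251
print(usable_aws_subnet_addresses(16))  # 65531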

Building PyTorch from source for a smaller (<50MB) AWS Lambda deployment package

I’ve been trying to deploy a Python-based AWS Lambda that’s using PyTorch. The problem I’ve run into is that the size of the deployment package with PyTorch and its platform-specific dependencies is far beyond the maximum size of a zip that you can deploy as an AWS Lambda. Per the AWS Lambda Limits page, the maximum deployable zip is 50MB (and unzipped it needs to be less than 250MB).

I found this article which suggested building PyTorch from source on an Amazon AMI EC2 instance, using build options to reduce the build size. I followed all steps up to but not including line 65, as I don’t need torchvision.

If you’re looking for the tl;dr summary, here are the key points:

  • yes, this approach works! (although it took many hours spread over a few weeks to get to this point!)
  • the specific Amazon AMI you need is the one that’s currently in use for running AWS Lambdas (this will obviously change at some point but as of 9/3/18 this AMI works): amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2 (ami-aa5ebdd2)
  • a t2.micro EC2 instance does not have enough RAM to successfully build PyTorch. I used a t2.medium with 4GB RAM.
  • you can’t take a trained model .pt file generated with one version of PyTorch/torch and use it to generate text with a different version. The PyTorch version used for training and for generating output must be identical (see the sketch after this list)
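As a rough illustration of that last point, recording the PyTorch version alongside the saved model makes a mismatch obvious before you try to use it – the file name and stand-in model here are just examples:

import torch

# A stand-in for the real char-rnn model - any nn.Module works for the example
trained_model = torch.nn.Linear(4, 2)

# At training time: save the model together with the torch version that produced it
torch.save({"torch_version": torch.__version__, "model": trained_model},
           "example-model.pt")

# At generation time: warn if the installed torch doesn't match the training version
checkpoint = torch.load("example-model.pt")
if checkpoint["torch_version"] != torch.__version__:
    print("Model was trained with torch %s but torch %s is installed"
          % (checkpoint["torch_version"], torch.__version__))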

Ok, beyond the tl;dr summary above, here’s my experience following the steps in this article.

At line 63: 

python setup.py install

I got this error:

Could not find /home/ec2-user/pytorch/torch/lib/gloo/CMakeLists.txt
Did you run 'git submodule update --init'?

I ran the suggested ‘git submodule update --init’ and then re-ran the setup.py script; this time it ran for a while but ended with the error:

gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1

I spent a bunch of time trying to work out what was going on here, but decided to take a different direction: skip building Python 3.6 from source, and try recreating the steps using the Python 2.7 that is preinstalled in the Amazon Linux 2 AMI. The only part that is slightly different is that pip is not preinstalled, so I installed it with:

sudo yum install python-pip
sudo yum install python-wheel

Then virtualenv with:

sudo pip install virtualenv

From this point I pick up the steps from creating the virtualenv:

virtualenv ~/shrink_venv

After the step to build PyTorch, I now get (another) different error:

as: out of memory allocating 4064 bytes after a total of 45686784 bytes
{standard input}: Assembler messages:
{standard input}:934524: Fatal error: can't close build/temp.linux-x86_64-2.7/torch/csrc/jit/python_ir.o: Memory exhausted
torch/csrc/jit/python_ir.cpp:215:2: fatal error: error writing to -: Broken pipe

Ugh, I’m running on a t2.micro that only has 1GB of RAM. Let’s stop the instance, change the instance type to a t2.medium with 4GB, and try building again.

Running free before:

$ free
              total        used        free      shared  buff/cache   available
Mem:        1009384       40468      870556         288       98360      840700
Swap:             0           0           0

And now after resizing:

$ free
              total        used        free      shared  buff/cache   available
Mem:        4040024       55004     3825940         292      159080     3780552
Swap:             0           0           0

Ok, trying again, but since we’ve rebooted the instance, remembering to set the flags to minimize the build options which was the whole reason we were doing this:

$ export NO_CUDA=1
$ export NO_CUDNN=1

Next error:

error: could not create '/usr/lib64/python2.7/site-packages/torch': Permission denied

Ok, let’s run the build with sudo instead then. That fixes that.

Now I’m at a point where I can actually run the generate.py script, but I’ve got a completely different error:

/home/ec2-user/shrinkenv/lib/python2.7/site-packages/torch/serialization.py:316: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
Traceback (most recent call last):
  File "generate.py", line 54, in <module>
    decoder = torch.load(args.filename)
  File "/home/ec2-user/shrinkenv/lib/python2.7/site-packages/torch/serialization.py", line 261, in load
    return _load(f, map_location, pickle_module)
  File "/home/ec2-user/shrinkenv/lib/python2.7/site-packages/torch/serialization.py", line 409, in _load
    result = unpickler.load()
AttributeError: 'module' object has no attribute '_rebuild_tensor_v2'

Searching for the last part of this error found this post, which implies my trained model .pt file is from a different torch/pytorch version … which it most likely is as I trained using a version installed with pip, and now I’m trying to generate with a version built from source.

Rather than spend more time on this (some articles suggested you can read the .pt model from one PyTorch version and convert it, but this doesn’t seem like a trivial activity and requires writing some code to do the conversion), I’m going to train a new model with the same version I just built from source.

Now that’s successfully done, I have my Lambda handler script ready to go and ready to package up, so back to the final steps from the article to zip up everything built and installed so far in my virtualenv:

cd $VIRTUAL_ENV/lib/python2.7/site-packages
zip -r ~/kevinhookebot-ml-lambda-generate-py.zip *

We’re at 57MB, so looking ok so far (larger than the 50MB limit for a directly-uploaded zip, but that’s why we’re deploying via S3). Now add char-rnn.pytorch, my generated model and my Lambda handler into the same zip, and we’re at 58MB – too big to upload directly, but fine for a Lambda package deployed via S3 as long as it stays under the 250MB unzipped limit.

Let’s upload and deploy. Test calling the Lambda, and now we get:

Unable to import module 'generatelambda': /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /var/task/torch/lib/libshm.so)

Searching for this error I found this post, which has a link to this page listing a specific AMI version to be used when compiling dependencies for a Lambda deployment (amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2). The default Amazon Linux 2 AMI I picked is probably not this same image (and I had already tried the Amazon Linux AMI 2018.03.0 and ran into other issues with that one), so it looks like I need to start over (but getting close now, surely!)

Ok, new EC2 t2.medium instance with exactly the same AMI image as mentioned above. I retraced my steps and I’m now almost back at the same error as before:

error: command 'gcc' failed with exit status 1
gcc: error trying to exec 'cc1plus': execvp: No such file or directory

Searching some more for this I found this post with a solution: change the PATH to point to exactly where cc1plus is installed. Instead of 4.8.3, in this AMI it seems I have 4.8.5, so here are the settings I used:

$ export COMPILER_PATH="/usr/libexec/gcc/x86_64-amazon-linux/4.8.5/:$COMPILER_PATH";
export C_INCLUDE_PATH="/usr/lib/gcc/x86_64-amazon-linux/4.8.5/include/:$C_INCLUDE_PATH";

And then I noticed the post hadn’t included either of these variables when setting the new PATH, which seems like an oversight (I don’t think they make any difference if they are not on the PATH), so I set my PATH like this, including COMPILER_PATH first:

export PATH="$COMPILER_PATH:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:$PATH";

Now that cc1plus is in my path, let’s try building PyTorch again! In short, that worked.

Going back to the packaging steps, let’s zip everything up and push the zip to S3 ready to deploy.

I zipped up:

  • char-rnn.pytorch: from GitHub, including my own runner script and Lambda handler script
  • modules installed into virtualenv lib: ~/shrinkenv/lib/python2.7/site-packages/* (tqdm)
  • modules installed into virtualenv lib64: ~/shrinkenv/lib64/python2.7/site-packages/* (torch, numpy and unidecode)

At this point I used ‘aws s3 cp’ to copy the zip to S3, and configured my Lambda from the zip in S3. Set up a test to call my handler, and success!
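For reference, here’s a minimal sketch of the same upload-and-update step using boto3 instead of the aws CLI – the bucket name and function name are just placeholders:

import boto3

bucket = "my-lambda-deploy-bucket"        # placeholder bucket name
key = "kevinhookebot-ml-lambda-generate-py.zip"
function_name = "kevinhookebot-generate"  # placeholder Lambda function name

# Upload the deployment zip to S3 (equivalent to 'aws s3 cp')
boto3.client("s3").upload_file(key, bucket, key)

# Point the Lambda function at the new zip in S3
boto3.client("lambda").update_function_code(
    FunctionName=function_name,
    S3Bucket=bucket,
    S3Key=key,
)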