If you’ve followed any of my recent posts, you’ll know I’ve been using an RNN model, trained on my previous tweets and the text of all my previous blog posts, to generate text for a Twitter bot: @kevinhookebot.
The trouble I have right now is that the scripts that generate the models are written in Lua, and although I could install all of this on an EC2 instance, I don’t want to pay for an EC2 instance that’s up 100% of the time. Currently, when I generate a new batch of text for my Twitter bot, I start up a local server running the scripts and the model, generate the new text, and then stage it in DynamoDB to get picked up by the bot when it’s next scheduled to run. With the machine learning services AWS provides, there has to be something out of the box I can use to automate these steps.
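For context, the staging step is just writing the generated lines to a DynamoDB table that the bot reads on its schedule. A minimal sketch of that step with boto3 (the table name and attribute names here are hypothetical, not the bot’s actual schema):

```python
import boto3

# Hypothetical table and attribute names, for illustration only
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("tweet-queue")

def stage_tweets(generated_lines):
    """Write each generated line to DynamoDB for the bot to pick up
    on its next scheduled run."""
    with table.batch_writer() as batch:
        for i, line in enumerate(generated_lines):
            batch.put_item(Item={
                "id": str(i),        # partition key
                "text": line[:280],  # stay within the tweet length limit
                "posted": False,     # the bot flips this after tweeting
            })
```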
Let’s take a look at using AWS SageMaker.
First I created a SageMaker notebook with a new role that grants access to S3 buckets with ‘sagemaker’ in the name.
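For reference, the S3 part of the policy the console generates for that role is scoped roughly like this — the statement below is my simplified approximation, not the exact generated document:

```python
import json
import boto3

# Approximation of the generated policy: S3 access limited to
# buckets with 'sagemaker' in the name
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject",
                   "s3:DeleteObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::*sagemaker*",
                     "arn:aws:s3:::*sagemaker*/*"],
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="sagemaker-s3-access",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```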
Then I created an S3 bucket – sagemaker-kevinhooke-ml – and uploaded a copy of my data file (all my previous posts from this blog, concatenated into a single file).
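The same step scripted with boto3 — the bucket name is the one above; the data file name is a placeholder for my concatenated blog-post file:

```python
import boto3

s3 = boto3.client("s3")
bucket = "sagemaker-kevinhooke-ml"

# Create the bucket (in us-east-1; other regions also need a
# CreateBucketConfiguration) and upload the concatenated posts file
s3.create_bucket(Bucket=bucket)
s3.upload_file("all-blog-posts.txt", bucket, "data/all-blog-posts.txt")
```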
Next I created a new Training Job.
You need to pick an algorithm for the training, and there’s a selection of provided algorithms for different purposes. To generate new text ‘in the style of’ the text I’m going to train the model with, the ‘Sequence2Sequence’ algorithm looks like it does what I need.
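I’m doing this through the console, but for reference a rough equivalent with the SageMaker Python SDK (v1-style API) looks something like the sketch below — the output path is my bucket, everything else is a placeholder:

```python
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.estimator import Estimator

session = sagemaker.Session()
region = session.boto_session.region_name

# Resolve the region-specific container image for the built-in
# Sequence2Sequence algorithm
container = get_image_uri(region, "seq2seq")
role = get_execution_role()  # the notebook role created earlier

seq2seq = Estimator(
    container,
    role,
    train_instance_count=1,
    # a small CPU instance -- this choice comes back to bite me below
    train_instance_type="ml.m4.xlarge",
    output_path="s3://sagemaker-kevinhooke-ml/output",
    sagemaker_session=session,
)
```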
On completing the Training Job setup, I got this error:
Ok, so let’s change the instance type. I picked the smallest of the instances before:
And it looks like you can’t change the instance type on an existing Notebook, so let’s create a new Notebook. Looking at the instance types, the ones with GPU support are on the large side, so let’s pick the smallest of those options and try again.
At this point I realized the instance type it’s talking about is for the training job, not the notebook, and it’s specified here:
So let’s pick one of the GPU types and try again.
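In SDK terms, that’s just the training instance type parameter on the job; ml.p2.xlarge is (I believe) the smallest of the P2 GPU instances:

```python
# Recreate the estimator with a GPU training instance; the built-in
# seq2seq algorithm only runs on GPU instance types
seq2seq = Estimator(
    container,
    role,
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",  # smallest P2 GPU instance
    output_path="s3://sagemaker-kevinhooke-ml/output",
    sagemaker_session=session,
)
```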
First training job is running:
Next error:
Hmm, off to do some reading in the docs to see what’s needed to run this training job. The docs here describe the requirements for the sequence2sequence algorithm, and I’m clearly missing some steps.
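From a first read, the gap looks like input formatting: seq2seq doesn’t take a raw text file — per the docs it wants tokenized, RecordIO-protobuf encoded data plus vocabulary files, supplied as three named channels (train, validation, and vocab). If I’m reading the docs right, the fit call will end up looking something like this sketch (the paths are placeholders):

```python
from sagemaker.session import s3_input

prefix = "s3://sagemaker-kevinhooke-ml/seq2seq"

# The algorithm expects tokenized, RecordIO-protobuf encoded data
# plus vocab.src.json / vocab.trg.json vocabulary files, not raw text
channels = {
    "train": s3_input(prefix + "/train",
                      content_type="application/x-recordio-protobuf"),
    "validation": s3_input(prefix + "/validation",
                           content_type="application/x-recordio-protobuf"),
    "vocab": s3_input(prefix + "/vocab",
                      content_type="application/json"),
}

seq2seq.fit(inputs=channels)
```

So, taking a pause here, and I’ll come back with an update later once I’ve worked through the data prep.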