Using AWS SageMaker to train a model to generate text (part 2)

This is part 2 following on from my previous post, investigating how to take advantage of AWS SageMaker to train a model and use it to generate text for my Twitter bot, @kevinhookebot.

The AWS SageMaker docs mention that, to get the data into a supported format for training a model, "A script to convert data from tokenized text files to the protobuf format is included in the seq2seq example notebook".

Ok, so let's start up the SageMaker notebook I created in part 1 via the AWS console:

Once it's started, clicking the 'Open' link opens the Jupyter notebook, and from there we can open the seq2seq example in the 'SageMaker Examples' section:

From looking at the steps in this example notebook, it's clear that this seq2seq algorithm is more focused on translating text from a source to a target (such as translating text in one language to another, as shown in this example notebook).

Ok, so this isn't what I was looking for, so let's change gears. My main objective is to be able to train a new model using the AWS SageMaker service and generate text from it. From what I understand so far, you have two options for how you can use SageMaker: you can either use the AWS Console for SageMaker to create Training Jobs using the built-in algorithms, or you can use a Jupyter notebook and define the steps yourself in Python to retrieve your data source, prepare the data, and train a model.

At this point the easiest thing might be to look for another Recurrent Neural Net (RNN) to generate characters to replace the Lua Torch char-rnn approach I was previously running locally on an Ubuntu server. Doing some searching I found char-rnn.pytorch.

This is my first experience setting up a Jupyter notebook, so at this point I've no idea if what I'm doing is the right approach, but I've got something working.

On the right-hand side of the notebook I pressed the New button and selected a Python PyTorch notebook:

Next I added a step to clone the char-rnn.pytorch repo into my notebook:
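For reference, this step is just a shell command run from a notebook cell, something like the following (assuming the char-rnn.pytorch repo is the spro/char-rnn.pytorch project on GitHub):

[code]
# clone the char-rnn.pytorch project into the notebook's working directory
!git clone https://github.com/spro/char-rnn.pytorch.git
[/code]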

Next I added a step to use the aws cli to copy my data file for training the model into my notebook:
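Something along these lines, assuming the training data was already uploaded to an S3 bucket (the bucket and file names here are just placeholders):

[code]
# copy the training data file from S3 into the cloned project directory
!aws s3 cp s3://your-sagemaker-bucket/training-data.txt char-rnn.pytorch/
[/code]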

Next, adding the config options to train a model using char-rnn.pytorch, I added a step to run the training, but it gave an error about some missing Python modules:

Adding an extra step to use pip to install the required modules:
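The modules char-rnn.pytorch typically needs that aren't preinstalled are unidecode and tqdm, so a cell like this should cover it (adjust the list to whatever the error reports as missing):

[code]
# install the Python modules reported as missing
!pip install unidecode tqdm
[/code]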

The default number of epochs is 2,000, which takes a while to run, so decreasing this to something smaller with --n_epochs 100 we get a successful run, and calling the generate script, we have content!
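The training and generate cells look something like this (the data file name is a placeholder, and the exact arguments may vary depending on the version of the char-rnn.pytorch scripts):

[code]
# train for a reduced number of epochs; the script saves a .pt model file
# named after the input data file
!cd char-rnn.pytorch && python train.py training-data.txt --n_epochs 100

# generate text from the trained model
!cd char-rnn.pytorch && python generate.py training-data.pt
[/code]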

I trained with an incredibly small file to get started, just 100 lines of text, for a very short time. So as next steps I'm going to look at:

  • training with the full WordPress export of all my posts for a longer training time
  • training with a cleaned-up export (removing URL links and other HTML markup)
  • automating the text generation from the model to feed my AWS Lambda based bot

I'll share another update on these enhancements in my next post.

 

Using AWS SageMaker to train a model to generate text (part 1)

If you've followed any of my recent posts, you'll know I've been using an RNN model trained on my previous tweets and the text from all of my previous blog posts to generate text, and feeding this into a Twitter bot: @kevinhookebot.

The trouble I have right now is that the scripts and the generated models run using Lua, and although I could install this on an EC2 instance, I don't want to pay for an EC2 instance being up 100% of the time. Currently, when I generate a new batch of text for my Twitter bot, I start up a local server running the scripts and the model, generate new text, and then stage it in DynamoDB to get picked up by the bot when it's scheduled to next run. With the Machine Learning services AWS provides, there has to be something out of the box I can use on AWS to automate these steps.

Let’s take a look at using AWS SageMaker.

First I created a SageMaker notebook with a new role that grants access to S3 buckets with 'sagemaker' in the name.

Then I created an S3 bucket – sagemaker-kevinhooke-ml – and uploaded a copy of my data file (all my previous posts from this blog, concatenated into a single file).
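For reference, the aws cli equivalent of creating the bucket and uploading the file is something like this (the data file name is just an example):

[code]
# create the bucket and upload the concatenated blog post text
aws s3 mb s3://sagemaker-kevinhooke-ml
aws s3 cp all-blog-posts.txt s3://sagemaker-kevinhooke-ml/
[/code]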

Next I created a new Training Job.

You need to pick an algorithm for the training, and there's a selection of provided algorithms for different purposes. To generate new text 'in the style of' the text that I'm going to train the model with, the 'Sequence2Sequence' algorithm looks like it does what I need.

On completing the Training Job, I got this error:

Ok, so let’s change the instance type. I picked the smallest of the instances before:

And it looks like you can’t change the type on the Notebook. So let’s create a new Notebook. Looking at the instance types, the ones with GPU support are on the large side, so let’s pick the smallest of the options and try again.

At this point I realized the instance type it's talking about is for the training job, not the notebook, and it's specified here:

So let’s pick one of the GPU types and try again.

First training job is running:

Next error:

Hmm, off to do some reading in the docs to see what's needed to run this training job. The docs here describe what's needed for the sequence2sequence algorithm, and I'm clearly missing some steps, so I'm taking a pause here and will come back with an update later.

Installing and Configuring Atlassian Confluence with MySQL in Docker Containers

Atlassian Confluence is already available as a Docker image on Docker Hub, but you still need to provide a database instance for a production setup. Let's build a docker-compose file to create a container from this image together with a container running MySQL.

First, per the docs on the Docker Hub page, create an external folder /data/confluence that will get mounted as a volume by the container.
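On the Docker host this is just something like the following (you may also need to adjust ownership or permissions so the user inside the container can write to it):

[code]
# create the folder on the Docker host that the Confluence container will mount
sudo mkdir -p /data/confluence
[/code]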

This is my first version to get this working (keep reading for how this gets refined to include a JDBC driver):

[code]
version: '3'
services:
  confluence:
    image: atlassian/confluence-server
    restart: always
    volumes:
      - /data/confluence:/var/atlassian/application-data/confluence
    ports:
      - 8090:8090
      - 8091:8091
  confl-mysql:
    build: ./mysql
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_DATABASE=confluence
      - MYSQL_USER=confluence
      - MYSQL_PASSWORD=your-password
[/code]

After hitting your-ip:8090 for the first time, you can pick the ‘My own database’ option:

To connect to a MySQL db you need to drop a MySQL JDBC driver into /opt/atlassian/confluence/confluence/WEB-INF/lib, so at this point we've got a couple of options. We could either copy the JDBC driver into the running container (but since containers are ephemeral, we'd lose this change if we started a new container from the image), or take a step back and rebuild the image to include the driver.
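For the first option, the copy into a running container would look something like this (the container name and driver jar version here are just examples), but as noted, the change wouldn't survive recreating the container:

[code]
# copy the JDBC driver into the running Confluence container
# (not persistent across container recreation)
docker cp mysql-connector-java-5.1.46.jar confluence:/opt/atlassian/confluence/confluence/WEB-INF/lib/
[/code]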

The right thing to do would be to rebuild a custom image including the driver. So let’s do that.

Download the MySQL Connector driver from here.

Let's commit it into our project and add a new Dockerfile to build a modified version of the official Confluence image, which is just these two lines:

[code]

FROM atlassian/confluence-server
COPY mysql-connector-java-5.1.46.jar /opt/atlassian/confluence/confluence/WEB-INF/lib

[/code]

Update the docker-compose file to build this new image instead of using the provided one from Docker Hub. Replace:

[code]

image: atlassian/confluence-server

[/code]

with

[code]

build: ./confl-mysql

[/code]

(or the corresponding directory name for your custom image containing the above Dockerfile)

Now when we start up this container and hit the app, the JDBC driver is recognized and we're on to the next config page for our database connection params:

Entering our credentials and pressing Test, we get an error about the default encoding:

To address this, the Confluence setup docs here describe editing the my.cnf file in MySQL, or alternatively I could pass the params to MySQL. The MySQL docs have a chapter on configuring and running MySQL in Docker, and this Q&A on Stack Overflow describes passing the optional params in a command section in your docker-compose file.

My first attempt was to add this:

[code]
confl-mysql:
  build: ./mysql
  restart: always
  command: character-set-server=utf8 collation-server=utf8_bin
[/code]

but the syntax was not quite right yet, resulting in the container getting stuck in a restart loop, with this error appearing in the container logs:

/usr/local/bin/docker-entrypoint.sh: line 202: exec: character-set-server=utf8: not found

Reading the docs for the command option, the command in the docker-compose file needs to be the command to start the app in the container, as well as the optional params. So now I'm here:

[code]
confl-mysql:
  build: ./mysql
  restart: always
  command: [mysqld, --character-set-server=utf8 --collation-server=utf8_bin]
[/code]

Now we're getting closer. Logs from my MySQL container are now showing:

ERROR: mysqld failed while attempting to check config

command was: "mysqld --character-set-server=utf8 --collation-server=utf8_bin --verbose --help"

mysqld: Character set 'utf8 --collation-server=utf8_bin' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file

Some Googling made me realize each of the params is comma separated, so the next update is:

[code]
confl-mysql:
  build: ./mysql
  restart: always
  command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin]
[/code]

and now we've got both containers starting up. The list of params should be updated to add all the optional params listed in the Confluence MySQL setup docs, otherwise you'll get an error for each missing param. The complete list is:

[code]
command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin, --default-storage-engine=INNODB, --max_allowed_packet=256M, --innodb_log_file_size=2GB, --transaction-isolation=READ-COMMITTED, --binlog_format=row]
[/code]

… and my VM has run out of disk space, so time to expand my disk. Back shortly.

Ok, back. Restarted and now we’re in business:

Completing the config, and now the containers are up!

Changing a GitLab Runner from ‘Locked to a Project’ to Shared

I have a GitLab Runner assigned to a project that I’d like to share with another similar project. Currently it looks like this:

Pressing the small edit icon, I can see these options:

I want to reuse this same runner, so I unchecked the ‘Lock to current projects’ checkbox.

Now if I go to the CI/CD settings for my other project, I can see it is available, so I click 'Enable for this project'.

Now my Pending Job that was triggered after my first push to my repo has kicked in and is being deployed to my test Docker server. Cool.