Migrating from Mastodon botsin.space: self-hosted vs hosting service alternatives

Given the news that the bot-friendly Mastodon instance https://botsin.space/home is shutting down, I need to decide what my next steps should be for the bots that have accounts on that instance:

  • abandon them
  • migrate their accounts to another Mastodon instance or somewhere else like BlueSky
  • set up and run my own Mastodon instance
  • pay for a hosted Mastodon instance

Developing bots is a fun personal project for getting up to speed with developing and running services in the cloud. Even if I don’t continue running my current bots, it’s likely I’ll deploy something else bot-related in the future, so I’m most likely going to migrate them somewhere.

I’ve already migrated a few of my bots from Twitter to Mastodon, and now, faced with another move, the option of running my own Mastodon instance seems more appealing than relying on someone else’s instance that may or may not still be running months from now. Given that I already host other things in the cloud, including this blog, I thought I’d give it a go to set up a Docker-based Mastodon instance. The source project provides a Dockerfile and docker-compose.yml, so I thought it would probably be relatively easy. The docs are more detailed for installing on a bare OS though, so it’s not obvious what you need to do to configure a Docker-based instance and get it up and running successfully.

I followed multiple guides, which each seem to cover different parts of the install and setup; these two were the most comprehensive:

Despite following these guides, I ran into many, many issues, and as I found solutions I started putting together my own step-by-step guide below. Several times I discovered that the issues I was running into were caused by an additional step I needed to run first that wasn’t mentioned elsewhere, and even though I found workarounds, it was often easier to throw the install away and start fresh, adding the step(s) I’d missed before.

The tl;dr conclusion

After spending a few hours over several days, I got to the point of having an instance up and running on GCP, but an e2-small instance was too slow, and while an upgrade to an e2-medium ran ok, that instance type would have been too expensive for a hobby project to leave running 24×7. Even though it was up and running, I couldn’t seem to search for or follow anyone on another instance, or get any relays successfully added.

To run a self-hosted instance I’d also need an SMTP service for notification emails, so I decided that the cheapest ‘Moon’ hosting plan from https://masto.host/ would be more than enough for my projects, and I’ve set up my own instance with them. Sign-up was effortless, and my own instance was up and running in a couple of minutes – it’s at: https://mastodon.kevinhooke.com/home

docker-compose Mastodon setup steps:

As explained above, despite getting to the point of a running server, it still had issues that I didn’t want to spend more time investigating, so I’ll leave these notes here in case they’re useful for someone else running into similar issues. Please take them with a grain of salt – there’s no guarantee you’ll get a working server as a result.

  1. Clone the mastodon repo
  2. cp .env.production.sample .env.production
  3. Run the secret generation steps from the comments in .env.production and paste the generated values into .env.production (see the example below), using:

    docker compose run --rm web bin/rails db:encryption:init

and (run this one twice, once for SECRET_KEY_BASE and once for OTP_SECRET):

    docker compose run --rm web bundle exec rails secret

and this one for VAPID_PUBLIC_KEY and VAPID_PRIVATE_KEY:

    docker compose run --rm web bundle exec rails mastodon:webpush:generate_vapid_key
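
For illustration, the generated values end up in .env.production looking something like this (placeholder values shown here – generate and paste your own):

    # generated by db:encryption:init
    ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=...
    ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=...
    ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=...
    # generated by the two runs of rails secret
    SECRET_KEY_BASE=...
    OTP_SECRET=...
    # generated by mastodon:webpush:generate_vapid_key
    VAPID_PRIVATE_KEY=...
    VAPID_PUBLIC_KEY=...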

4. Replace any localhost references in .env.production with the names of the Docker Compose services, for example:

    REDIS_HOST=redis
    DB_HOST=db
    ES_HOST=es

5. Run the db setup step:

    docker compose run --rm web bundle exec rails db:setup

I’d previously missed this step and so had managed to get the db set up via several manual steps instead – skip these if you run db:setup. First, run psql in the db service container and manually create a mastodon user:

    CREATE USER mastodon WITH PASSWORD '<password>' CREATEDB; 

Then run the db:create script (see below). If you get an error that the db already exists, run the db:migrate script instead.
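
For reference, following the same docker compose pattern as the steps above, those are:

    docker compose run --rm web bundle exec rails db:create
    docker compose run --rm web bundle exec rails db:migrate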

Mounted Volume ownership

Within your mastodon dir, change the permissions on the following folders, which get mounted as volumes.

For static content accessed by the web container:

    sudo chown -R 991:991 public

For elasticsearch runtime data:

    sudo chown -R 1000:root elasticsearch 

… this avoids errors in the es logs about being unable to access the mounted volume (from here):

    AccessDeniedException: /usr/share/elasticsearch/data/nodes

ElasticSearch vm.max_map_count error

If the elasticsearch container fails its bootstrap checks on startup with this error:

    bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
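
The standard fix (from the Elasticsearch docs) is applied on the Docker host, not inside the container – raise the limit with sysctl and persist it across reboots:

    # apply immediately on the Docker host
    sudo sysctl -w vm.max_map_count=262144
    # persist across reboots
    echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf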

ElasticSearch error in Admin page

    Elasticsearch index mappings are outdated. Please run tootctl search deploy --only=instances tags

‘docker exec -it container-id bash’ into the web container and run the suggested tootctl command to fix, as shown below.
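
For example – the container id here is a placeholder for whatever docker ps shows for your web container, and the tootctl command is the one from the admin page message:

    docker exec -it <web-container-id> bash
    RAILS_ENV=production bin/tootctl search deploy --only=instances tags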

Post-install setup

Create your first admin account by running tootctl in the web container:

    RAILS_ENV=production bin/tootctl accounts create \
    alice \
    --email alice@example.com \
    --confirmed \
    --role Owner

Troubleshooting

On starting up, if you get any database connection errors like the one below, check the previous step about replacing localhost with the Docker container names:

    Did you not create the database, or did you delete it? To create the database, run: bin/rails db:create

docker-compose (v1.29.2) to remote host with ssh fails

I have a personal project that is docker-compose based, which I’ve deployed to remote servers before (a few years ago, using steps here). Recently, attempting to redeploy it from a more recent self-hosted GitLab pipeline on Ubuntu 24.04, I get this error:

    docker.errors.DockerException: Install paramiko package to enable ssh:// support

This issue is exactly as described on this ticket. It also seems to be OS specific as well as docker-compose version specific – I have docker-compose 1.29.2 on macOS Sequoia and it works fine, but 1.29.2 on Ubuntu 24.04 or 22.04 fails with the above error.

The workaround, as described by multiple comments on the ticket, is to not use the version installed by apt-get, and instead install a specific older/working version with pip:

    pip3 install docker-compose==1.28.2
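
For context, the remote deploys in question point docker-compose at the remote Docker daemon over ssh, something like this (user and host are placeholders):

    # run compose commands against a remote Docker daemon over ssh
    DOCKER_HOST=ssh://user@remote-host docker-compose up -d

It’s the ssh:// connection that triggers the paramiko requirement.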

Deploying a container to Google Cloud Run via gcloud cli

If you don’t already have one, create an Artifact Registry repository:

    gcloud artifacts repositories create your-repo-name \
    --repository-format=docker \
    --location=europe-west2 \
    --description="your-repo-description" \
    --immutable-tags \
    --async

Authorize gcloud cli access to the registry in your region:

    gcloud auth configure-docker europe-west2-docker.pkg.dev

This adds config to $HOME/.docker/config.json – you can look in this file to see which GCP registries you have already authenticated with.
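
For example, after the command above the file contains a credHelpers entry mapping the registry to the gcloud credential helper:

    {
      "credHelpers": {
        "europe-west2-docker.pkg.dev": "gcloud"
      }
    }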

The image you’re deploying needs to listen on port 8080, and needs to be built for linux/amd64. If you’re building on an Apple Silicon Mac, build your image with:

    docker build . --platform linux/amd64 -t image-tag-name 

Tag the image ready to push to your registry:

    docker tag SOURCE-IMAGE LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY/IMAGE:TAG

where:

    LOCATION = GCP region, e.g. europe-west2
    PROJECT-ID = your GCP project ID
    REPOSITORY = the repository name created earlier
    IMAGE:TAG = your local image name and tag

Push your image to the Artifact Repository with:

    docker push LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY/IMAGE:TAG

After pushing, you can browse your Artifact Registry in the Console and see your image there.

To deploy a new service using the image you just pushed:

    gcloud run deploy gcp-nginx-test --project your-project-name --image LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY/IMAGE:TAG
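
gcloud will prompt for anything it still needs, like the region. To run it non-interactively you can pass the region as a flag, and for a public test service allow unauthenticated access – a sketch reusing the placeholders above:

    gcloud run deploy gcp-nginx-test \
      --project your-project-name \
      --image LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY/IMAGE:TAG \
      --region europe-west2 \
      --allow-unauthenticated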

These steps are a summary of the Artifact Registry docs here, and the Cloud Run docs here.

GitLab Runner unable to run Docker commands

I have a GitLab Runner using a Shell Executor that needs to build a Docker container. When it executes the first Docker command, it gets this error:

    docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))

If I log on as the gitlab-runner user and try to execute docker commands manually I get this error:

    $ docker ps
    permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/containers/json": dial unix /var/run/docker.sock: connect: permission denied

A quick Google later, and I need to add the gitlab-runner user to the docker group to grant it permission to use the Docker daemon socket:

    sudo usermod -a -G docker $USER
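
Note that $USER only resolves to gitlab-runner if you run this while logged on as that user – otherwise name the user explicitly. The runner also needs to be restarted to pick up the new group membership (assuming the default gitlab-runner service name):

    sudo usermod -a -G docker gitlab-runner
    sudo systemctl restart gitlab-runner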