Configuring nginx virtual hosts with sites-available / sites-enabled

I recently ran into an issue with how /etc/nginx/sites-enabled is used (or not) depending on which nginx version you’re running, or more likely, according to this post, whether you’re running the nginx package from Debian/Ubuntu or the official upstream nginx package.

The difference (which is pretty significant if you’re not expecting it) is that:

  • the Debian/Ubuntu nginx packages include sites-enabled in nginx.conf by default
  • the official upstream nginx package does not

What this means is on Debian/Ubuntu, in /etc/nginx/nginx.conf you’ll have this include:

include /etc/nginx/sites-enabled/*;

but it’s missing if you run the upstream nginx package, such as the one in the Docker image nginx:latest.
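You can check which includes your nginx config actually resolves to by dumping the full configuration with nginx -T, for example (using the nginx:latest image here; adjust for your setup):

```shell
# Dump the fully-resolved config from the official image and look
# for any reference to sites-enabled (expect no matches there)
docker run --rm nginx:latest nginx -T | grep -n "sites-enabled"

# On a Debian/Ubuntu host install, list all include directives instead
nginx -T 2>/dev/null | grep -n "include"
```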

In my case, running the Docker image nginx:latest, every GET request was getting a 404, and the only clue was in my error.log: each request was attempting to serve files from the default /usr/share/nginx/html folder, which is definitely not the root configured in my config in /etc/nginx/sites-enabled:

open() "/usr/share/nginx/html/[url-request-here] HTTP/1.1" failed (2: No such file or directory)

Once I worked out what the issue was, I just added the include line for sites-enabled to nginx.conf.
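A minimal sketch of the fix, assuming the stock nginx.conf layout (the include must sit inside the http block; the surrounding settings here are placeholders):

```nginx
# /etc/nginx/nginx.conf
http {
    # ... existing settings ...

    # Pick up vhost configs from sites-enabled, matching the
    # Debian/Ubuntu packaging convention:
    include /etc/nginx/sites-enabled/*;
}
```

Then enable a site the usual Debian way, symlinking from sites-available into sites-enabled and reloading: ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite followed by nginx -s reload.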

Migrating from Mastodon botsin.space: self-hosted vs hosting service alternatives

Given the news that the bot-friendly Mastodon instance https://botsin.space/home is shutting down, I need to decide what my next steps should be for the bots that have accounts on that instance:

  • abandon them
  • migrate their accounts to another Mastodon instance or somewhere else like BlueSky
  • set up and run my own Mastodon instance
  • pay for a hosted Mastodon instance

Developing bots is a fun personal project for getting up to speed with developing and running services in the cloud. Even if I don’t continue running my current bots, it’s likely I’ll deploy something else bot-related in the future, so I’m most likely going to migrate them somewhere.

I’ve already migrated a few of my bots from Twitter to Mastodon, and now, faced with another move, the option of running my own Mastodon instance seems more appealing than relying on someone else’s instance that may or may not be running months from now. Given that I already host other things in the cloud, including this blog, I thought I’d give it a go to set up a Docker-based Mastodon instance. The source project provides a Dockerfile and docker-compose.yml, so I thought it would probably be relatively easy. The docs are more detailed for installing on a bare OS though, so it’s not as obvious what you need to do to configure a containerized instance and get it up and running successfully.

I followed multiple guides, which each cover different parts of the install and setup; these two were the most comprehensive:

Despite following these guides, I ran into many, many issues, and as I found solutions I started putting together my own step-by-step guide below. Several times I discovered that the issues I was running into were because there was an additional step I needed to run first that wasn’t mentioned elsewhere, and even though I found workarounds it was easier to throw the install away and start fresh, adding the step(s) I’d missed before.

The tl;dr conclusion

After spending a few hours over several days, I got to the point of having an instance up and running on GCP, but an e2-small instance was too slow, and while an upgrade to e2-medium ran ok, that instance type would have been too expensive for a hobby project to leave up 24×7. Even though it was up and running, I couldn’t seem to search for or follow anyone on another instance, or get any relays successfully added.

To run a self-hosted instance I’d also need an SMTP service for notification emails, so I decided that the cheapest ‘Moon’ hosting plan from https://masto.host/ would be more than enough for my projects, so I’ve set up my own instance with them. Sign up was effortless, and my own instance was up and running in a couple of minutes – it’s at: https://mastodon.kevinhooke.com/home

docker-compose Mastodon setup steps:

As explained above, despite getting to the point of a running server, it still had issues that I didn’t want to spend more time investigating. I’ll leave these notes here in case they’re useful for someone else running into similar issues, but take them with a grain of salt: there’s no guarantee you’ll get a working server as a result.

  1. Clone the Mastodon repo
  2. cp .env.production.sample .env.production
  3. Run secret generation steps from comments in .env.production and paste generated values into .env.production, using
docker compose run --rm web bin/rails db:encryption:init

and (run this one twice for SECRET_KEY_BASE and OTP_SECRET):

docker compose run --rm web bundle exec rails secret

and this one for VAPID_PUBLIC_KEY and VAPID_PRIVATE_KEY:

docker compose run --rm web bundle exec rails mastodon:webpush:generate_vapid_key

  4. Replace any localhost references in .env.production with the names of the Docker containers, for example:

REDIS_HOST=redis
DB_HOST=db
ES_HOST=es
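Those hostnames need to match the service names in docker-compose.yml, since Docker’s internal DNS resolves service names between containers. A trimmed sketch of the relevant services (service names are the ones Mastodon’s compose file typically uses, and the image tags here are illustrative – verify against your copy):

```yaml
services:
  db:          # referenced as DB_HOST=db
    image: postgres:14-alpine
  redis:       # referenced as REDIS_HOST=redis
    image: redis:7-alpine
  es:          # referenced as ES_HOST=es
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
  web:
    image: ghcr.io/mastodon/mastodon
```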

  5. Run the db setup step:

docker compose run --rm web bundle exec rails db:setup

I’d previously missed this step and instead got the db set up via several manual steps – skip these if you run db:setup. First, run psql in the db service container and manually create a mastodon user:

CREATE USER mastodon WITH PASSWORD '<password>' CREATEDB; 

Then run the db:create task. If you get an error that the db already exists, run the db:migrate task instead.
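For reference, those tasks run via docker compose look like this (only needed if you skip db:setup, which runs create and schema load in one go):

```shell
# Create the database (errors if it already exists)
docker compose run --rm web bundle exec rails db:create

# If the database already exists, apply migrations instead
docker compose run --rm web bundle exec rails db:migrate
```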

Mounted Volume ownership

Within your Mastodon dir, change the ownership of the following folders, which get mounted as volumes.

For static content accessed by the web container:

sudo chown -R 991:991 public

For elasticsearch runtime data:

sudo chown -R 1000:root elasticsearch 

… this avoids errors in the es logs about being unable to access the mounted volume (from here):

AccessDeniedException: /usr/share/elasticsearch/data/nodes

ElasticSearch vm.max_map_count error

bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
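The standard fix for this bootstrap check is to raise vm.max_map_count on the Docker host (not inside the container):

```shell
# Apply immediately (resets on reboot)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```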

ElasticSearch error in Admin page

Elasticsearch index mappings are outdated. Please run tootctl search deploy --only=instances tags

To fix, ‘docker exec -it container-id bash’ into the web container and run the suggested tootctl command.
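For example (the container name here is a guess based on compose’s default naming – check yours with docker ps):

```shell
# Shell into the running web container
docker exec -it mastodon-web-1 bash

# Then, inside the container, rebuild the indexes the admin page reported
RAILS_ENV=production bin/tootctl search deploy --only=instances tags
```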

Post-install setup

Create the initial admin account (run inside the web container):

RAILS_ENV=production bin/tootctl accounts create \
alice \
--email alice@example.com \
--confirmed \
--role Owner

Troubleshooting

On starting up, if you get any database connection errors, check the previous step about replacing localhost with Docker container names:

Did you not create the database, or did you delete it? To create the database, run: bin/rails db:create

docker-compose (v1.29.2) to remote host with ssh fails

I have a personal project that is docker-compose based which I’ve deployed to remote servers in the past (a few years ago, using steps here), and recently, attempting to redeploy it from a more recent self-hosted GitLab pipeline on Ubuntu 24.04, I get this error:

docker.errors.DockerException: Install paramiko package to enable ssh:// support

This issue is exactly as described on this ticket. It also seems to be OS specific as well as docker-compose version specific – I have docker-compose 1.29.2 on macOS Sequoia and it works fine, but 1.29.2 on Ubuntu 24.04 or 22.04 fails with the above error.

The workaround, as described by multiple comments on the ticket, is to not use the version installed by apt-get, and instead install a specific older, working version with pip:

pip3 install docker-compose==1.28.2
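For context, the ssh:// support that triggers this error is what lets docker-compose drive a remote Docker daemon over ssh, used like this (user and host are placeholders):

```shell
# Point docker-compose at a remote Docker daemon over ssh
export DOCKER_HOST=ssh://deploy@example.com

# Compose commands now run against the remote host's daemon
docker-compose up -d
```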