Docker-specific lightweight OSes: Installing RancherOS under KVM on Debian

I’ve been playing around with Docker for a while and feel like I’m at the point where I have questions about how I’d scale up and manage my container deployments, so I’m interested in checking out some Docker management tools. I’ve had my eye on Rancher for a while and have been curious about their RancherOS, a minimal installable OS dedicated to running and managing Docker containers. I don’t have a spare machine free for a bare metal install, so I wondered what it would take to install it in KVM, and whether it would be usable.

The machine I’m installing on is an old HP desktop with a Phenom X4 processor and only 4GB RAM. The host is running Debian.

Using KVM, I created a new VM with 1 vCPU and 1GB RAM, and booted it from the RancherOS ISO.

Continuing with the instructions in the install guide here, I wondered how I would paste in my public ssh key from my host and dev machines while running RancherOS as a guest in KVM. The instructions have you create a cloud-config.yml that includes your ssh public key. After RancherOS has booted you can use vi to create the file, but pasting into the guest with no guest extensions installed isn’t possible. You can ssh from the VM out to your host where your key is located, but going in that direction is not much help. What you really need is to ssh into the guest VM from a host machine, and then you can easily create the cloud-config.yml and paste in your keys.
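For reference, the file can be as minimal as this, a sketch of a cloud-config.yml with a placeholder key (substitute your actual public key):

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAA... user@devmachine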

Trouble is, the whole point of these initial setup steps, installing with a config file that includes your keys, is to enable ssh access to your RancherOS install, so this is a bit of a chicken and egg situation. You can’t ssh in remotely because there’s no default password for the rancher user, and you can’t ssh in with a key, because you haven’t copied it across to the RancherOS install yet.

Searching around, it turns out I’m not the only one installing in a VM who has hit this issue. The trick, as suggested here, is to reset the rancher user’s password on first boot (before starting the install), so at least you know what the initial password is (there’s deliberately no default password, for security reasons, see here). Look up the IP with ifconfig on first boot, reset the password, then you can ssh in from outside, create the cloud-config.yml file, paste in your key(s), and then install per the instructions with:

sudo ros install -c cloud-config.yml -d /dev/sda
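Putting those first-boot steps together, the whole sequence looks roughly like this (the IP address is just an example):

# in the VM console, on first boot:
sudo passwd rancher              # set a known password for the rancher user
ifconfig eth0                    # note the VM's IP address

# then from the host:
ssh rancher@192.168.122.50       # log in with the password you just set
vi cloud-config.yml              # paste in your public key(s)
sudo ros install -c cloud-config.yml -d /dev/sda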

After the install completed, I was able to ssh in from outside with the key I had added to cloud-config.yml, and then, following the next section in the docs, listed the available services, all of which showed as disabled.
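For reference, that listing comes from the ros service subcommand:

sudo ros service list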

Per the docs here, I first enabled the rancher service with:

sudo ros service enable rancher-service

and since everything that runs on RancherOS is a Docker container, it starts downloading the image layers.

And then started it up with:

sudo ros service up rancher-service

At this point I was expecting to find Rancher running on port 8080, but it was still not up. The docs seem a bit lacking in this area. Googling ‘how to run rancher on rancheros’ gave a few suggestions, but mainly echoing what I’d already done. Running ‘docker ps’ in the guest VM showed me that the container was up and running and listening on 8080, so I tried again in my browser, and it seems it just took a few seconds to start up. I now have Rancher running on RancherOS in a KVM VM! Now to start checking it out!
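A quick way to check progress from inside the guest while you wait (assuming the default port above):

docker ps                        # confirm the container is up and publishing 8080
curl -I http://localhost:8080    # returns HTTP headers once the server has started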


Installing Docker in an AWS EC2 instance

AWS offers their own EC2 Container Service (ECS), which simplifies deploying Docker containers to EC2 instances (and clusters of instances) and managing your containers. If you want to do it yourself though, you can easily install Docker in your own instance.

For example, in an Ubuntu EC2 instance:

sudo apt-get update
sudo apt-get install docker.io

Start the docker service with:

sudo service docker start
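As a quick smoke test that the install worked:

sudo docker run hello-world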

If you want to manage your own Docker install on EC2, AWS has a guide walking you through what you need to know – for further details see here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

(The latest apt packages from Docker’s own repository are docker-ce and docker-ee – see the Docker docs here for more info.)

Configuring nginx to proxy REST requests across multiple Spring Boot microservices in Docker Containers

Using nginx to proxy requests across Docker containers is a common use case, and it’s covered in many posts and articles. Getting it working from scratch, though, is a not-so-trivial task. Following various articles, I got confused about whether I needed to load balance requests across identical container instances, or whether I needed nginx to proxy requests across my different containers. I ran into quite a few configuration issues and challenges, but mostly, I think, because I didn’t understand what I was trying to do (I ended up with configuration trying to load balance and proxy at the same time, which was not what I needed) 🙂

Articles like this one and this one show how to configure ‘upstream’ servers that you can load balance requests across. My understanding is that this is what you need if you have multiple identical server or container instances and you want nginx to load balance across them. After a while of trying to get that configuration working, I realised all I needed was the proxy_pass config for nginx, telling it to proxy requests for a matching URL to a given Spring Boot service in a container. This question captures the approach well.
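To make the distinction concrete, the upstream/load-balancing style looks something like this (a sketch with hypothetical replica names, not config from this project):

upstream service1_backend {
  server springbootservice1a:8080;
  server springbootservice1b:8080;
}
server {
  listen 80;
  location /service1 {
    proxy_pass http://service1_backend;
  }
}

That’s what you want for identical replicas; for routing each path to a single, different container, plain proxy_pass (covered below) is enough.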

To explore a typical configuration, I have 2 simple Spring Boot REST services, springbootservice1 and springbootservice2, each of which will be in its own Docker container. When run in individual containers, they are accessed via:

http://localhost:8080/service1/example1

and

http://localhost:8080/service2/example2

Without additional configuration to run them on different ports, they obviously can’t be run as standalone Spring Boot services on the same host at the same time. Assigning different ports would be trivial with just two services, but once you scale this approach up to many services, managing different ports by manual configuration is not particularly practical.

Out of the box, Docker lets you manage the exposed external ports easily when you ‘docker run’ a container, with the ‘-p’ option. What I was interested in was a solution that runs them in different containers without manually defining and managing ports by hand, so that a web app can call any of the services on port 80 and has no need to know which ports the services are actually running on.
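For contrast, the manual port-juggling approach I wanted to avoid looks like this (host ports chosen arbitrarily):

docker run -d -p 8081:8080 springbootservice1
docker run -d -p 8082:8080 springbootservice2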

The Dockerfile for each service looks like this (replace Service1 with Service2 for the second service):

#Official JDK8 Alpine-based image
FROM java:openjdk-8-alpine
ADD target/SpringBootService1-0.0.1-SNAPSHOT.jar /opt/SpringBootService1-0.0.1-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/SpringBootService1-0.0.1-SNAPSHOT.jar"]
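Since the docker-compose file further down refers to these images by name, each service image needs to be built and tagged first, something like this from each project’s directory (the Maven step is an assumption based on the jar path in the Dockerfile):

mvn package
docker build -t springbootservice1 .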

The Dockerfile for nginx looks like this:

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY conf /etc/nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

And here’s my config for nginx:

events { worker_connections 1024; }

http {
  server {
    listen 80;

    location /service1 {
      proxy_pass http://springbootservice1:8080/service1/;
    }

    location /service2 {
      proxy_pass http://springbootservice2:8080/service2/;
    }
  }
}

The key part of this config is the proxy_pass config:

location /service1 { 
  proxy_pass http://springbootservice1:8080/service1/;
}

This takes incoming requests with a URI matching /service1 and proxies them to http://springbootservice1:8080/service1/ – in this case springbootservice1 is the hostname of the container running Service1, which is its default name from the docker-compose config we’ll cover next. Note that the leading and trailing ‘/’s are significant: they determine how nginx matches the incoming pattern and how the remainder of the URI gets appended to the target URI on the container service. Here’s the docker-compose.yml:

version: '3'
 
services:
     nginx-lb:
         build: ../nginx-loadbalancer
         #image: nginx-lb
         ports:
             - "80:80"
         links:
             - springbootservice1
             - springbootservice2
         depends_on:
             - springbootservice1
             - springbootservice2
     springbootservice1:
         image: springbootservice1
         ports:
             - "8080"
     springbootservice2:
         image: springbootservice2
         ports:
             - "8080"

To bring up all the containers in one go, ‘docker-compose up’ starts all 3 containers, and nginx then handles requests on port 80 and proxies to the correct container based on the path. This approach could easily be scaled up to include more containers, but with the configuration hardcoded in both the nginx.conf file and the docker-compose file, at some point a better solution would be some kind of dynamic discovery of the containers as they become available. I’ll be investigating options for that kind of approach next.
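With the stack up, both services should answer through nginx on port 80:

curl http://localhost/service1/example1
curl http://localhost/service2/example2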

Running Consul for service discovery in Docker containers

I’ve been looking at Spring Boot services in Docker and the options for service discovery. I started looking at Eureka a while back, and have now taken a look at Consul.

Here’s the docker-compose.yml that I’ve got working so far. I realize this is just Consul running in 3 containers (1 server, and 2 containers running Consul agents), but I was interested in taking a look at how the agent containers register with the main server and how it’s all configured.
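As a sketch of that kind of setup (one server and two agents, with one of the agents exposing the UI on port 8500), something like the following works; the image tag and agent flags are my assumptions rather than the exact original:

version: '3'
services:
  consul-server:
    image: consul
    command: agent -server -bootstrap-expect=1 -client=0.0.0.0

  consul-agent1:
    image: consul
    command: agent -retry-join=consul-server -client=0.0.0.0
    depends_on:
      - consul-server

  consul-agent2:
    image: consul
    command: agent -ui -retry-join=consul-server -client=0.0.0.0
    ports:
      - "8500:8500"
    depends_on:
      - consul-server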

One of the agent containers runs the UI on port 8500, which looks useful for getting an overview of what’s registered with the Consul server.

More to come later.