Docker-specific lightweight OSes: Installing RancherOS under KVM on Debian

I’ve been playing around with Docker for a while and feel like I’m at the point where I have questions about how I’d scale up and manage my container deployments, so I’m interested in checking out some Docker management tools. I’ve had my eye on Rancher for a while and have been curious about their RancherOS, a minimal installable OS dedicated to running and managing Docker containers. I don’t have a spare machine free to do a bare metal install, so I wondered what it would take to install it in KVM, and whether it would be usable.

The machine I’m installing on is an old HP desktop with a Phenom X4 processor and only 4GB RAM. The host is running Debian.

Using KVM, I created a new VM with 1 CPU and 1GB RAM, and booted it from the RancherOS ISO.
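For reference, creating an equivalent VM from the command line with virt-install would look something like this (the VM name, ISO path and disk size here are just examples; virt-manager works just as well):

virt-install --name rancheros --vcpus 1 --ram 1024 \
  --cdrom /path/to/rancheros.iso \
  --disk size=8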


Continuing with the instructions in the install guide here, I wondered how I would paste in my public ssh key from my host and dev machines while running this as a guest in KVM. The instructions require you to create a cloud-config.yml that includes your ssh public key. After RancherOS is booted, you can use vi to create the file, but pasting into the guest with no guest extensions installed isn’t possible. You can ssh from the VM out to your host where your key is located, but going in that direction is not much help. What you really need to do is ssh into the guest VM from the host machine, and then you can easily create the cloud-config.yml and paste in your keys.
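The cloud-config.yml itself is tiny; per the install guide it is essentially just the list of authorized keys, something like this (with your own public key in place of the truncated placeholder):

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@host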

The trouble is, the whole point of installing with a config file that includes your keys is to enable ssh access to your RancherOS install, so this is a bit of a chicken-and-egg situation. You can’t ssh in remotely with a password because there’s no default password for the rancher user, and you can’t ssh in with a key because you haven’t copied it across to the RancherOS install yet.

Searching around, I’m clearly not the only one installing in a VM who has run into this issue. The trick, as suggested here, is to reset the rancher user password on first boot (before starting the install), so at least you know what the initial password is (apparently there isn’t a default password, for security reasons, see here). Look up the IP with ifconfig on first boot, reset the password, then you can ssh in from outside, create the cloud-config.yml file, paste in your key(s), and then install per the instructions with:

sudo ros install -c cloud-config.yml -d /dev/sda
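Pieced together, the first-boot steps leading up to that install went something like this (the password-reset approach is the one from the thread linked above; the IP address is whatever ifconfig reports, and the password is just one you choose for the session):

# on the RancherOS console in the KVM guest
sudo passwd rancher        # set a known password for the rancher user
ifconfig                   # note the guest's IP address

# from the host machine
ssh rancher@<guest-ip>     # log in with the password just set
vi cloud-config.yml        # create the config and paste in your public key(s)
sudo ros install -c cloud-config.yml -d /dev/sda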

After the install had completed, I was able to ssh in from outside with the key I had already added to cloud-config.yml, and then, following the next section in the docs, listed the available services, all of which showed as disabled.
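The listing itself comes from the ros CLI, something like:

sudo ros service list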

Per the docs here, I attempted to start the rancher service with:

sudo ros service enable rancher-service

and since everything that runs on RancherOS is a Docker container, it starts downloading the image layers.

And then I started it up with:

sudo ros service up rancher-service

At this point I was expecting to find Rancher running on port 8080, but it was still not up. The docs seem a bit lacking in this area, and googling ‘how to run rancher on rancheros’ gave a few suggestions, mainly echoing what I’d already done. Running ‘docker ps’ in the guest VM showed me that the container was up and running and listening on 8080, so I tried again in my browser, and it seems it just took a few seconds to get started up. I now have Rancher running on RancherOS under KVM! Now to start checking it out!

 

Running Consul for service discovery in Docker containers

I’ve been looking at Spring Boot services in Docker and options for service discovery. I started looking at Eureka a while back, but have just taken a look at Consul.

Here’s my docker-compose.yml that I’ve got working so far. I realize this is just Consul running in 3 containers (1 server, and 2 containers with Consul agents), but I was interested in taking a look at how the containers register with the main server and how it’s configured.

One of the agent containers is running the UI on port 8500, which looks useful for getting an overview of what’s registered with the Consul server.
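A minimal sketch of that kind of layout (not the exact file, and assuming the official consul image, with the service names and flags chosen just for illustration) would be something like:

version: '2'

services:
  consul-server:
    image: consul
    # single server, bootstraps itself
    command: agent -server -bootstrap-expect=1 -client=0.0.0.0

  consul-agent:
    image: consul
    # plain agent that joins the server on startup
    command: agent -retry-join=consul-server -client=0.0.0.0
    depends_on:
      - consul-server

  consul-agent-ui:
    image: consul
    # agent that also serves the web UI
    command: agent -ui -retry-join=consul-server -client=0.0.0.0
    ports:
      - "8500:8500"
    depends_on:
      - consul-server

After a docker-compose up, the UI should be reachable at http://localhost:8500/ui and should show all three nodes once the agents have joined.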

More to come later.

Docker usage notes – continued (2)

A few rough usage notes, continuing from my first post a while back.

Delete all containers:
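The usual one-liner for this (it removes stopped containers; add -f to also force-remove running ones):

docker rm $(docker ps -a -q)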

Run a container from an image and delete it when it exits: use --rm:

docker run -it --rm imagename

 

Pass an argument into a build:

  • use ‘ARG argname’ to declare the argument in your Dockerfile
  • pass a value for argname with --build-arg argname=value

Test during build if an arg was passed:

RUN test -n "$argname"
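Putting those together, a minimal (hypothetical) Dockerfile would be something like the following; if the build is run without --build-arg, the test fails and the build stops:

FROM alpine
ARG argname
# fail the build if argname wasn't supplied
RUN test -n "$argname"

built with:

docker build --build-arg argname=somevalue .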

 

Getting past Vagrant’s “Authentication failure” error when starting up OpenShift Origin

For getting up and running quickly with OpenShift Origin, Red Hat has an all-in-one VM image you can provision with Vagrant. The instructions mention not to use Vagrant 1.8.5 as there’s an issue with the SSH setup; since I already had 1.8.5 installed for some other projects, I tried anyway, and ran into issues SSH’ing into the VM with SSH keys.

When provisioning the VM, you’ll see:

 

Kevins-MacBook-Pro:openshift-origin kev$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'openshift/origin-all-in-one' is up to date...
==> default: Resuming suspended VM...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...

There are a number of posts discussing this issue and a few workarounds, for example here and here.

The suggestions relate to switching from ssh key authentication to userid/password by adding this to your Vagrantfile:

config.ssh.username = "vagrant"
config.ssh.password = "vagrant"

I tried this, but when running vagrant up I got different errors about “SSH authentication failed”. Next I tried adding this recommendation:

 

config.ssh.insert_key = false

This didn’t make any difference initially. After a vagrant destroy, bringing it up again initially hit the same issue; I Ctrl-C’d out, tried again, and it worked the second time. I’m not sure exactly which step got past the ssh keys issue, but at this point I was up and running. There’s a long discussion in both of the linked threads above describing the cause of the issue, so if you’re interested take a look through those.
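For reference, all of these ssh settings sit inside the Vagrant.configure block of the project’s Vagrantfile; with everything else omitted, the relevant part ends up looking roughly like this:

Vagrant.configure("2") do |config|
  config.vm.box = "openshift/origin-all-in-one"

  # switch from the generated ssh key to userid/password
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"

  # don't replace the box's insecure default key on first boot
  config.ssh.insert_key = false
end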