Error starting OpenShift Origin on CentOS 7: systemd cgroup driver vs cgroupfs driver

Following the instructions to install the OpenShift Origin binary from here, on first attempt to start it up I got this error:

failed to run Kubelet: failed to create kubelet: 
misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Per the instructions in this issue ticket, to verify which cgroup driver Docker is using I ran:

$ sudo docker info |grep -i cgroup

Cgroup Driver: cgroupfs

Unfortunately the steps to check the cgroup driver for Kubernetes don't match my install; I'm guessing the single-binary OpenShift Origin packages everything in one, so there is no corresponding systemd config for it.

This article suggested configuring the cgroup driver for Docker so it matches Kubernetes, but it looks like the yum install for docker-ce doesn't configure systemd for it either.

Ok, to the docs. The Docker docs for configuring systemd here suggest pulling the preconfigured unit files from a git repo and placing them in /etc/systemd/system.

With the systemd files for Docker in place, this article says to add this arg to the end of the ExecStart line in docker.service:

--exec-opt native.cgroupdriver=systemd
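For illustration, after adding that flag the ExecStart line in /etc/systemd/system/docker.service ends up looking something like the sketch below. The exact dockerd path and any other flags depend on the unit file you pulled, so treat this as an example rather than the literal file:

```ini
# /etc/systemd/system/docker.service (excerpt, illustrative only)
[Service]
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
```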

Now reload my config and restart the docker service:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

and let's check again which cgroup driver we're using:

$ sudo docker info |grep -i cgroup

Cgroup Driver: systemd

… and now we’ve switched to systemd.
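As an aside, on newer Docker versions the same change can be made without touching the unit file, by setting the driver in /etc/docker/daemon.json. I went the ExecStart route above, so this is an untested alternative:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After creating or editing that file, the same daemon-reload/restart steps apply.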

Ok, starting up OpenShift again, this issue is resolved, and there's a lot of log output as the server starts up. After opening up the firewall port for 8443, my OpenShift Console is now up!
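For reference, on CentOS 7 with firewalld the console port can be opened like this (this assumes the default zone; adjust if you've customized your firewall):

```shell
# open the OpenShift console port permanently, then reload firewalld
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload
```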

Proxmox installation on a 2008 Mac Pro

Following on from my earlier article, I read some more about Proxmox running on a Mac Pro, so I decided to give it a go.

I added an empty drive into one of the spare bays, and then booted from the Proxmox installer.


After first boot, I logged on to the web interface with the default root user:

The first VM I want to create is for CentOS. I have the ISO ready to go on an attached USB drive, and I copied it to the ISO dir on Proxmox (/var/lib/vz/template/iso – defined storage locations for images are covered in answers to this post). The image now shows up on the local storage:
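Copying the ISO over can be done straight from the Proxmox host's shell; the USB mount point and ISO filename below are hypothetical, so substitute your own:

```shell
# copy the installer ISO into Proxmox's default ISO storage location
cp /mnt/usb/CentOS-7-x86_64-Minimal.iso /var/lib/vz/template/iso/
```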

Creating a new VM based on this image:
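The same VM creation can also be scripted from the Proxmox host using the qm tool instead of the web UI; the VM id, ISO name, and sizes below are made-up examples:

```shell
# create VM 100 with 2GB RAM, a 32GB disk on local-lvm, and the CentOS ISO attached
qm create 100 --name centos7 --memory 2048 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/CentOS-7-x86_64-Minimal.iso,media=cdrom \
  --scsi0 local-lvm:32 --ostype l26
```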

Starting up the image and starting the CentOS install using the web-based vnc access:

… after completing the install, success!

Virtualization, homelabs, eBay rack servers and a 2008 Mac Pro

I'm fascinated with installing different OSes to see what they're about. At one point I had about 20 different VMs on VirtualBox, with all sorts of guests from OS/2 to many different Linux distros. Somewhere on my internet travels I ran into the Reddit Homelab group, a community of sysadmins who run virtualization on older, used rack servers (and other hardware) to experiment with VMware ESXi and other virtualization software like Proxmox VE.

Window shopping on eBay, you can pick up various used Dell or HP rack servers with dual Xeons and several swappable hard drive bays for around $100 to $200, depending on the specs. I was getting close to picking up one of these, until I wondered whether you can run ESXi on a Mac Pro. Turns out you can, and it is even supported hardware on VMware's HCL list. Trouble is, my eBay 2008 Mac Pro is not on the supported list for current ESXi versions, so I'm not sure whether a current version would install and work ok, or whether I'd have to go back a few versions.

My Mac Pro currently has 20GB RAM and 3 empty drive bays. Watching a few YouTube videos such as the ones below, I feel a weekend project coming on 🙂