Planning for a homelab with an HP ProLiant DL380 G7

So I got caught up with the idea of running a rack mount server at home and setting up virtualization to run a ‘bunch-o-stuff’. As you do. Having never done anything with enterprise server hardware before (although I did recently set up Proxmox virtualization on my Mac Pro), naturally I have a ton of questions:

  • Should I run ESXi or Proxmox?
  • The DL380 has 8 hot-swappable 2.5″ drive bays. What’s the minimum number of disks needed to run, and do you need to configure a RAID array? Question here. A: Yes, you do need to configure a RAID array; apparently it can’t be disabled. So you need 2 disks at a minimum, in either RAID0 (striped) or RAID1 (mirrored); see the CLI sketch after this list. See also the HP Smart Array Controller docs here.
  • What about RAID10? As I understand it, that’s a combination of 0 + 1: drives are paired into mirrors and the mirrors are striped, so it needs at least 4 disks. See article here.
  • Can you run regular laptop 2.5″ disks, or do they have to be ‘midline’ or enterprise? This seems like a hotly debated question with many varying opinions. The answer is probably ‘it depends’, at least on what you’re planning to use your server for: e.g. is it going to be running 24/7, and are you going to have more than 2 drives (more drives cause vibrations that regular laptop drives may not be constructed to handle)? A consistent answer, if you’re not going to go with the HP branded/supported disks, is that WD Red drives, intended for use in NAS appliances, will work reliably in a rack server. Everything else is YMMV. I’m going with a pair of cheap HGST 500GB drives in RAID1 to get started, and might add a couple of WD Blacks or Reds for more storage later.
  • 10k and 15k SAS drives are new to me; I’m more familiar with SATA, so this might be something I’ll check out. 146GB seems to be a common capacity, but that’s rather small if I’m going to create a bunch of VMs. They’re pretty cheap at around $30 each though, so I could easily pick up a few for a RAID array.
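
Once there’s an OS on the box, the Smart Array controller can also be driven from HP’s CLI tool (hpacucli, renamed ssacli in later releases) rather than the F8 Option ROM at boot. A minimal sketch, assuming the onboard P410i controller in slot 0 and two drives in bays 1 and 2 (the drive IDs, slot number, and tool name are assumptions that vary by controller and OS):

# show the controller, unassigned drives, and any existing arrays
hpacucli ctrl slot=0 show config

# create a RAID1 logical drive mirroring the drives in bays 1 and 2 (assumed IDs)
hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1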

So many options 🙂


Downloading Proxmox Container images

Before you can create an LXC container on a Proxmox virtualized environment, you first need to download a template image from the list of those available; templates must be pre-downloaded before you can create new containers from them in the web UI.

From the docs here, the steps are (while ssh’d into your Proxmox server):

Update catalog of available templates:

pveam update

List the available templates:

root@pve:~# pveam available
system          alpine-3.3-default_20160427_amd64.tar.xz
system          alpine-3.4-default_20161206_amd64.tar.xz
system          alpine-3.5-default_20170504_amd64.tar.xz
system          archlinux-base_20170704-1_amd64.tar.gz
system          centos-6-default_20161207_amd64.tar.xz
system          centos-7-default_20170504_amd64.tar.xz
system          debian-6.0-standard_6.0-7_amd64.tar.gz
system          debian-7.0-standard_7.11-1_amd64.tar.gz
system          debian-8.0-standard_8.7-1_amd64.tar.gz
system          debian-9.0-standard_9.0-2_amd64.tar.gz
system          fedora-24-default_20161207_amd64.tar.xz
system          fedora-25-default_20170316_amd64.tar.xz
system          gentoo-current-default_20170503_amd64.tar.xz
system          opensuse-42.2-default_20170406_amd64.tar.xz
system          ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system          ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system          ubuntu-16.04-standard_16.04-1_amd64.tar.gz
system          ubuntu-16.10-standard_16.10-1_amd64.tar.gz
system          ubuntu-17.04-standard_17.04-1_amd64.tar.gz

For each of the templates you wish to use, download it; for example, for Ubuntu 14.04:

pveam download local ubuntu-14.04-standard_14.04-1_amd64.tar.gz

Now from the web UI, you should be able to click the ‘Create CT’ button and pick from your available templates:
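
Alternatively, a container can be created and started straight from the shell with pct. A minimal sketch, assuming VMID 100 is free and that you have the default local-lvm storage and vmbr0 bridge (the VMID, hostname, and storage names here are illustrative):

# confirm the template was downloaded to local storage
pveam list local

# create and start a container from the downloaded template
pct create 100 local:vztmpl/ubuntu-14.04-standard_14.04-1_amd64.tar.gz \
  --hostname test-ct --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --storage local-lvm
pct start 100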


Error starting OpenShift Origin on CentOS 7: systemd cgroup driver vs cgroupfs driver

Following the instructions to install the OpenShift Origin binary from here, on first attempt to start it up I got this error:

failed to run Kubelet: failed to create kubelet: 
misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Per instructions in this issue ticket, to verify which cgroup driver Docker is using, I ran:

$ sudo docker info |grep -i cgroup

Cgroup Driver: cgroupfs

Unfortunately the steps to check the cgroup driver for Kubernetes don’t match my install; I’m guessing that because the single-binary OpenShift Origin packages everything in one, there is no corresponding systemd config for it.

This article suggested configuring the cgroup driver for Docker so that it matches Kubernetes, but it looks like the yum install for docker-ce doesn’t set up a systemd config for it either.

OK, to the docs. Per the Docker docs for configuring systemd here, the suggestion is to pull two preconfigured unit files from a git repo and place them in /etc/systemd/system.
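
The docs point at example unit files in Docker’s source repo; something like the following, assuming docker.service and docker.socket under contrib/init/systemd in the moby repo (the exact URLs are an assumption and may have moved since):

# fetch the example systemd unit files into place (assumed URLs)
sudo curl -o /etc/systemd/system/docker.service https://raw.githubusercontent.com/moby/moby/master/contrib/init/systemd/docker.service
sudo curl -o /etc/systemd/system/docker.socket https://raw.githubusercontent.com/moby/moby/master/contrib/init/systemd/docker.socket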

Now that I have the systemd files for Docker in place, this article says to add this arg to the end of the ExecStart line in docker.service:

--exec-opt native.cgroupdriver=systemd
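
For reference, the resulting ExecStart line in docker.service should end up looking something like this (assuming the default dockerd path; your existing line may carry other flags):

# docker.service: ExecStart with the cgroup driver flag appended
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd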

Now reload my config and restart the docker service:

sudo systemctl daemon-reload
sudo systemctl restart docker

and let’s check again which cgroup driver we’re using:

$ sudo docker info |grep -i cgroup

Cgroup Driver: systemd

… and now we’ve switched to systemd.

OK, starting up OpenShift again, this issue is resolved and there’s a lot of log output as the server starts up. After opening up the firewall port for 8443, my OpenShift Console is now up!
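
For reference, on CentOS 7 the port can be opened with firewalld like this (assuming the default public zone):

# open 8443/tcp persistently and apply the change
sudo firewall-cmd --zone=public --permanent --add-port=8443/tcp
sudo firewall-cmd --reload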

Proxmox installation on a 2008 Mac Pro

Following on from my earlier article, I read some more about Proxmox running on a Mac Pro, so I decided to give it a go.

I added an empty drive into one of the spare bays, and then booted from the Proxmox installer.

After first boot, logging on to the web interface with the default root user:

The first VM I want to create is for CentOS, and I have the ISO ready to go on an attached USB drive, which I copied to the ISOs dir on Proxmox (/var/lib/vz/template/iso – defined storage locations for images are covered in answers to this post). The image now shows up on the local storage:
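
If the ISO were on another machine instead, scp to the same directory works just as well; a sketch with an illustrative file name:

# copy the installer ISO to Proxmox’s default ISO storage directory
scp CentOS-7-x86_64-Minimal-1708.iso root@pve:/var/lib/vz/template/iso/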

Creating a new VM based on this image:
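
The same VM can also be created from the shell with qm; a minimal sketch, assuming VMID 101 is free, local-lvm storage, and the illustrative ISO name from above:

# create a VM with 2GB RAM, a 32GB disk, and the CentOS ISO attached
qm create 101 --name centos7 --memory 2048 --ostype l26 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --cdrom local:iso/CentOS-7-x86_64-Minimal-1708.iso
qm start 101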

Starting up the image and starting the CentOS install using the web-based vnc access:

… after completing the install, success!