Accessing the RAID setup on an HP Proliant DL380 G7

When the HP Proliant DL380 G7 boots up, the only BIOS options displayed are F9 for Setup and F11 for the boot disk menu, but neither of these takes you to the RAID setup. To get to the RAID setup options, press F8 every second or so while the screen showing the F9 and F11 options is displayed, and you'll first get the iLO configuration. Exit iLO, and next you'll get the RAID configuration options. I found this tip mentioned in this post on the HP forums.

My DL380 G7 has the HP Smart Array P410i RAID controller. Here's a step-by-step guide to getting to the RAID settings:

First, from the iLO settings, choose Exit from the File menu:

After exiting iLO you get the RAID controller options – press F8 for the Arrays Utility:

Now in RAID settings, create your Logical Drive from your available physical drives:

Here I have added two 500GB drives to a RAID 1+0 array:

All disks are not equal, especially cheap consumer disks without temperature sensors (a.k.a. HP DL380 G7 fan speeds running like a 747 on takeoff)

I've recently been enjoying the freedom of a homelab, creating VMs on VMware ESXi and installing all-the-things. I've had my HP Proliant DL380 G7 for a couple of weeks now and have already accumulated a number of VMs for investigating a collection of things.

Prior to my DL380 arriving, I was pondering what type of disks to put in, and in particular whether to go with cheap consumer laptop hard disks, more expensive NAS disks (e.g. WD Red), or name-brand HP disks. I went with a pair of HGST 500GB disks to start with, and ran into an issue with the cooling fans spinning up like a 747 taking off. Googling for “dl380 disk fans” turns up many related posts, and it turns out that some non-HP drives in Proliant servers may not report their internal temperature correctly, resulting in the server thinking the drives are overheating and cranking up the fans to compensate.

Here are a couple of screenshots (from the iLO – Integrated Lights Out) showing the fans ramping up to near-unbearable noise levels over about 5 minutes:

  • iLO reporting the drives overheating:

  • Only a couple of minutes in, but the fans are already running faster than they probably should be at idle:

  • Starting to ramp up:
  • Getting unbearable now:

If I'd spent some more time reading around I would have found this excellent article detailing the issue, and more specifically the drives known to work in the DL380 and the drives known to have this problem. It turns out most of the WD disks do work, so I replaced the HGST drives with two WD Black 750GB drives. Now the server at idle runs with the fans between 10% and 13% and is actually no louder than a regular desktop.

Back to creating some more VMs 🙂

Deploying the Kubernetes Dashboard to a kubeadm-created cluster

Per the installation steps here, deploy the dashboard with:

kubectl --kubeconfig ./admin.conf apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Then start the local proxy:

./kubectl --kubeconfig ./admin.conf proxy

Accessing it via http://localhost:8001/ui gives this error:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
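
That 503 means the kubernetes-dashboard Service has no ready pods behind it. A quick way to confirm (using the same admin.conf kubeconfig) is to look at the Service's endpoints; if the ENDPOINTS column is empty, the dashboard pod itself isn't up:

./kubectl --kubeconfig ./admin.conf -n kube-system get endpoints kubernetes-dashboard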

Checking what’s currently running with:

./kubectl --kubeconfig ./admin.conf get pods --all-namespaces

Looks like the dashboard app is not happy:

kube-system   kubernetes-dashboard-747c4f7cf-p8blw          0/1       CrashLoopBackOff   22         1h

Checking the logs for the dashboard:

./kubectl --kubeconfig ./admin.conf logs kubernetes-dashboard-747c4f7cf-p8blw --namespace=kube-system

2017/10/19 03:35:51 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: getsockopt: no route to host

OK.

I set up my master node using the flannel overlay. I don't know whether that makes any difference, but I noticed this article on kubeadm used Weave Net instead. Not knowing how to move forward (and after browsing many posts and tickets on issues with kubeadm and the Dashboard), and knowing at least that kubeadm + Weave Net works for installing the Dashboard, I tried that approach instead.
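
For reference, the re-initialization was roughly the following. This is only a rough sketch, assuming a single master plus workers and the Weave Net install command from the Weave docs at the time – adjust for your own versions:

# on the master: wipe the flannel-based install and re-initialize
sudo kubeadm reset
sudo kubeadm init
export KUBECONFIG=/etc/kubernetes/admin.conf

# install the Weave Net pod network add-on
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# then re-run the 'kubeadm join ...' command printed by kubeadm init on each worker node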

After re-initializing and then adding Weave Net, my pods are all started:

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-unknown000c2960f639                      1/1       Running   0          11m
kube-system   kube-apiserver-unknown000c2960f639            1/1       Running   0          11m
kube-system   kube-controller-manager-unknown000c2960f639   1/1       Running   0          11m
kube-system   kube-dns-545bc4bfd4-nhrw7                     3/3       Running   0          12m
kube-system   kube-proxy-cgn45                              1/1       Running   0          4m
kube-system   kube-proxy-dh6jm                              1/1       Running   0          12m
kube-system   kube-proxy-spxm5                              1/1       Running   0          5m
kube-system   kube-scheduler-unknown000c2960f639            1/1       Running   0          11m
kube-system   weave-net-gs8nh                               2/2       Running   0          5m
kube-system   weave-net-jkkql                               2/2       Running   0          4m
kube-system   weave-net-xb4hx                               2/2       Running   0          10m

Trying to add the dashboard once more:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
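
To watch it come up rather than repeatedly polling get pods, something like this should also work (assuming the Deployment in that manifest is named kubernetes-dashboard):

kubectl -n kube-system rollout status deployment/kubernetes-dashboard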

… and, o. m. g. :

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-unknown000c2960f639                      1/1       Running   0          37m
kube-system   kube-apiserver-unknown000c2960f639            1/1       Running   0          37m
kube-system   kube-controller-manager-unknown000c2960f639   1/1       Running   0          37m
kube-system   kube-dns-545bc4bfd4-nhrw7                     3/3       Running   0          38m
kube-system   kube-proxy-cgn45                              1/1       Running   0          30m
kube-system   kube-proxy-dh6jm                              1/1       Running   0          38m
kube-system   kube-proxy-spxm5                              1/1       Running   0          31m
kube-system   kube-scheduler-unknown000c2960f639            1/1       Running   0          37m
kube-system   kubernetes-dashboard-747c4f7cf-jgmgt          1/1       Running   0          4m
kube-system   weave-net-gs8nh                               2/2       Running   0          31m
kube-system   weave-net-jkkql                               2/2       Running   0          30m
kube-system   weave-net-xb4hx                               2/2       Running   0          36m

Starting kubectl proxy and hitting localhost:8001/ui now gives me:

Error: 'malformed HTTP response "\x15\x03\x01\x00\x02\x02"'
Trying to reach: 'http://10.32.0.3:8443/'

The \x15\x03\x01 bytes look like the start of a TLS response, which suggests the proxy is talking plain HTTP to the dashboard's HTTPS port. Reading here, I tried the master node directly:

https://192.168.1.80:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

gives a different error:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}

… but reading further ahead, it seems accessing via the /ui URL doesn't work correctly; you need to use the URL given in the docs here, which say the correct URL is:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

and now I get an authentication page:

Time to read ahead on the authentication approaches.

List available tokens with:

kubectl -n kube-system get secret

Using the same token as per the docs (although at this point I’ve honestly no idea what the difference in permissions is for each of the different tokens):

./kubectl --kubeconfig admin.conf -n kube-system describe secret replicaset-controller-token-7tzd5

And then pasting the token value into the authentication dialog gets me logged on! There are some errors about this token not having access to some features, but at this point I'm just glad I've managed to get this deployed and working!
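
For what it's worth, a common alternative to borrowing an existing controller token is to create a dedicated service account for the dashboard and grant it a role explicitly. Here's a minimal sketch for a homelab, assuming you're happy to grant it full cluster-admin (the dashboard-admin name is just an example):

kubectl --kubeconfig admin.conf -n kube-system create serviceaccount dashboard-admin
kubectl --kubeconfig admin.conf create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# find the auto-created token secret (named dashboard-admin-token-xxxxx), describe it,
# and paste that token value into the login dialog instead
kubectl --kubeconfig admin.conf -n kube-system get secret | grep dashboard-admin-token
kubectl --kubeconfig admin.conf -n kube-system describe secret <dashboard-admin-token-xxxxx>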

If you're interested in the specific versions I'm using, this is deployed to CentOS 7, and the Kubernetes version is:

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Detecting attached monitors on macOS

If you boot macOS with a KVM attached but the second monitor is not actually connected at boot time, the second monitor is not automatically detected (on my 2008 Mac Pro at least; it works fine if the KVM is already switched to the Mac at boot).

To force macOS to detect additional monitors, hold the Option key in System Preferences / Displays, and then click the 'Detect Displays' button that appears.

See article here.