Building and deploying a Monero cryptocurrency miner in a Docker container … running on a Kubernetes cluster

Updated: 1/30/18: Thanks to Max for the comment asking how your wallet id is passed to the miner – the Kubernetes deploy yml file example was cut off at the end and was missing the args. I’ve updated the example to show the correct args, including your wallet address.

Disclaimer: I don’t claim to be an expert in cryptocurrency and/or mining; my interest is purely curiosity about the technology. Please don’t interpret anything here as an endorsement or a recommendation. Is it profitable to mine any currency with a spare PC? Probably not. Are some currencies profitable to mine? Possibly, with some investment in appropriate hardware. Please do your own research before you make your own decisions.

Knowing that some currencies like Monero can be mined with CPU-based mining scripts alone, I wondered what it would look like to package a miner as a Docker image and then run it at scale on a Kubernetes cluster. As you do, right?

First, I followed a Monero getting started guide to pull the source and build a suggested miner, then captured the steps to build the miner as a Dockerfile like this:

FROM ubuntu:17.10

#build steps from https://www.monero.how/tutorial-how-to-mine-monero
#install git, compilers, and the libs cpuminer-multi needs
RUN apt-get update && apt-get install -y git libcurl4-openssl-dev \
 build-essential libjansson-dev autotools-dev automake
#pull and build the cpuminer-multi source
RUN git clone https://github.com/hyc/cpuminer-multi
RUN cd /cpuminer-multi && ./autogen.sh && ./configure && make
WORKDIR /cpuminer-multi
#pool and wallet args are passed to minerd at runtime
ENTRYPOINT ["./minerd"]

This Dockerfile contains the steps you’d follow to pull the source and build locally, but written to build a Docker image.

Next, build and tag the image with the IP of your local Docker registry, ready for deploying to your Kubernetes cluster:

Build the image:

docker build -t monero-cpuminer .
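Before pushing, you can sanity-check the image locally; minerd’s -o, -u and -t flags set the pool URL, the wallet/username, and the number of CPU threads (the wallet id below is a placeholder):

docker run --rm monero-cpuminer -o stratum+tcp://monerohash.com:3333 -u your-wallet-id -t 2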

Tag and push the image (192.168.1.80:5000 here is my local Docker registry):

docker tag monero-cpuminer 192.168.1.80:5000/monero-cpuminer
docker push 192.168.1.80:5000/monero-cpuminer
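One gotcha worth mentioning: if your local registry isn’t secured with TLS, each Docker daemon that pushes to or pulls from it (including the ones on your Kubernetes nodes) may need it declared as an insecure registry in /etc/docker/daemon.json, followed by a Docker restart:

{
  "insecure-registries": ["192.168.1.80:5000"]
}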

Before we start the deployment to Kubernetes, let’s check that kubectl on my dev laptop can reach the Kubernetes cluster on my rack server:

kubectl get nodes --kubeconfig ~/kubernetes/admin.conf
NAME                  STATUS    ROLES     AGE       VERSION
unknown000c2960f639   Ready     master    50d       v1.8.1
unknown000c297262c7   Ready     <none>    50d       v1.8.1
unknown000c29ab1af7   Ready     <none>    50d       v1.8.1

Nodes are up and ready to deploy.

Following an example .yml deployment file, here’s my Kubernetes deployment file:

apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: monero-cpuminer-deployment
  labels:
    app: monero-cpuminer-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: monero-cpuminer-deployment
  template:
    metadata:
      labels:
        app: monero-cpuminer-deployment
    spec:
      containers:
      - name: monero-cpuminer-deployment
        image: 192.168.1.80:5000/monero-cpuminer
        args: [ "-o", "stratum+tcp://monerohash.com:3333", "-u", "your-wallet-id" ]

The args passed to the container are:

args: [ "-o", "stratum+tcp://monerohash.com:3333", "-u", "your-wallet-id" ]

These tell minerd which pool to connect to (-o) and which wallet address to credit (-u).

I’m using the monerohash.com mining pool – you can check out their settings here.
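One optional tweak: since the miner will happily eat all available CPU, you could cap each pod by adding resource requests and limits to the container spec above, something along these lines (values here are just an illustration):

        resources:
          requests:
            cpu: "1"
          limits:
            cpu: "2"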

Now let’s deploy with:

kubectl apply -f cpuminer-deployment.yml --kubeconfig ~/kubernetes/admin.conf

Listing the pods, we can now see the two we requested starting up:

kubectl get pods --kubeconfig ~/kubernetes/admin.conf 

And we can check the status and other info about the deployment config with:

kubectl describe deployments monero-cpuminer-deployment --kubeconfig ~/kubernetes/admin.conf

This shows my required replicas available:

Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable

Now let’s scale it up to 4 replicas:

$ kubectl scale --replicas=4 deployment/monero-cpuminer-deployment --kubeconfig ~/kubernetes/admin.conf

deployment "monero-cpuminer-deployment" scaled

Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
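Instead of scaling by hand, you could also let Kubernetes scale on CPU load with a Horizontal Pod Autoscaler (this assumes cluster metrics are available, e.g. via Heapster or metrics-server):

kubectl autoscale deployment/monero-cpuminer-deployment --min=2 --max=8 --cpu-percent=80 --kubeconfig ~/kubernetes/admin.conf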

Scaling up from 2 pods to 4, then 8, we’re at about 75% of available CPU on my 2x Xeon HP DL380 rack server:

Fan speeds have ramped up from idle, but the server is still running comfortably:

Hash rate so far:

So is it possible to run a Monero miner in Docker containers? Sure! Can you deploy it to a Kubernetes cluster and scale it up? Sure! Is it worthwhile? Probably not, and probably not profitable, unless you’ve got some spare, low-power hardware handy, or something custom built to provide a cost-effective hash rate given your power consumption and local utility rates. Still, this was personally an interesting exercise in building a Monero miner from source, packaging it as a Docker image, and deploying it to Kubernetes.

Leave me a comment if you’ve done something similar: what hash rates did you get?

Kubernetes node join command / token expired – generating a new token/hash for node join

After running a ‘kubeadm init’ on the main node, it shows you the node join command, which includes a token and a hash. It appears these values only stay valid for 24 hours, so if you try to use them after that, the ‘kubeadm join’ command will fail with something like:

[discovery] Failed to connect to API Server "192.168.1.67:6443": there is no JWS signed token in the cluster-info ConfigMap. This token id "78a69b" is invalid for this cluster, can't connect

To create a new join string, from the master node run:

kubeadm token create --print-join-command
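The output is a complete join command ready to run on a new node, something like this (the token and hash values here are placeholders):

kubeadm join 192.168.1.67:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>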

Running the new join command string on your new nodes will now allow them to join the cluster.

This is described in the docs here.

Revisiting Docker and Kubernetes installation on CentOS7 (take 3)

I tried a while back to get a Kubernetes cluster up and running on CentOS 7, and captured my experience in this post here. At one point I did have it up and running, but after a reboot of my master and worker nodes I ran into an issue with some of the pods not starting, and decided to shelve the project for a while to work on something else.

Based on tips from a colleague who had recently worked through a similar setup, the main difference in his approach compared to my steps was that he didn’t do a vanilla install of Docker with ‘sudo yum install docker’, but instead installed Docker CE from Docker’s own repo for CentOS.

Retracing my prior steps, the Kubernetes install instructions here tell you to do a ‘sudo yum install docker’, but the steps on the Docker site for CentOS here walk you through installing from Docker’s own repo. I followed those steps on a clean CentOS 7 install, and then continued with the Kubernetes setup.
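For reference, the Docker CE install from that repo boils down to something like the following (check the Docker docs for the current steps):

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce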

Started the Docker service with:

sudo systemctl start docker.service

Next, since this is just a homelab setup, instead of opening the required ports I simply disabled the firewall (following instructions from here):

sudo systemctl disable firewalld

And then stopped the currently running service:

sudo systemctl stop firewalld
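(If you’d rather leave firewalld running, the Kubernetes install docs list the ports to open instead; on the master that’s along these lines, plus 10250 and the NodePort range 30000-32767 on the workers:)

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --reload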

Next, picking up with the Kubernetes instructions to install kubelet, kubeadm, etc.

Disable SELinux:

sudo setenforce 0

From my previous steps, edit /etc/selinux/config and set:

SELINUX=disabled

CentOS 7 specific config for iptables (having disabled the firewall this might not be relevant, but adding it anyway):

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Disable swap:

swapoff -a

Also, edit /etc/fstab, remove or comment out the swap line, and reboot.
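If you’d rather script that edit, something like this comments out any swap entry (assuming the fstab line contains ‘ swap ’):

sudo sed -i '/ swap / s/^/#/' /etc/fstab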

Next, following the install instructions, add the Kubernetes repo file.
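At the time of writing, the docs suggest a repo definition along these lines (run as root; check the current install docs in case it has changed):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Then install with: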

sudo yum install -y kubelet kubeadm kubectl

Enabling the services:

sudo systemctl enable kubelet && sudo systemctl start kubelet

Starting the node init:

sudo kubeadm init

And realized we hadn’t addressed the cgroups config issue to get kubelet and Docker using the same driver:

Dec 17 18:17:08 unknown000C2954104F kubelet[16450]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

I have a post on addressing this from when I installed OpenShift Origin; follow the same steps here to reconfigure.

Kubeadm init, add the networking overlay (I installed Weave), and I think we’re up.
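For reference, installing the Weave overlay at the time was a one-liner along these lines (check the Weave docs for the current command):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Checking the nodes: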

[kev@unknown000C2954104F /]$ kubectl get nodes
NAME                  STATUS    ROLES     AGE       VERSION
unknown000c2954104f   Ready     master    31m       v1.9.0

Checking the pods though, the DNS pod was stuck restarting and not coming up clean. I found this ticket for exactly the issue I was seeing; the resolution was to switch both Docker and Kubernetes back to cgroupfs.

I did this by backing out the addition previously made to /usr/lib/systemd/system/docker.service, and then adding a new file, /etc/docker/daemon.json, and pasting in:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

Next, edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and replace:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

with

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

Restart Docker, and restart kubelet so it picks up the cgroup driver change from the edited drop-in:

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet

Check we’re back on cgroupfs:

sudo docker info | grep -i cgroup
Cgroup Driver: cgroupfs

And now check the nodes:

$ kubectl get nodes
NAME                  STATUS    ROLES     AGE       VERSION
unknown000c2954104f   Ready     master    1d        v1.9.0
unknown000c29c2b640   Ready     <none>    1d        v1.9.0

And the pods:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-unknown000c2954104f                      1/1       Running   3          1d
kube-system   kube-apiserver-unknown000c2954104f            1/1       Running   4          1d
kube-system   kube-controller-manager-unknown000c2954104f   1/1       Running   3          1d
kube-system   kube-dns-6f4fd4bdf-jtk82                      3/3       Running   123        1d
kube-system   kube-proxy-9b6tr                              1/1       Running   3          1d
kube-system   kube-proxy-n5tkx                              1/1       Running   1          1d
kube-system   kube-scheduler-unknown000c2954104f            1/1       Running   3          1d
kube-system   weave-net-f29k9                               2/2       Running   9          1d
kube-system   weave-net-lljgc

Now we’re looking good! Next up, let’s deploy something and check it all works.

Allowing user on CentOS to run docker command without sudo

Out of the box for a Docker install on CentOS 7, you have to sudo the docker command to interact with Docker. Per the post-install steps here, create a docker group and add your user to that group:

sudo groupadd docker

sudo usermod -aG docker youruser

Logging off and back on again, you should now be able to run the docker command without sudo.
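Alternatively, to pick up the new group membership in your current terminal without logging out, you can start a subshell with:

newgrp docker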

On CentOS 7 this still didn’t work for me. Following this post, it appeared that docker.sock was owned by root and in the group root:

$ ls -l /var/run/docker.sock
srw-rw---- 1 root root 0 Oct 21 15:42 /var/run/docker.sock

Changing the group ownership:

$ sudo chown root:docker /var/run/docker.sock
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Oct 21 15:42 /var/run/docker.sock

After logging back on, I can now run docker commands without sudo.