Using Netflix Eureka with Spring Cloud / Spring Boot microservices (part 2)

Several months back I started to look at setting up a simple example app with Spring Boot microservices using Netflix Eureka as a service registry. I got distracted by other shiny things for a few months, but just went back to finish this off.

The example app comprises 3 Spring Boot apps:

  • SpringCloudEureka: runs the Eureka server, enabled with @EnableEurekaServer (see the sketch after this list)
  • SpringBootService1 with endpoint POST /service1/example1/address
    • registers with Eureka server with @EnableDiscoveryClient
    • uses a Ribbon load-balancer-aware RestTemplate to call Service2 to validate a zipcode
  • SpringBootService2 provides endpoint GET /service2/zip/{zipcode} which is called by Service1
    • also registers with Eureka server with @EnableDiscoveryClient so it can be looked up by Service1
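
As a minimal sketch of the server side (the class name is illustrative, not the actual code from the example repo), the Eureka server is just a Spring Boot app with one extra annotation:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Runs the Eureka registry; the two services below register against this app
@SpringBootApplication
@EnableEurekaServer
public class SpringCloudEurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudEurekaApplication.class, args);
    }
}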

SpringBootService1 and SpringBootService2 both register with the Eureka server using the annotation @EnableDiscoveryClient. Using a @LoadBalanced RestTemplate (Ribbon under the covers), SpringBootService1 is able to call SpringBootService2 using a regular Spring RestTemplate, but one that is Eureka aware and able to look up SpringBootService2 by service name in place of an absolute ip and port.

This allows the services to be truly decoupled. Service1 knows it needs to call Service2 for some purpose (in this case to validate a zip code), but it doesn’t need to know where Service2 is deployed or what ip address/port it is available on.
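
Here’s a minimal sketch of the Service1 side (the controller, the Address class, and the service name "springbootservice2" are illustrative, not taken from the example repo) showing a @LoadBalanced RestTemplate resolving Service2 by its Eureka service name:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Registers with Eureka and exposes POST /service1/example1/address
@SpringBootApplication
@EnableDiscoveryClient
public class SpringBootService1Application {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootService1Application.class, args);
    }

    @Bean
    @LoadBalanced // makes this RestTemplate resolve Eureka service names via Ribbon
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@RestController
class AddressController {

    private final RestTemplate restTemplate;

    AddressController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @PostMapping("/service1/example1/address")
    public String validateAddress(@RequestBody Address address) {
        // "springbootservice2" is the Eureka service name, not a host:port
        return restTemplate.getForObject(
                "http://springbootservice2/service2/zip/{zip}",
                String.class, address.getZip());
    }
}

// Minimal request payload for the example
class Address {
    private String zip;
    public String getZip() { return zip; }
    public void setZip(String zip) { this.zip = zip; }
}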

Example code is available on github here.

Configuring AWS S3 Storage Gateway on VMware ESXi for uploading files to S3 via local NFS mount

AWS provides a VM image that you can run locally to expose an NFS mount; files copied to the mount are transparently transferred to an S3 bucket.

To setup, start by creating a Storage Gateway from the AWS Management Console:

Select ‘VMware ESXi’ host platform:

Unzip the downloaded .zip file.

From your ESXi console, create a new VM and select the ‘Deploy VM from OVF / OVA file’ option:

Click the ‘select file’ area and point to the unzipped .ova file on your local machine:

Per Storage Gateway instructions, select Thick provisioned disk:

Press Finish on the next Summary screen, and wait for the VM to be created.

Back in your AWS Management Console, click Next through the remaining setup pages until you’re asked to enter the IP of your Storage Gateway. The instructions say the VM does not need to be accessible from the internet, and the instructions here walk you through logging on to the VM with the default credentials to get your IP (assuming it’s not already displayed on your ESXi console for the running VM):

Powering on the VM, I get a logon prompt:

After logging on with the default credentials, the next screen gives me the assigned IP:

I entered the IP in the Management Console, set the timezone to match my local timezone, and then pressed Activate to continue:

At this point, AWS is telling me I didn’t create any local disks attached to my VM, which is true; I missed that step:

According to the docs, you need to attach one disk for the upload buffer and one for cache storage (files pending upload). After powering down my VM, I created 2 new disks, 2GB each (since this is just for testing):

After pressing the refresh icon, the disks are now detected (interesting that AWS is able to communicate with my VM), and the console tells me the cache disk needs to be at least 150GB:

I powered down the VM again and increased one of the disks to 150GB, but thin provisioned (I don’t have enough spare disk on my server for 150GB thick provisioned):

After powering back on and pressing refresh in the AWS Console:

Ok, maybe it needs to be thick after all. Powering off and provisioning as thick:

I allocated the 150GB drive as the cache, and left the other drive unallocated for now. Next, on to creating a file share:

The file share needs to point to an existing S3 bucket, so make sure you have one created; if not, open the S3 Console in another tab, create one, and then enter its name here:

By default, any client that’s able to mount the share on my VM is allowed to upload to this bucket. This can be configured by pressing edit, but I’ll leave it as the default for now. Press the Create File Share button to complete, and we’re done!


Next following the instructions here, let’s mount the Storage Gateway file share and test uploading a file:

For Linux:

sudo mount -t nfs -o nolock [Your gateway VM IP address]:/[S3 bucket name] [mount path on your client]

For MacOS:

sudo mount_nfs -o vers=3,nolock -v [Your gateway VM IP address]:/[S3 bucket name] [mount path on your client]

Note: the versions of these commands in the AWS docs are missing a space between the NFS drive and the mount point, and include a trailing ‘:’ which is also not needed; the examples above are corrected.

For Ubuntu, if you haven’t installed nfs-common, you’ll need to do that first with

sudo apt-get install nfs-common

… otherwise you’ll get this error when attempting to mount:

mount: wrong fs type, bad option, bad superblock on ...

For Ubuntu, here’s my working mount statement (after installing nfs-common):

sudo mount -t nfs -o nolock vm-ip-address:/s3bucket-name /media/s3gw

… where /media/s3gw is my mount point (created earlier).
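
If you want the share mounted automatically at boot, an /etc/fstab entry along these lines should work (a sketch; the IP, bucket name, and mount point are the same placeholders as above):

# NFS v3 mount of the Storage Gateway share
vm-ip-address:/s3bucket-name  /media/s3gw  nfs  nolock  0  0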

To test, I created a file, copied it to the mount dir, and then took a look at my bucket contents via the Console:

My file is already there, everything is working!
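
If you’d rather check from the command line, something like this works too (a sketch, assuming the aws CLI is installed and configured; the file and bucket names are placeholders):

echo "hello storage gateway" > /media/s3gw/test.txt
aws s3 ls s3://s3bucket-name/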


Building and deploying a Monero crypto currency miner in a Docker container … running on a Kubernetes cluster

Updated: 1/30/18: Thanks to Max for the comment asking how your wallet id is passed to the miner – the Kubernetes deploy yml file example was cut off at the end and missing the args. Updated the example to show the correct args passed, including your wallet address.

Disclaimer: I don’t claim to be an expert in crypto currency and/or mining, my interest is purely a curious interest in the technology. Please don’t interpret anything here as an endorsement or a recommendation. Is it profitable to mine any currency with a spare PC? Probably not. Are some currencies profitable to mine? Possibly, with some investment in appropriate hardware. Please do your own research before you make your own decisions.

Knowing that some currencies like Monero can be mined with CPU based mining scripts alone, I wondered what it would look like to package a miner as a Docker image, and then run it at scale on a Kubernetes cluster. As you do, right?

First, I followed a Monero getting started guide to pull the source and build a suggested miner, then captured the steps to build the miner as a Dockerfile like this:

FROM ubuntu:17.10

#build steps from https://www.monero.how/tutorial-how-to-mine-monero
RUN apt-get update && apt-get install -y git libcurl4-openssl-dev \
 build-essential libjansson-dev autotools-dev automake
RUN git clone https://github.com/hyc/cpuminer-multi
RUN cd /cpuminer-multi && ./autogen.sh && ./configure && make
WORKDIR /cpuminer-multi
ENTRYPOINT ["./minerd"]

This Dockerfile contains the steps you’d follow to pull the source and build locally, but written to build a Docker image.

Next, build and tag the image with the address of your local Docker registry, ready for deploying to your Kubernetes cluster:

Build the image:

docker build -t monero-cpuminer .

Tag and push the image (192.168.1.80:5000 here is my local Docker registry):

docker tag monero-cpuminer 192.168.1.80:5000/monero-cpuminer
docker push 192.168.1.80:5000/monero-cpuminer
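
At this point you can also sanity-check the image locally with docker run; anything after the image name is passed straight to minerd via the ENTRYPOINT (the pool URL matches the one used in the deployment below, and the wallet id is a placeholder):

docker run --rm monero-cpuminer -o stratum+tcp://monerohash.com:3333 -u your-wallet-id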

Before we start the deployment to Kubernetes, let’s check that kubectl on my dev laptop can reach the Kubernetes cluster on my rack server:

kubectl get nodes --kubeconfig ~/kubernetes/admin.conf
NAME                  STATUS    ROLES     AGE       VERSION
unknown000c2960f639   Ready     master    50d       v1.8.1
unknown000c297262c7   Ready     <none>    50d       v1.8.1
unknown000c29ab1af7   Ready     <none>    50d       v1.8.1

Nodes are up and ready to deploy.

Following the example deployment .yml file here, this is my Kubernetes deployment file:

apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: monero-cpuminer-deployment
  labels:
    app: monero-cpuminer-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: monero-cpuminer-deployment
  template:
    metadata:
      labels:
        app: monero-cpuminer-deployment
    spec:
      containers:
      - name: monero-cpuminer-deployment
        image: 192.168.1.80:5000/monero-cpuminer
        args: [ "-o", "stratum+tcp://monerohash.com:3333", "-u", "your-wallet-id" ]

The args passed to the container are:

args: [ "-o", "stratum+tcp://monerohash.com:3333", "-u", "your-wallet-id" ]

I’m using the monerohash.com mining pool; you can check out their settings here.

Now let’s deploy with:

kubectl apply -f cpuminer-deployment.yml --kubeconfig ~/kubernetes/admin.conf

Listing the pods, we can now see the two we requested starting up:

kubectl get pods --kubeconfig ~/kubernetes/admin.conf 

And we can check the status and other info about the deployment config with:

kubectl describe deployments monero-cpuminer-deployment --kubeconfig ~/kubernetes/admin2.conf 

This shows my required replicas available:

Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable

Now let’s scale it up to 4 replicas:

kubectl scale --replicas=4 deployment/monero-cpuminer-deployment --kubeconfig ~/kubernetes/admin2.conf 

deployment "monero-cpuminer-deployment" scaled

Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
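
If you don’t want the miners consuming every available core, you could cap each pod by adding resource limits to the container spec in the deployment file (a sketch; the cpu values are arbitrary):

      containers:
      - name: monero-cpuminer-deployment
        image: 192.168.1.80:5000/monero-cpuminer
        args: [ "-o", "stratum+tcp://monerohash.com:3333", "-u", "your-wallet-id" ]
        resources:
          requests:
            cpu: "500m"  # scheduler reserves half a core per pod
          limits:
            cpu: "1"     # throttle each pod at one full core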

Scaling up from 2 pods to 4, then 8, we’re at about 75% of available CPU on my 2x Xeon HP DL380 rack server:

Fan speeds have ramped up from idle, but still comfortably running:

Hash rate so far:

So is it possible to run a Monero miner in Docker containers? Sure! Can you deploy it to a Kubernetes cluster and scale it up? Sure! Is it worthwhile? Probably not, and probably not profitable, unless you have some spare low-power hardware handy, or something custom built to provide a cost-effective hash rate given your power consumption and local utility rates. Still, this was an interesting exercise in building a Monero miner from source, packaging it as a Docker image, and deploying it to Kubernetes.

Leave me a comment if you’ve done something similar, and let me know what hash rates you got.

Kubernetes node join command / token expired – generating a new token/hash for node join

After running ‘kubeadm init’ on the master node, it shows you the node join command, which includes a token and a hash. It appears these values only stay valid for 24 hours, so if you try to use them after that, the ‘kubeadm join’ command will fail with something like:

[discovery] Failed to connect to API Server "192.168.1.67:6443": there is no JWS signed token in the cluster-info ConfigMap. This token id "78a69b" is invalid for this cluster, can't connect

To create a new join string, from the master node run:

kubeadm token create --print-join-command
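
This prints a complete join command ready to copy and paste; the output will look something like the following (the token and hash values are placeholders, and the API server address is the one from my cluster above):

kubeadm join 192.168.1.67:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>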

Running the new join command string on your new nodes will now allow them to join the cluster.

This is described in the docs here.