Running GitLab in a Docker container on a different port

GitLab by default runs on port 80. GitLab in a Docker container runs the same as when natively installed, but to change the port you need to change the config and change the exposed ports on the container.

First, per the steps here, start the container with:

docker run --detach \
--hostname gitlab.example.com \
--publish host-https-port:container-https-port \
--publish host-http-port:container-http-port \
--publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest

By default host-http-port and container-http-port are 80, and host-https-port and container-https-port are 443. Change these to whatever port you want to run on, but keep each pair the same (e.g. host-http-port and container-http-port both set to 8090).
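For example, to serve HTTP on 8090 and HTTPS on 8443 (ports picked here purely for illustration), the publish flags would become:

--publish 8443:8443 \
--publish 8090:8090 \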

When the container is up, open a shell in the running container:

docker exec -it containerid sh
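Since the container was started with --name gitlab above, you can use that name in place of the container id; docker ps will show the id if you prefer to use it:

docker ps
docker exec -it gitlab sh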

and then edit the config file:

vi /etc/gitlab/gitlab.rb

and add this line (at the top is ok):

external_url 'http://localhost:your-new-port-here'

setting your-new-port-here to the new port.
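For example, using the illustrative port 8090 from earlier in this post:

external_url 'http://localhost:8090'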

Reconfigure and restart the server (still inside the container shell):

gitlab-ctl reconfigure
gitlab-ctl restart
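As a quick sanity check (again assuming the illustrative port 8090), from the Docker host you should now get a GitLab response on the new port:

curl -I http://localhost:8090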

Done!

Updating an rke-created Kubernetes cluster from 1.11.3 to 1.11.5

There was a vulnerability found today in some older Kubernetes versions, and patched versions are already available. If you have 1.11.3 installed via rke, you can update to 1.11.5 by editing your cluster.yml and replacing the kubernetes image:

kubernetes: rancher/hyperkube:v1.11.3-rancher1

with

kubernetes: rancher/hyperkube:v1.11.5-rancher1

And then run ‘rke up’ again.
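For context, in the cluster.yml that rke generates this image is set in the system_images section (check your own file, as the layout can vary between rke versions), so the edited section looks roughly like:

system_images:
  kubernetes: rancher/hyperkube:v1.11.5-rancher1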

This is from this GitHub ticket.

Rancher RKE Kubernetes install notes

Rancher’s RKE is a Kubernetes cluster installer – see more here.

Pre-reqs:

  • Docker must be running on the client machine where you are going to run the rke setup tool
  • The docs are not obvious about this, but the rke tool is run on a client machine to provision your cluster; it is not run on any of the target cluster nodes

These notes are for Ubuntu 16.04 server.

Remove prior Docker installs:

sudo apt-get remove docker docker-engine docker.io

Create docker group and add user to docker group:

sudo groupadd docker
sudo usermod -aG docker <user_name>

Install Docker per the Docker CE install steps here, or use the Rancher-provided install script here.
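Once Docker is installed, log out and back in so the docker group membership takes effect, then confirm the non-root user can talk to the daemon:

docker info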

Supported Docker versions for RKE (as of Dec 2018) are: 1.11.x, 1.12.x, 1.13.x, and 17.03.x
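To check which version you ended up with:

docker version --format '{{.Server.Version}}'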

Configure the Docker daemon to listen for incoming requests on port 2376, as per the steps here.
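One common way to do this on Ubuntu (a sketch only; follow the linked steps for the full setup, including TLS certs, since 2376 is conventionally the TLS port) is a systemd drop-in that adds a tcp listener to dockerd:

# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376

# then reload systemd and restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker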

Generate the default/minimal cluster.yml with ‘rke config’ (see here), and then install/set up the cluster with ‘rke up’
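The nodes section of a minimal cluster.yml looks something like this (addresses and user are placeholders for your own machines):

nodes:
  - address: 192.168.1.101
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 192.168.1.102
    user: ubuntu
    role: [worker]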

If you didn’t change the name of the cluster.yml file, after the install is complete you’ll have a kube_config_cluster.yml file in the same dir, which you can use with kubectl to interact with your cluster, or merge into your existing ~/.kube/config file
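For example, to point kubectl at the new cluster without touching ~/.kube/config:

kubectl --kubeconfig kube_config_cluster.yml get nodes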

Updating Kubernetes master node and worker node config if an IP address changes

I have a test Kubernetes cluster running with a CentOS 7 master node and 4 CentOS 7 worker nodes, under VMware ESXi. The IP addresses of the VMs are assigned by DHCP, and since I hadn’t booted these VMs for a while, they all got new IP addresses when I recently started them up. As a result the cluster would not start, and all the .kube/config files were referring to incorrect IP addresses. Note to self: this is a good reason to use DNS names for the nodes in your cluster instead of IP addresses, especially IP addresses that can change.

Anyway, to restore my cluster back to a working state, I reinitialized the master node and then joined the workers to the new master.

First on the master:

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

#take a copy of the kubeadm join command to run on the workers

#copy kube config for local kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#apply networking overlay
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

#for each of the worker nodes, scp the config file to each node for local kubectl use
scp /etc/kubernetes/admin.conf kev@192.168.1.86:~/.kube/config
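If you didn’t capture the join command from the kubeadm init output, recent kubeadm versions can print it again for you on the master:

kubeadm token create --print-join-command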

On each of the worker nodes:

sudo kubeadm reset

#then run the kubeadm join command shown from the master when you ran kubeadm init
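The join command has this general shape (all values here are placeholders; use the exact command from your own kubeadm init output):

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>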