In the next few posts we will set up a multi-host kubernetes toy lab following the kubeadm tutorial. This post covers the lab setup and basic smoke tests.
The kubeadm tutorial leaves the network as an exercise for the reader, so we will fill that in using weave. There is indeed an embarras de richesses of networking choices for kubernetes, romana and weave among them, and a multitude of others. The network was chosen at random: the dice roll first landed on romana, which did not actually work despite quite a few hair-pulling moments, and then on weave, which was the first one to work out-of-the-box.
The full commands and console output can be viewed in the github repo kubeadm-transcripts. It is a mixture of ‘Transcript’ sections, which you can cut-and-paste directly into your terminal, and ‘Output’ and ‘Verify’ sections. The latter mix in my comments, shell output, and shell prompts, so they are less cut-and-paste friendly.
Prerequisites: we will use four VMs, with their resources and networking requirements stated below. I will be using libvirt/KVM as my virtualisation host, but you can easily adjust for VirtualBox or any other VM system; replace the VM host commands with your own specifics. Within the VMs I will use CentOS, so adjust package-management commands as necessary.
- 4 x CentOS 7 VMs with the latest CentOS 7.3 updates, 2 cores, 2GB RAM each. A starter image can be found here. A minimal install is fine. Enable the EPEL repository.
- single vdisk, 100GB: 40GB /dev/sda1 for the OS, 60GB /dev/sda2 for docker’s /var/lib/docker and devicemapper storage
- single network interface, hostnames: kube0-3, IP addresses: 192.168.125.100-103/24, hostnames resolvable on each VM
- the host provides a bridged VM network 192.168.125.0/24 with IP address 192.168.125.1 and functions as DNS and NAT gateway
- kube0 will be the kubernetes master node; it has password-less root ssh access to kube0-3. There is a non-root user, centos, with password-less ssh and sudo privileges on all four nodes; centos functions as the kubernetes superadmin. Except for VM host commands, we will usually be executing on kube0.
- productivity tooling: to make it easy to execute commands on all four nodes from kube0, we will install pdsh and its genders module on each host:
[root@kube0 install]# yum -y install pdsh pdsh-mod-genders
## repeat on the three remaining nodes
[root@kube0 install]# rpm -qa | grep pdsh
pdsh-2.31-1.el7.x86_64
pdsh-mod-genders-2.31-1.el7.x86_64
pdsh-rcmd-ssh-2.31-1.el7.x86_64
This /etc/genders file placed on kube0:
## /etc/genders
kube[0-3] kubes
kube[1-3] nodes
allows you to execute commands on all the nodes from kube0:
## smoke test all nodes
[root@kube0 install]# pdsh -g kubes uptime
kube0: 15:50:45 up 1 day, 1:00, 1 user, load average: 0.56, 0.25, 0.22
kube2: 15:50:45 up 1 day, 59 min, 0 users, load average: 0.08, 0.04, 0.09
kube3: 15:50:45 up 1 day, 59 min, 0 users, load average: 0.03, 0.03, 0.08
kube1: 15:50:45 up 1 day, 59 min, 0 users, load average: 0.10, 0.11, 0.15

## smoke test workers
[root@kube0 install]# pdsh -g nodes uptime
kube3: 15:51:53 up 1 day, 1:00, 0 users, load average: 0.01, 0.02, 0.08
kube1: 15:51:53 up 1 day, 1:00, 0 users, load average: 0.03, 0.09, 0.13
kube2: 15:51:53 up 1 day, 1:00, 0 users, load average: 0.03, 0.03, 0.08
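The bracket ranges in the /etc/genders file above follow pdsh’s hostlist syntax. As a toy illustration of what that expansion does, here is a minimal shell sketch (a hypothetical helper, not part of pdsh, which handles far more: comma lists, zero-padding, multiple ranges):

```shell
#!/bin/sh
# expand_hostlist: expand a single pdsh-style range like "kube[0-3]"
# into individual hostnames, one per line. Toy sketch only.
expand_hostlist() {
  prefix=${1%%\[*}                     # text before the bracket, e.g. "kube"
  range=${1##*\[}; range=${range%\]}   # the bracketed range, e.g. "0-3"
  lo=${range%-*}; hi=${range#*-}
  seq -f "${prefix}%g" "$lo" "$hi"
}

expand_hostlist 'kube[0-3]'
# prints: kube0 kube1 kube2 kube3, one per line
```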
Indeed, pdsh is the original puppet, chef, ansible…
Smoke test your docker install on all four nodes (note: we are using the docker package from the CentOS repositories, which comes with several other registries configured besides the default docker.io):
## repeat this smoke test on the worker nodes
[root@kube0 install]# rpm -q docker
docker-1.12.6-28.git1398f24.el7.centos.x86_64
[root@kube0 install]# docker run --rm -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository registry.fedoraproject.org/busybox ...
Trying to pull repository registry.access.redhat.com/busybox ...
Trying to pull repository docker.io/library/busybox ...
sha256:be3c11fdba7cfe299214e46edc642e09514dbb9bbefcd0d3836c05a1e0cd0642: Pulling from docker.io/library/busybox
27144aa8f1b9: Pull complete
Digest: sha256:be3c11fdba7cfe299214e46edc642e09514dbb9bbefcd0d3836c05a1e0cd0642
Status: Downloaded newer image for docker.io/busybox:latest
/ #
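On CentOS, one way to dedicate the 60GB /dev/sda2 partition from the disk layout above to docker’s devicemapper backend is docker-storage-setup. A sketch of the config, assuming this lab’s partitioning (the VG name is my choice; adjust device and names to your setup):

```shell
## /etc/sysconfig/docker-storage-setup
## docker-storage-setup will pvcreate /dev/sda2, build the docker-vg
## volume group, and carve out devicemapper thin pools from it.
## Run docker-storage-setup once, then start docker.
DEVS=/dev/sda2
VG=docker-vg
```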
In each of the transcripts we print out the docker images, with tags, that end up on the nodes. It can be a time saver to pre-pull these images. It also saves a lot of anxiety: when a command appears to hang, it is not obvious whether something has gone wrong or whether it is simply in the middle of a long-running image pull.
For example: we will find that several gcr.io/google_containers/* images end up on the nodes. We might create a template VM, pre-pull these images, and then clone it for kube0-3.
## avoid "Are we there yet? Are we there yet?" anxiety by
## pre-pulling some must-have images
pdsh -g kubes docker pull gcr.io/google_containers/pause-amd64:3.0
pdsh -g kubes docker pull gcr.io/google_containers/kube-proxy-amd64:v1.7.0
## only needed on the master kube0
docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.7.0
docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.7.0
docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.7.0
docker pull gcr.io/google_containers/etcd-amd64:3.0.17
## these two images on the workers, and six images on the master, are the
## minimum must-have images. Now is a good time to snapshot your lab VMs.
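The repeated pull commands above can be folded into a small loop over an image list. A sketch (the function name and DRY_RUN switch are my own; the image tags are the ones from this transcript):

```shell
#!/bin/sh
# pull_images GROUP IMAGE...: pull each image on every host in the
# given genders group via pdsh. Set DRY_RUN=1 to print the commands
# instead of running them -- handy before the cluster exists.
pull_images() {
  group=$1; shift
  for img in "$@"; do
    if [ "${DRY_RUN:-0}" = 1 ]; then
      echo "pdsh -g $group docker pull $img"
    else
      pdsh -g "$group" docker pull "$img"
    fi
  done
}

# dry-run example for the worker must-haves
DRY_RUN=1 pull_images kubes \
  gcr.io/google_containers/pause-amd64:3.0
# prints: pdsh -g kubes docker pull gcr.io/google_containers/pause-amd64:3.0
```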
At this point your kubeadm lab is ready, with the productivity tool pdsh and single-host docker both functioning, plus a pre-pulled image cache to reduce the ‘on tenterhooks’ feeling while kubeadm does its magic. Now on to kubeadm itself.