We install a network add-on to kubernetes, and join the three workers. Finally, we get a working multi-node cluster.
The transcript to follow along for this post can be found here: 01-network.md.
Many of the network add-ons, such as Weave or Flannel, are SDNs in their own right. They can be installed separately from kubernetes and have their own use cases as external SDNs. This is useful to remember, because documentation on these SDNs usually assumes that they run as host services, with the vendor's tools installed. That tooling is usually not available, or not relevant, when the SDN is installed as an add-on. In the add-on case, the SDN is “hidden” and managed behind the scenes by kubernetes and the plugin.
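For example, with Weave deployed as an add-on there is no weave CLI on the host; the SDN is inspected through kubectl instead. A minimal sketch (the name=weave-net label and the weave container name come from the weave-kube manifest and may differ between versions; the pod name is a placeholder):

# one weave-net pod per node is created by the DaemonSet
kubectl get pods -n kube-system -l name=weave-net -o wide

# read the SDN's logs via kubectl rather than a host-level vendor tool
kubectl logs -n kube-system <weave-net-pod-name> -c weave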
Remember the join token? When we bootstrapped the cluster, kubeadm spat out a line:
kubeadm join --token 878d64.53d8c7dafd317b9e 192.168.125.100:6443
We need this to join the worker nodes later. If you have forgotten the token, it can be recovered with:
[root@kube0 install]# kubeadm token list
TOKEN                     TTL   EXPIRES   USAGES                   DESCRIPTION
878d64.53d8c7dafd317b9e                   authentication,signing   The default bootstrap token generated by 'kubeadm init'.
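If the token has expired or been deleted, a new one can be generated instead of recovering the old one. A sketch, assuming your kubeadm version has the token create subcommand:

# mint a fresh bootstrap token to use with 'kubeadm join'
kubeadm token create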
Now follow the transcript to install the weave add-on on the master node:
sudo -u centos kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
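Before joining the workers, it is worth checking that the add-on has come up on the master. A quick check (the numbers reported will depend on how many nodes have joined so far):

# the weave-net DaemonSet should report one pod per node -- only the master so far
sudo -u centos kubectl get daemonset weave-net -n kube-system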
Now we can join the nodes:
[root@kube0 install]# pdsh -g nodes kubeadm join --token 878d64.53d8c7dafd317b9e 192.168.125.100:6443
kube2: [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
kube2: [preflight] Running pre-flight checks
kube1: [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
kube1: [preflight] Running pre-flight checks
kube3: [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
kube3: [preflight] Running pre-flight checks
kube2: [discovery] Trying to connect to API Server "192.168.125.100:6443"
kube2: [discovery] Created cluster-info discovery client, requesting info from "https://192.168.125.100:6443"
kube3: [discovery] Trying to connect to API Server "192.168.125.100:6443"
...
kube3: Node join complete:
kube3: * Certificate signing request sent to master and response
kube3:   received.
kube3: * Kubelet informed of new secure connection details.
kube3:
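Back on the master, the workers should now be listed; they switch from NotReady to Ready once the weave-net pod is running on each of them:

# all four nodes should eventually report Ready
sudo -u centos kubectl get nodes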
Bug #335: if the worker nodes fail to join the cluster with the error
Failed to connect to API Server "192.168.125.100:6443": there is no JWS signed token in the cluster-info ConfigMap.
you have encountered a bug, and a workaround is presented in the transcript.
Bug #347: if you follow this transcript and happen to get v1.7.1 during installation, you will hit this bug. Error message:
kube2: [preflight] WARNING: hostname "" could not be reached
The solution is to run kubeadm with the option:
This is our first instance of a pod running on the pod network 10.32.0.0/12, which is constructed by the weave SDN add-on. The kube-dns pod is part of the kube-dns service, which is available to all application pods.
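The pod addresses can be seen with -o wide: kube-dns should have a 10.32.0.0/12 address, while the host-networked kube-proxy and weave-net pods show their node's IP:

# the IP column shows which pods are on the pod network and which use the host network
sudo -u centos kubectl get pods -n kube-system -o wide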
[root@kube0 centos]# sudo -u centos kubectl get svc -n kube-system
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.96.0.10                 53/UDP,53/TCP   15m
In this virgin cluster, we have one service running (namely kube-dns) and DNS resolution should work:
[root@kube0 centos]# dig @10.96.0.10 kube-dns.kube-system.svc.cluster.local

;; QUESTION SECTION:
;kube-dns.kube-system.svc.cluster.local. IN A

;; ANSWER SECTION:
kube-dns.kube-system.svc.cluster.local. 30 IN A 10.96.0.10

## the kube-dns service provides named ports; so SRV resolution is also possible
[root@kube0 centos]# dig @10.96.0.10 -t SRV _dns._udp.kube-dns.kube-system.svc.cluster.local

;; QUESTION SECTION:
;_dns._udp.kube-dns.kube-system.svc.cluster.local. IN SRV

;; ANSWER SECTION:
_dns._udp.kube-dns.kube-system.svc.cluster.local. 30 IN SRV 10 100 53 kube-dns.kube-system.svc.cluster.local.

;; ADDITIONAL SECTION:
kube-dns.kube-system.svc.cluster.local. 30 IN A 10.96.0.10

[root@kube0 centos]# dig @10.96.0.10 -t SRV _dns-tcp._tcp.kube-dns.kube-system.svc.cluster.local

;; QUESTION SECTION:
;_dns-tcp._tcp.kube-dns.kube-system.svc.cluster.local. IN SRV

;; ANSWER SECTION:
_dns-tcp._tcp.kube-dns.kube-system.svc.cluster.local. 30 IN SRV 10 100 53 kube-dns.kube-system.svc.cluster.local.

;; ADDITIONAL SECTION:
kube-dns.kube-system.svc.cluster.local. 30 IN A 10.96.0.10
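The queries above talk to the service IP directly from the master; pods reach the same records through their /etc/resolv.conf, which a throwaway test pod can confirm. A sketch using busybox's nslookup (any image with DNS tools would do; the pod name dnstest is arbitrary):

# resolve the kube-dns service by its short name from inside a pod
sudo -u centos kubectl run dnstest -it --rm --restart=Never --image=busybox -- nslookup kube-dns.kube-system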
- We have applied the weave network add-on. This allows the pods to receive IP addresses. The first pod with a pod network address is kube-dns.
- Each worker node runs two pods: kube-proxy and weave-net.
- Weave creates a flat L2 network, 10.32.0.0/12, across the nodes: each node has a Linux bridge "weave" for its pods. The weave bridge is connected by a veth pair to an Open vSwitch datapath, unimaginatively called "datapath". This datapath also has a vxlan interface, "vxlan-6784", enslaved; the vxlan interface acts as the VTEP for the overlay network. Weave runs its own soft switch in the weave-net pod; it does not use ovs-vswitchd. This is not an Open vSwitch switch, so don't expect OVS commands to work. Only the lowest-level tools like ovs-dpctl will work. Here's a simple schematic: