Our first transcript is ready! It is a walk-through of `kubeadm init` that builds a cluster up to, but not including, a network add-on.
The transcript for this blog post can be found here: 00-install.md.
kubeadm builds the Kubernetes cluster by running one host service (kubelet) and six pods on the master node. This is the desired end state.
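That end state can be sketched as a pod listing. The output below is illustrative, not captured from this cluster: the static-pod names carry the node's hostname (assumed to be `master` here) and the hashes are placeholders.

```shell
# End-state check on the master (assumes kubectl is configured there):
#   systemctl status kubelet                  # the one host service
#   kubectl get pods --namespace=kube-system  # the six pods
#
# Illustrative listing; real names carry the node hostname ("master"
# here) and generated hashes:
expected_pods='etcd-master                      1/1   Running
kube-apiserver-master            1/1   Running
kube-controller-manager-master   1/1   Running
kube-scheduler-master            1/1   Running
kube-proxy-abcde                 1/1   Running
kube-dns-1234567890-abcde        0/3   Pending'

# Tally the states, as you would by eye on the kubectl output:
running=$(printf '%s\n' "$expected_pods" | grep -c 'Running')
pending=$(printf '%s\n' "$expected_pods" | grep -c 'Pending')
echo "$running Running, $pending Pending"
```

Five "Running" plus one "Pending" is the healthy pre-network state; anything else means the init did not complete.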
Summary of this transcript:
- Installation of kubeadm and kubernetes host tools
- Installation of a nuke-n-pave script, and an introduction to `kubeadm reset`. These are both ways of tearing down the installation so that we can start again, or recover from a broken installation.
- Creation of a 1-master 0-worker cluster
- kubeadm is a fairly mature tool; getting to this stage is straightforward. Low frustration value!
- The correct state is one host service (kubelet), five “Running” pods, and one “Pending” pod (kube-dns). The kube-dns pod is “Pending” because no network add-on is installed yet.
- A list of all the docker images that kubeadm has pulled
Note: whether we see five or six “Running” pods depends on how kubeadm was invoked and on the intended SDN (network add-on). In a future transcript, when we try flannel (an SDN), we will see six “Running” pods, and kube-dns will have a pod IP address.
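As a sketch of why the invocation matters: flannel's example manifest defaults to the 10.244.0.0/16 pod network, so the matching `kubeadm init` call must declare that range up front via the `--pod-network-cidr` flag. The command is printed here rather than executed, since running it for real requires root on the master.

```shell
# flannel's example manifest defaults to the 10.244.0.0/16 pod network,
# so the matching init call declares that range up front. Printed here
# rather than executed (the real command needs root on the master):
cidr='10.244.0.0/16'
cmd="kubeadm init --pod-network-cidr=${cidr}"
echo "$cmd"
```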
Even at this early stage it is instructive to compare what kubeadm has done with a hand-crafted production cluster.
- etcd is to Kubernetes what MySQL is to the LAMP stack. Kubernetes requires a production-grade etcd cluster; kubeadm solves this problem by creating a single-node etcd cluster running on the master, accessible at 127.0.0.1:2379. This is equivalent to spinning up a local instance of MySQL or PostgreSQL on your development workstation when developing web applications.
- kubelet runs as a host service: this is standard in Kubernetes
- kube-proxy can run as a host service; kubeadm, however, sets it up as a pod
- The API Server, Controller Manager, and Scheduler are meant to run as pods in Kubernetes, and this is what kubeadm does
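A quick way to see that single-node etcd from the master is its HTTP status endpoints (`/version` and `/health` are part of etcd's HTTP API). The JSON below is the healthy-node response shape, shown as a sample rather than captured live.

```shell
# From the master, etcd's HTTP API answers on the loopback endpoint:
#   curl http://127.0.0.1:2379/health
# A healthy single-node cluster returns the JSON below (sample shown,
# not captured live):
health='{"health": "true"}'
ok=$(printf '%s' "$health" | grep -c '"health": "true"')
[ "$ok" -eq 1 ] && echo "etcd healthy at 127.0.0.1:2379"
```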
- Tear down the cluster using `kubeadm reset` and rebuild it.
- Tear down the cluster using the `/opt/install/manage.sh` script and rebuild it.
- Custom Cluster from Scratch: describes the parts of Kubernetes you must assemble yourself for hand-built clusters
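The `kubeadm reset` exercise boils down to a tear-down/rebuild cycle. A minimal sketch that prints the commands rather than running them (executing them for real requires root on the master):

```shell
# Tear-down/rebuild cycle for the first exercise. Printed, not executed:
# `kubeadm reset` stops the static pods and clears state under
# /etc/kubernetes (and the local etcd data), after which `kubeadm init`
# can build the cluster again from scratch.
rebuild_cycle() {
  echo "kubeadm reset"
  echo "kubeadm init"
}
steps=$(rebuild_cycle)
printf '%s\n' "$steps"
```

After the rebuild, the cluster should return to the same five-Running, one-Pending pod state described in the summary.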