Previously we completed the installation of an SDN add-on, Weave, in our Kubernetes cluster. At this stage we have only one application pod running on the pod network, viz., kube-dns. In this transcript, 02-sock-shop.md, we will add many more application pods by trying out the sock-shop demo.
The sock-shop demo is a mock-up of a web application built from microservices. It is a distributed application comprising 13 pods and running the whole gamut of components: web frontend, persistence (MongoDB), and message queue. Since the data persistence layer lives in ephemeral pods, it is not truly representative of real-world stateful applications.
Quick recap — this is our start state with our pods and network running:
Go ahead and apply the spec file for the sock-shop demo:
```
[root@kube0 install]# sudo -u centos kubectl create namespace sock-shop
namespace "sock-shop" created
[root@kube0 install]# sudo -u centos kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
deployment "carts-db" created
service "carts-db" created
deployment "carts" created
...
```
This will take some time as it needs to download the docker images and spin up the pods. Here is the installing phase:
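You can watch the pods come up with `kubectl get pods -n sock-shop -o wide`. As a sketch of the wait, here is a small shell loop; `pods_running` is a hypothetical stub standing in for the real kubectl check, since this transcript cannot run against your cluster:

```shell
# Sketch of a wait loop for the installing phase. `pods_running` is a
# hypothetical stub standing in for the real check, which would be:
#   kubectl get pods -n sock-shop --no-headers | grep -c ' Running'
pods_running() { echo 13; }   # stub: pretend all 13 pods are Running

expected=13
until [ "$(pods_running)" -ge "$expected" ]; do
  sleep 5
done
echo "all $expected pods Running"
```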
This is the desired end state:
Notice a few things:
- The 13 pods have been scheduled fairly evenly across all the workers. By default the master node does not run any user pods.
- All the pods have IP addresses in the 10.32.0.0/12 range assigned by the SDN. All of these IP addresses should be ping-able.
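As a quick sanity check that a pod address really falls inside Weave's default 10.32.0.0/12 range, here is a small shell sketch. The pod IP 10.44.0.7 is a made-up example; substitute one reported by `kubectl get pods -n sock-shop -o wide`:

```shell
# Sanity-check that a pod IP falls inside the 10.32.0.0/12 range.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

ip=$(ip_to_int 10.44.0.7)    # hypothetical pod IP
net=$(ip_to_int 10.32.0.0)
mask=$(( (0xFFFFFFFF << (32 - 12)) & 0xFFFFFFFF ))   # /12 netmask

if [ $(( ip & mask )) -eq $(( net & mask )) ]; then
  echo "in range"            # prints: in range
fi
```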
To view the demo in a browser, we need the port that is exposed to the external world. All the nodes function as load balancers, so with port PPPP the web application will be accessible at 192.168.125.XX:PPPP, where XX represents any of the nodes (master and workers).
```
[root@kube0 install]# sudo -u centos kubectl get svc -n sock-shop
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
carts          10.106.135.200                 80/TCP         8m
carts-db       10.105.62.154                  27017/TCP      8m
catalogue      10.106.108.10                  80/TCP         8m
catalogue-db   10.96.93.124                   3306/TCP       8m
front-end      10.101.181.160                 80:30001/TCP   8m
orders         10.105.111.140                 80/TCP         8m
orders-db      10.108.211.174                 27017/TCP      8m
payment        10.96.196.193                  80/TCP         8m
queue-master   10.110.42.165                  80/TCP         8m
rabbitmq       10.99.189.36                   5672/TCP       8m
shipping       10.107.227.218                 80/TCP         8m
user           10.111.252.48                  80/TCP         8m
user-db        10.101.200.103                 27017/TCP      8m
```
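In the PORT(S) column, a value like 80:30001/TCP encodes clusterPort:nodePort/protocol. A small shell sketch of pulling the node port out of that string (the value is copied from the front-end row above):

```shell
# PORT(S) value for the front-end service, copied from the kubectl output.
ports="80:30001/TCP"

# clusterPort:nodePort/protocol -> keep the part after ':' and before '/'.
node_port=${ports#*:}        # "30001/TCP"
node_port=${node_port%%/*}   # "30001"
echo "$node_port"            # prints 30001
```

On a live cluster the same value can be read directly with `kubectl -n sock-shop get svc front-end -o jsonpath='{.spec.ports[0].nodePort}'`.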
The external port is obtained from `80:30001` and is port 30001. If you inspect the spec file that created this application from here, you will see this snippet:
```
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8079
    nodePort: 30001
  selector:
    name: front-end
```
This declares the port, 30001, on which the outside world should access the application on the nodes 192.168.125.100-103. Picking 192.168.125.103:30001 to access from the browser: voila! Socks for purchase! You should test this web page, 192.168.125.X:30001, on all your nodes.
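Testing all the nodes can be scripted. The loop below only prints the URL for each node; the commented-out curl line is a hypothetical smoke test to uncomment on a machine that can reach the cluster:

```shell
# Every node serves the NodePort, because kube-proxy listens on 30001
# cluster-wide and forwards traffic to the front-end pods.
for host in 192.168.125.100 192.168.125.101 192.168.125.102 192.168.125.103; do
  echo "http://$host:30001/"
  # curl -s -o /dev/null -w "%{http_code} $host\n" "http://$host:30001/"
done
```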
- We have deployed a non-trivial application comprising 13 pods
- We have observed the scheduler and SDN at work
- Applications can declare `nodePort`s, and all the nodes function as load balancers
- In a production environment, it is expected that there will be external load balancers directing traffic to the nodes
- Microservices Demo by Weaveworks