How to bootstrap a multi-node Kubernetes cluster on Azure using kubeadm

Background

In an earlier post, I shared my experience preparing for and passing the Certified Kubernetes Administrator (CKA) exam. Currently, I am preparing for the Certified Kubernetes Application Developer (CKAD). To prepare for this certification, I need a Kubernetes cluster where I can practice. I decided to create a 3-node Kubernetes cluster on Microsoft Azure using kubeadm. This post is a step-by-step walkthrough of creating a Kubernetes cluster with version 1.22 on Azure.

Bootstrap k8s cluster on Azure using kubeadm

As of this writing, the CKAD certification is based on version 1.22 of Kubernetes, while the latest stable version is 1.23. So I need a way to provision the cluster with the earlier version rather than the latest one. That is why I decided to provision Ubuntu VMs on Microsoft Azure and bootstrap the Kubernetes cluster with version 1.22.

Here are the high level steps required to set up a brand new cluster on Azure:

  • Provision 3 Ubuntu VMs
  • Set up Docker, kubeadm, kubectl, and kubelet on each node
  • Install control plane components on the master node
  • Set up the pod network
  • Join worker nodes to the cluster

Provision Ubuntu VMs on Azure

There are multiple ways to provision VMs on Azure: the portal, the Azure CLI, ARM templates, etc. I prefer to use Azure CLI commands to provision the VMs. Over the years I have used a fair bit of PowerShell to provision Azure resources, so in this case I created a small PowerShell script that provisions the 3 Ubuntu VMs on Azure with public IPs. You can find the script in the GitHub repository, along with an accompanying markdown file that walks through the different steps to set up the cluster. The script is parameterized, and the following parameters can be overridden while executing it (a sketch of the underlying CLI calls follows the list):

  • Subscription name 
  • Resource group name
  • Resource group location
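For illustration, here is a rough sketch of the kind of Azure CLI calls such a script wraps. The resource group name, VM name, size, and admin username below are made up for this sketch; the actual script in the repo may differ.

# Illustrative only: names, size, and username are hypothetical
az group create --name kubeadm-rg --location eastus

az vm create \
    --resource-group kubeadm-rg \
    --name master \
    --image UbuntuLTS \
    --size Standard_B2s \
    --admin-username kubeadmin \
    --ssh-key-values ~/.ssh/id_rsa.pub \
    --public-ip-sku Standard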

Set up Docker, kubeadm, kubectl, and kubelet on each node

Once the VMs are provisioned, we need to set up Docker, kubeadm, kubectl, and kubelet on each of the nodes. Using the RSA key provided during VM provisioning, I SSH into each node and set up these prerequisites. Here is an example of how to log in to the master node:

ssh -i ~/.ssh/id_rsa <username>@<master-public-ip>

You can replace the host with the public IP of worker-1 or worker-2 to log in to those VMs respectively.

Install Docker and enable the daemon so that Docker is restarted if the VM is rebooted.

sudo apt install docker.io -y

sudo systemctl enable docker
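
The kubeadm, kubectl, and kubelet packages do not ship in Ubuntu's default repositories, so the Kubernetes apt repository has to be added first. At the time of writing, the official docs prescribed roughly the following:

sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update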

The current versions of kubeadm, kubectl, and kubelet are 1.23, but we need 1.22, so we pin the version in the install command as follows:

sudo apt install kubeadm=1.22.0-00 kubectl=1.22.0-00 kubelet=1.22.0-00 -y
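
To keep a routine apt upgrade from later replacing these pinned packages, the Kubernetes docs also recommend putting them on hold:

sudo apt-mark hold kubeadm kubectl kubelet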

With these prerequisites installed, we are ready to install the Kubernetes cluster.

Install control plane components on the master node

Before we initialize the control plane components using the kubeadm init command, we need to make some modifications to the Docker and kubelet settings to set systemd as the cgroup driver. The cgroup driver has to match between the container runtime and the kubelet. Refer to the Kubernetes documentation for more details.

We override the Docker daemon configuration as shown below:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
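
For the new daemon settings to take effect, reload and restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker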

This needs to be done on all the nodes. On the master node, we also override the configuration for the kubelet. You can find the kubeadm config YAML file in the repo. We pass the file as the override config to the kubeadm init command as follows:

sudo kubeadm init --config kubeadm-config.yaml
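
For reference, a minimal sketch of what such a config file can look like is below; the kubernetesVersion and cgroupDriver fields are the important parts, and the actual file in the repo may differ.

# kubeadm-config.yaml -- a minimal sketch; the file in the repo may differ
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd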

With these overrides, we should have the master node configured.
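
When kubeadm init completes, it prints the commands to copy the admin kubeconfig so that kubectl works for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config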

Set up the pod network

Kubernetes does not come with a built-in pod network by default. We need to set up our own Container Network Interface (CNI) based pod network so that pods can communicate with one another. CoreDNS will not start until this step is completed. We will use Weave Net from Weaveworks.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Make sure that the master node reaches the Ready state after applying the above manifest.
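
You can check the node status with:

kubectl get nodes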

Join worker nodes to the cluster

The steps for installing Docker, kubeadm, kubectl, and kubelet are exactly the same on the worker nodes. The difference lies in the kubelet configuration override: we do not need it on the worker nodes. However, we still need to override the Docker daemon configuration.

The last step is to join the worker nodes to the cluster. When the control plane is ready on the master node, the join command is output along with the steps to copy the kubeconfig file. Run the join command on each worker node and verify that all 3 nodes show up in the Ready state.

sudo kubeadm join 172.0.0.4:6443 --token pyyfr7.j97w0dcdpsajkgjl \
        --discovery-token-ca-cert-hash sha256:e096df8d88421a8e1f1fae731e424deb21a8e055e80b0168f86f3661797fbe88

Note that the values will be different in your environment for the IP, token and the ca cert hash.
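
If you no longer have the original kubeadm init output handy, the join command can be regenerated on the master node:

sudo kubeadm token create --print-join-command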

YouTube video

All the steps mentioned above are demonstrated in the YouTube video, along with additional details like setting up the bash profile and testing the cluster with a test deployment. Watch the video to see it in action.

Conclusion

Kubernetes 1.22 requires some overrides to the Docker and kubelet configuration to bootstrap using kubeadm. The steps demonstrated here can be used on other cloud providers to set up a similar cluster once you have the Ubuntu-based VMs ready. I hope this helps in your journey to learn more about Kubernetes.

Until next time, Code with Passion and Strive for Excellence.