How to install Metrics Server on Kubernetes cluster

Background

This post is about installing the Metrics Server on a Kubernetes cluster. We provisioned a 3-node Kubernetes cluster in the previous post. In a multi-node cluster, workloads get scheduled on different nodes. If we want to find out which pods are consuming more resources or which nodes have the highest resource usage, we need a way to aggregate the pod-level metrics. This is where the Metrics Server comes into the picture.

How to install Metrics Server on Kubernetes cluster

Kubernetes does not provide a metrics aggregator by default. We need to install the Metrics Server, which collects container-level metrics such as CPU and RAM usage. These metrics can then be used by other Kubernetes APIs and objects like the Horizontal Pod Autoscaler (HPA) or the Vertical Pod Autoscaler (VPA).

We will use a manifest file to install the required components onto our Kubernetes cluster. The manifest file is available in the GitHub repo for the Metrics Server project. Run the following command to apply the manifest:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

This will create the service account, the cluster role and cluster role binding for RBAC, the service that exposes the Metrics Server, and so on. Verify the installation by listing the pods in the kube-system namespace.
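For example:

kubectl -n kube-system get pods

Look for the pod whose name starts with metrics-server.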


We can see that the metrics-server pod is running but the container is not ready. Use the kubectl describe command to identify the reason.

kubectl -n kube-system describe pod metrics-server-<<dynamic-name>>


In the events section, we can see a warning which says that the readiness probe failed. This happens because the pod is unable to establish secure communication with the kubelets on the nodes. We need to override some defaults to fix this issue.

Override the metrics server command

We will fix this by editing the metrics-server deployment. Use the following command to open it for editing:

kubectl -n kube-system edit deploy metrics-server

Scroll down to the section where the container settings are defined and add the following lines just before the image field.

command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP

Save the changes. The deployment will terminate the old pod and create a new one for the Metrics Server. After making the above changes, the Metrics Server runs successfully on our Kubernetes cluster.
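As an alternative to editing the deployment interactively, the flag can also be appended with a JSON patch. This is only a sketch: it assumes the container's flags are defined under args, as they are in recent versions of components.yaml, and it adds just the insecure-TLS flag; the preferred-address-types flag can be appended in the same way if needed.

kubectl -n kube-system patch deployment metrics-server --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'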

Verify Metrics Server functionality

Run kubectl top with the nodes and pods subcommands to see resource usage at the node and pod level respectively. For multi-container pods, we can also examine resource usage at the individual container level by passing the --containers flag to kubectl top pods. The screenshots below show the output of these commands.
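For quick reference, the commands described above are:

kubectl top nodes

kubectl top pods --all-namespaces

kubectl top pods --containers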




YouTube video

All the steps mentioned here are demonstrated in the YouTube video.


Conclusion

Metrics Server aggregates CPU and RAM metrics across the nodes of a multi-node Kubernetes cluster. The default settings need to be overridden to allow insecure traffic between the kubelet and the Metrics Server. Hope you found this post useful.

Until next time, Code with Passion, Strive for Excellence


How to bootstrap multi node Kubernetes cluster on Azure using Kubeadm

Background

In an earlier post, I shared my experience with preparing for and passing the Certified Kubernetes Administrator (CKA) exam. Currently, I am preparing for the Certified Kubernetes Application Developer (CKAD) exam. To prepare for this certification, I need a Kubernetes cluster where I can practice. I decided to create a 3-node Kubernetes cluster on Microsoft Azure using kubeadm. This post is a step-by-step walkthrough of creating a Kubernetes cluster with version 1.22 on Azure.

Bootstrap k8s cluster on Azure using kubeadm

As of this writing, the CKAD certification is based on version 1.22 of Kubernetes, while the latest stable version is 1.23. So I need a way to provision the cluster with the earlier version rather than the latest one. That is why I decided to provision Ubuntu VMs on Microsoft Azure and bootstrap the Kubernetes cluster with version 1.22.

Here are the high level steps required to set up a brand new cluster on Azure:

  • Provision 3 Ubuntu VMs
  • Setup Docker, kubeadm, kubectl and Kubelet on each node
  • Install Control plane components on the master node 
  • Setup pod network
  • Join worker nodes to the cluster

Provision Ubuntu VMs on Azure

There are multiple ways to provision VMs on Azure: the portal, the Azure CLI, ARM templates and so on. I prefer to use Azure CLI commands. Over the years I have also used a fair bit of PowerShell to provision Azure resources, so in this case I created a small PowerShell script that provisions the 3 Ubuntu VMs on Azure with public IPs. You can find the script in the GitHub repository, along with an accompanying markdown file that walks through the different steps to set up the cluster. The script is parameterized and the following parameters can be overridden while executing it (an illustrative sketch of the underlying commands follows the list):

  • Subscription name 
  • Resource group name
  • Resource group location
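The script in the GitHub repository is the authoritative version, so I won't reproduce it here. Purely as an illustrative sketch (the resource group name, region, VM name, size and username below are placeholders, not the values used in the script), provisioning one of the VMs with the Azure CLI boils down to commands like these, repeated for the master and the two workers:

az group create --name k8s-rg --location southeastasia

az vm create \
  --resource-group k8s-rg \
  --name master \
  --image UbuntuLTS \
  --size Standard_B2s \
  --admin-username kubeadmin \
  --generate-ssh-keys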

Setup Docker, kubeadm, kubectl and Kubelet on each node

Once the VMs are provisioned, we need to set up Docker, kubeadm, kubectl and kubelet on each of the nodes. Using the RSA key provided during VM provisioning, I SSH into each node and set up these prerequisites. Here is an example of how to log in to the master node:

ssh -i ~/.ssh/id_rsa <admin-user>@<master-node-public-ip>

Replace the host with the public IP or hostname of worker-1 and worker-2 to log in to those VMs respectively.

Install Docker and enable the daemon so that Docker is restarted if the VM is rebooted.

sudo apt install docker.io -y

sudo systemctl enable docker

The current versions of kubeadm, kubectl and kubelet are 1.23, but we need 1.22. We pin the version in the install command as follows:

sudo apt install kubeadm=1.22.0-00 kubectl=1.22.0-00 kubelet=1.22.0-00 -y
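Note that kubeadm, kubectl and kubelet come from the Kubernetes apt repository, which is not configured on a stock Ubuntu image. If apt cannot find the packages, add the repository first. At the time of this setup the Kubernetes docs used roughly the following steps (the package repository has since moved to pkgs.k8s.io, so double-check the current docs):

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

It is also common to run sudo apt-mark hold kubeadm kubectl kubelet after installing, so that a routine apt upgrade does not move the cluster components to a newer version unintentionally.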

With these prerequisites installed, we are ready to install the Kubernetes cluster.

Install Control plane components on the master node 

Before we initialize the control plane components with the kubeadm init command, we need to make some modifications to the Docker and kubelet settings. This is to set systemd as the cgroup driver. The cgroup driver has to match between the container runtime and the kubelet. Refer to the Kubernetes documentation for more details.

We override the Docker daemon configuration as shown below

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
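After writing the new daemon.json, reload systemd and restart Docker so that the cgroup driver change takes effect. This is an easy step to miss:

sudo systemctl daemon-reload

sudo systemctl restart docker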

Both the daemon.json override and the restart need to be done on all the nodes. On the master node, we also override the configuration for the kubelet. You can find the kubeadm config yaml file in the repo. We pass the file as the configuration to the kubeadm init command as follows:

sudo kubeadm init --config kubeadm-config.yaml
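The kubeadm config file in the repository is the source of truth here. Purely for reference, a minimal version of such a file, based on the example in the Kubernetes documentation for configuring the kubelet cgroup driver, looks like this (the actual file in the repo may contain additional settings):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd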

With these overrides, we should have the master node configured.
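Once kubeadm init completes, it prints the commands to copy the admin kubeconfig so that kubectl can be used as a regular user on the master node. They are the standard ones:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config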

Setup pod network

Kubernetes does not come with a built-in pod network by default. We need to set up a Container Network Interface (CNI) based pod network so that pods can communicate with one another. CoreDNS will not start until this step is completed. We will use Weave Net from Weaveworks.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Make sure that the master node is in the Ready state after applying the above manifest.
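A quick way to verify is to list the nodes and check that the CoreDNS pods have moved to the Running state:

kubectl get nodes

kubectl -n kube-system get pods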

Join worker nodes to the cluster

The steps for installing Docker, kubeadm, kubectl and kubelet are exactly the same on the worker nodes. The difference lies in the kubelet configuration override: it is not needed on the workers. However, we still need to override the Docker daemon configuration on every node.

The last step is to join the worker nodes to the cluster. When the control plane is ready on the master node, kubeadm prints the join command along with the steps to copy the kubeconfig file. Run the join command on each worker node and verify that all 3 nodes are shown in the Ready state.

sudo kubeadm join 172.0.0.4:6443 --token pyyfr7.j97w0dcdpsajkgjl \
        --discovery-token-ca-cert-hash sha256:e096df8d88421a8e1f1fae731e424deb21a8e055e80b0168f86f3661797fbe88

Note that the values will be different in your environment for the IP, token and the ca cert hash.
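If you lose the join command or the token expires (tokens are valid for 24 hours by default), a fresh one can be generated on the master node:

sudo kubeadm token create --print-join-command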

YouTube video

All the steps mentioned above are demonstrated in the YouTube video, along with additional details like setting up the bash profile and testing the cluster with a test deployment. Watch the video to see it in action.

 

Conclusion

The Kubernetes 1.22 version requires some overrides to the Docker and Kubelet configuration to bootstrap using Kubeadm. The steps demonstrated here can be used on other cloud providers to set up a similar cluster once you have the Ubuntu based VMs ready. Hope this helps in your journey to learn more about Kubernetes.

Until next time, Code with Passion and Strive for Excellence.

How to prepare for CKA certification

Background

A few days back, I cleared the Certified Kubernetes Administrator (CKA) certification. I managed to clear the exam on my second attempt. It is the best online certification exam I have taken so far. This post is about my experience with the exam and how to prepare for it. I also share some information that might be helpful during the exam itself.

Exam Curriculum

The exam is completely hands-on; there are no multiple choice questions. We are given a set of 6 different Kubernetes clusters. Each cluster has a different configuration and the questions are related to administering those Kubernetes environments. Five broad areas are assessed, with the following weights:

  • Storage 10%
  • Troubleshooting 30%
  • Workloads & Scheduling 15%
  • Cluster Architecture, Installation & Configuration 25%
  • Services & Networking 20%

The exam is conducted by Linux Foundation in collaboration with Cloud Native Computing Foundation (CNCF). You can find out more about the exam on the Linux Foundation training website.


The duration of the test is 2 hours with 66% as the minimum passing score. The certificate is valid for 3 years. In case you are not able to clear it on the first attempt, you get 1 free retake.

References for exam preparation

I like to use multiple resources while preparing for any certification, and CKA was no different. Here is a quick list of the references I used.

Kubernetes docs

We are allowed to refer to the Kubernetes docs during the exam. It is like an open book test. The following links are helpful for getting started with Kubernetes and understanding the different concepts.

  • Kubernetes docs - https://kubernetes.io/docs/home/
  • Kubernetes.io tasks - https://kubernetes.io/docs/tasks/
  • kubectl cheatsheet - https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Learning course

Along with the Kubernetes docs, I supplemented my preparation by registering for an eLearning course. There are many courses available online; almost every eLearning platform has one or more courses related to CKA exam preparation, and some also offer mock tests. I took the CKA exam plus Kubernetes Fundamentals bundle from the Linux Foundation.



The Certified Kubernetes Administrator (CKA) with Practice Tests course on Udemy by Mumshad Mannambeth is also one of the most popular options.



Github Repos

Here is a list of GitHub repositories I found useful:

  • Shubham Tatvamasi CKA - https://github.com/ShubhamTatvamasi/cka
  • Walid Shaari Kubernetes Certified Administrator - https://github.com/walidshaari/Kubernetes-Certified-Administrator
  • Nilesh Gule CKA prep - https://github.com/NileshGule/cka-prep
  • Denny Zhang kubectl cheatsheet - https://github.com/dennyzhang/cheatsheet-kubernetes-A4
  • Ahmet Alp Balkan Kubernetes network policy recipes - https://github.com/ahmetb/kubernetes-network-policy-recipes

Blogs

These blog posts provide useful tips for the preparation as well as the exam itself:

  • Be fast with kubectl - https://faun.pub/be-fast-with-kubectl-1-18-ckad-cka-31be00acc443
  • How to nail Kubernetes certification exams - https://www.infoworld.com/article/3631108/how-to-nail-the-kubernetes-certification-exams.html
  • Codefresh Kubernetes cheatsheet - https://codefresh.io/kubernetes-tutorial/kubernetes-cheat-sheet/
  • CKA practical challenge series - https://levelup.gitconnected.com/kubernetes-cka-example-questions-practical-challenge-86318d85b4d
  • Rudi Martinsen CKA resources - https://rudimartinsen.com/cka-resources/

Tips 

Watch out for discounts

You will find discounts during Black Friday, Cyber Monday, Christmas, New Year, Diwali, Eid etc. Make use of these discounts on learning courses as well as the actual test.

Learn Imperative commands

Due to the 2-hour time limit, you need to be quick in completing the tasks. While you can copy and paste YAML from the Kubernetes documentation, editing it takes time. The test has specific requirements, like naming Kubernetes objects with specific names and deploying them to a specific namespace, whereas the examples in the Kubernetes docs usually deploy objects to the default namespace. It will save you quite some time if you know how to work with imperative commands. Their output can be redirected to YAML files, which you then modify instead of handcrafting the complete YAML from scratch.
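For example, instead of hand-writing manifests, generate them imperatively with --dry-run and then tweak the result (the object names and namespace below are just placeholders):

kubectl create deployment web --image=nginx --replicas=2 --dry-run=client -o yaml > deploy.yaml

kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml > pod.yaml

kubectl expose deployment web --port=80 --target-port=8080 -n apps --dry-run=client -o yaml > svc.yaml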

Learn basic vi or nano editor commands

The test environment is Linux based. If you are coming from a Windows background like me, you need to be familiar with the vi or nano editors: editing files, deleting lines, navigating to a specific line, replacing a word, saving files and so on.
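A handful of vi commands cover most of what the exam needs. This is a minimal cheat sheet, not an exhaustive list:

dd      delete the current line
yy / p  copy the current line / paste it below
:42     jump to line 42
/word   search for "word"
cw      change the word under the cursor
u       undo the last change
:wq     save and quit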

Practice, Practice, practice

Whoever has cleared the CKA, CKAD or CKS exams will tell you that practice is the most important part of the preparation. If you don't know how to administer a Kubernetes cluster, there is no way you are going to clear this test. There can be multiple ways of achieving the same task; you need to find the one which is the most time efficient.

Bookmark specific links

Bookmarking links from the Kubernetes documentation helps you find resources quickly. Instead of bookmarking top-level pages, I recommend bookmarking specific sections to avoid scrolling through a lengthy page looking for information.

Practice exam

If you book a Kubernetes certification exam after June 2021, you get access to the practice exam from Killer.sh. It gives you two attempts at the same test. The environment is available for 36 hours, and answers along with detailed explanations of the steps are provided to help you prepare better for the real test.

Tips during the exam

  • Use an external monitor (recommended)
  • Enable kubectl autocompletion (see the snippet after this list)
  • Create aliases
  • Update vimrc / editor profile
  • Time management
    • Use the copy feature instead of typing
    • Use the notepad feature to make quick notes
    • Flag questions for review
    • Don't spend too much time on a single question
  • Verify the solution after completing each task
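Here is a minimal sketch of the shell setup implied by the first four tips. The autocompletion lines come from the kubectl cheat sheet linked earlier; the alias, the helper variable and the vim settings are common conventions rather than anything mandated by the exam:

source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
export do="--dry-run=client -o yaml"

cat <<'EOF' >> ~/.vimrc
set tabstop=2 shiftwidth=2 expandtab
EOF

With this in place, a command like k create deployment web --image=nginx $do prints the YAML instead of creating the object.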

All the topics mentioned in this post are explained in more detail in the YouTube video.

Conclusion

The CKA certification is meant to gauge the skills of a Kubernetes administrator. If you do not know how to administer a Kubernetes cluster, this certification is not for you, so practice well before taking the exam. There are 17 questions with different weights. Hope this post is helpful for those preparing for the exam in the future.

Until next time, Code with Passion and Strive for Excellence.


How to manage Kubectl plugins using Krew package manager

  

Background

Kubernetes is currently the most popular container orchestration platform, and kubectl is the most popular command-line tool to interact with a Kubernetes cluster. Kubernetes is quite extensible. In this post, we are going to see how to use the kubectl plugin manager, Krew, to extend the functionality of kubectl.

Krew plugin manager

You might have heard about package managers such as Homebrew for Mac, Chocolatey or Winget for Windows, or apt-get for Linux. These package managers help us search for, install, uninstall and upgrade packages or software on our operating system. Plugins or extensions are similar: they extend the functionality of a tool or program. The most common examples are found in browsers and in Integrated Development Environments (IDEs) such as Visual Studio, IntelliJ IDEA and Eclipse.

With that context about plugins and package managers in mind, let's come back to the topic of this post: Krew, a package manager built specifically for kubectl. As a plugin manager for kubectl, Krew performs 3 main activities:

  1. discover kubectl plugins
  2. install them on your machine
  3. keep the plugins up-to-date

As of this writing, there are 151 kubectl plugins that are distributed by krew. Krew is cross-platform and works on Windows, macOS and Linux. Note that there are some plugins that work only on a particular operating system.

In the video, we can see how to install Krew and work with different plugins.

Krew commands

Here are some of the commands related to krew demonstrated in the video:

  • krew install krew - install krew on the system
  • kubectl krew list - list installed plugins
  • kubectl krew search - search available plugins
  • kubectl krew install [plugin name] - install a plugin
  • kubectl krew info [plugin name] - get info about the plugin
  • kubectl krew uninstall [plugin name] - uninstall a plugin
  • kubectl krew update - update the index of plugins
  • kubectl krew upgrade - upgrade plugin version
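For example, a typical session to find and install one of the plugins covered below might look like this:

kubectl krew update
kubectl krew search pod
kubectl krew install pod-lens
kubectl krew info pod-lens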

Useful Kubectl plugins

In the video, these plugins are demonstrated.

pod-lens

Pod-lens plugin shows the hierarchy of different objects related to a pod such as the namespace, container, services that interact with the pods, the associated persistent volume claim (PVC), secrets etc.

The output shows an interactive option to select the pods in a namespace. By default, it shows the summary for the selected pod and once we select an individual pod the plugin lists all the dependencies.
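Like every plugin installed through Krew, pod-lens is invoked as a regular kubectl subcommand; in its simplest form:

kubectl pod-lens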


pod-dive

Pod-dive gives a more minimal view of a pod. Compared to pod-lens, it shows where the pod sits in the hierarchy along with its siblings, which are the pods running on the same node. This can be quite handy to know which other pods are running on the node where a specific pod is currently scheduled.


pod-inspect

Pod inspect is somewhat similar to pod-dive. Instead of showing the hierarchy, it depicts the information in a tabular format.


images

I particularly like the images plugin. It gives information about the container images in the Kubernetes cluster. This can be quite handy for finding out which images are running in the cluster and where those images come from. There is also a quick summary at the top of the output which lists the number of namespaces, pods, containers and different image versions.


rbac-view

The rbac-view plugin gives details about the ClusterRoles and Roles.





Conclusion

Krew is quite handy when it comes to extending the functionality of kubectl using plugins. It makes searching for, installing and managing plugins much easier; without Krew, it would be difficult to know which plugins are available and how to keep them updated. I demonstrated only a handful of plugins here. If you have used any other kubectl plugin and find it useful, please recommend it in the comments, either on the YouTube video or on this blog post. I would love to hear feedback from viewers and readers about Krew and the different kubectl plugins.

Until next time, Code with Passion and Strive for Excellence.


Improve productivity while working with Kubernetes using the kubectl power tools kubectx and kubens

 

Background

If you are a DevOps engineer or a Site Reliability Engineer (SRE), you are most probably working with multiple clusters. You might also be working with multiple namespaces across these clusters. Switching between contexts and namespaces with the kubectl command-line tool can be quite time consuming.

kubectl power tools

Kubernetes is quite popular nowadays and it is very common for people to work with Kubernetes clusters using kubectl as the command-line interface. Most people prefer to work with kubectl instead of GUI-based tools.

SRE and DevOps engineers, as well as developers, often work with multiple clusters, for example Dev, Staging/UAT and Production environments, and need to switch context between them. In many enterprises, Kubernetes clusters are shared by different teams, and namespaces are used to isolate their workloads. You then need to provide the namespace every time a command is executed.

In such scenarios, switching between namespaces on the same cluster, or switching the whole context to point to a different cluster, can be a cumbersome task. That is where two kubectl power tools come in: kubectx and kubens.
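A quick sketch of how the two tools are typically used (the context name below is illustrative):

kubectx                 # list all contexts from the kubeconfig
kubectx prod-cluster    # switch to the "prod-cluster" context
kubectx -               # jump back to the previous context
kubens                  # list the namespaces in the current context
kubens kube-system      # make kube-system the default namespace
kubens -                # jump back to the previous namespace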

In the video, we can see how to install kubectx and kubens along with Fuzzy Search (FZF) tool for interactive mode.

Conclusion

Using kubectx and kubens helps improve productivity while working with kubectl. We no longer need to remember lengthy context names or the name of each and every namespace in the cluster. Get more productive by using these power tools.

Until next time, Code with Passion and Strive for Excellence.
