How to manage Kubectl plugins using Krew package manager

  

Background

Currently, Kubernetes is the most popular container orchestration platform, and kubectl is the standard command-line tool for interacting with a Kubernetes cluster. Kubernetes is quite extensible. In this post, we are going to see how to use Krew, the kubectl plugin manager, to extend the functionality of kubectl.

Krew plugin manager

You might have heard about package managers such as Homebrew for Mac, Chocolatey or Winget for Windows, or apt-get for Linux. These package managers help us to search for, install, uninstall and upgrade software on our operating system. Plugins or extensions play a similar role for a single tool or program: they extend its functionality. The most common examples of plugins or extensions are found in browsers and in Integrated Development Environments (IDEs) such as Visual Studio, IntelliJ IDEA and Eclipse.

With that context about package managers and plugins, let us come back to the topic of this blog post: Krew, a package manager built specifically for kubectl plugins. As a plugin manager for kubectl, Krew performs 3 main activities:

  1. discover kubectl plugins
  2. install them on your machine
  3. keep the plugins up-to-date

As of this writing, there are 151 kubectl plugins distributed via Krew. Krew is cross-platform and works on Windows, macOS and Linux, although some plugins work only on a particular operating system.

In the video, we can see how to install Krew and work with different plugins.
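For macOS and Linux users with Homebrew available, a minimal installation sketch looks like this (Krew can also be installed with the script from its documentation; the PATH entry below is the default location where Krew places plugin binaries):

  # install krew itself
  brew install krew

  # make krew-managed plugins visible to kubectl
  export PATH="${HOME}/.krew/bin:${PATH}"

  # verify that kubectl can see the plugin manager
  kubectl krew version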

Krew commands

Here are some of the krew commands demonstrated in the video:
  • krew install krew - install krew itself as a kubectl plugin (run using the downloaded krew binary)
  • kubectl krew list - list installed plugins
  • kubectl krew search - search available plugins
  • kubectl krew install [plugin name] - install a plugin
  • kubectl krew info [plugin name] - get info about the plugin
  • kubectl krew uninstall [plugin name] - uninstall a plugin
  • kubectl krew update - update the index of plugins
  • kubectl krew upgrade - upgrade plugin version
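
Putting these together, a typical Krew session looks roughly like this (the tree plugin and the deployment name are just examples):

  # refresh the local copy of the plugin index
  kubectl krew update

  # find and inspect a plugin, then install it
  kubectl krew search tree
  kubectl krew info tree
  kubectl krew install tree

  # installed plugins run as regular kubectl subcommands
  kubectl tree deployment my-deployment

  # keep installed plugins up to date
  kubectl krew upgrade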

Useful Kubectl plugins

In the video, these plugins are demonstrated.

pod-lens

The pod-lens plugin shows the hierarchy of objects related to a pod, such as the namespace, containers, the services that interact with the pod, the associated persistent volume claims (PVC), secrets etc.

The output starts with an interactive prompt to select a pod in a namespace. By default it shows a summary for the selected pod, and once we select an individual pod the plugin lists all of its dependencies.
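
Assuming the plugin is published in the Krew index under the name pod-lens, trying it out is as simple as:

  kubectl krew install pod-lens
  kubectl pod-lens    # pick the namespace and pod interactively from the prompt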


pod-dive

The pod-dive plugin gives a more minimal view of a pod. Compared to pod-lens, it shows a hierarchical view of the pod along with its siblings, i.e. the pods running on the same node. This can be quite handy for finding out what other pods are running on the node where a specific pod is currently scheduled.
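
A minimal sketch of using it, with a made-up pod name (see the plugin's help output for namespace and other options):

  kubectl krew install pod-dive
  kubectl pod-dive my-api-pod-7d9f8    # shows the node, the pod hierarchy and its siblings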


pod-inspect

The pod-inspect plugin is somewhat similar to pod-dive. Instead of showing a hierarchy, it presents the information in a tabular format.


images

I particularly like the images plugin. It gives information about the container images used in the Kubernetes cluster. This can be quite handy for finding out which images are running in the cluster and where those images come from. There is also a quick summary at the top of the output listing the number of namespaces, pods, containers and distinct image versions.
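
Again, assuming the plugin name in the Krew index is simply images, the whole flow is:

  kubectl krew install images
  kubectl images    # summary plus the list of container images in use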


rbac-view

The rbac-view plugin gives details about the ClusterRoles and Roles.
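
As a rough sketch (per the plugin's documentation, rbac-view serves its output as a local web page rather than printing to the terminal):

  kubectl krew install rbac-view
  kubectl rbac-view    # starts a local web server; open the printed URL in a browser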





Conclusion

Krew is quite handy when it comes to extending the functionality of kubectl using plugins. It makes searching for, installing and managing plugins much easier. Without Krew, it would be difficult to know what plugins are available and how to keep them updated. I demonstrated only a handful of plugins here. If you have used any other kubectl plugin and find it useful, please recommend it in the comments, either on the YouTube video or on this blog post. I would love to hear feedback from viewers/readers about Krew and the different kubectl plugins.

Until next time, Code with Passion and Strive for Excellence.


Improve productivity while working with Kubernetes using the kubectl power tools kubectx and kubens

 

Background

If you are a DevOps engineer or a Site Reliability Engineer (SRE), you are most probably working with multiple clusters, and quite likely with multiple namespaces across those clusters. It can be quite time consuming to switch between contexts and namespaces while working with the kubectl command-line tool.

kubectl power tools

Kubernetes is quite popular nowadays and it is very common for people to work with Kubernetes clusters using kubectl as the command-line interface. Many people prefer working with kubectl over GUI-based tools.

SRE and DevOps engineers, as well as developers, often work with multiple clusters, for example Dev, Staging/UAT and Production environments, and need to switch context between them. In many enterprises, Kubernetes clusters are shared by different teams, and namespaces are used to isolate workloads. In that case you need to provide the namespace every time a command is executed.

In such scenarios, switching between namespaces on the same cluster, and switching the whole context to point to a different cluster, can be cumbersome. That is where two power tools for kubectl come in: kubectx and kubens.

In the video, we can see how to install kubectx and kubens, along with the fzf fuzzy finder for interactive mode.
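
Once the tools are installed (for example via brew install kubectx, which provides both commands, or via the Krew plugins ctx and ns), a typical workflow looks like this; the context and namespace names below are made up:

  # list all contexts and switch to the production cluster
  kubectx
  kubectx prod-cluster

  # jump back to the previous context
  kubectx -

  # list namespaces and make team-a the default for subsequent commands
  kubens
  kubens team-a

  # jump back to the previous namespace
  kubens -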

Conclusion

Using kubectx and kubens helps to improve productivity while working with kubectl. We no longer need to remember lengthy context names or the name of each and every namespace in the cluster. Get more productive by using these power tools.

Until next time, Code with Passion and Strive for Excellence.


DP-201 Designing an Azure Data Solution Exam Preparation Guide

 Background

As part of the Azure role-based certifications, the DP-201 Designing an Azure Data Solution exam is the second of the two exams required for the Azure Data Engineer Associate certification. I had cleared the DP-200 exam earlier. Yesterday, I cleared DP-201 and achieved the Associate certification.

DP-201 Designing an Azure Data Solution Exam

This exam focusses on assessing the candidate in the following areas:

  • Design Azure Data Solutions (40-45%)
  • Design Data Processing Solutions (25-30%)
  • Design for Data Security and compliance (25-30%)

The primary skills are tested against the following core Azure services:

  • Azure Cosmos DB
  • Azure Synapse Analytics
  • Azure Data Lake Storage
  • Azure Data Factory
  • Azure Stream Analytics
  • Azure Databricks
  • Azure Blob Storage

Note: There are changes to the contents of the exam as of 31st July 2020. One notable exclusion from the list of services is Azure SQL Database.

The test focusses on assessing the design skills of the candidate. It is less about writing code and more about making the right design choices when selecting services. Design options related to batch processing and stream processing are important. We also need to understand the different options related to disaster recovery and high availability.

All the questions are multiple-choice questions (MCQ). There are 2 case studies at the beginning of the test consisting of 9 questions. We can go back and forth between the questions within each case study and revise the answers. Once we mark a case study as completed, we cannot go back and change the answers. The two case studies are followed by 30 questions related to the different services. Finally, there is one more case study with 3 questions towards the end. The trick here is that once a question is answered, we cannot go back, unlike in the 2 earlier case studies.

Refer to the YouTube video below for more details about how I prepared for the test.


Visual note taking

During my preparation, I also started experimenting with a visual note-taking approach. Instead of taking notes in plain text, I started to make them more visual. Here are some examples of this approach.





I have published these visual notes to a GitHub repository in PDF as well as OneNote format. I hope people will find them useful in their preparations.

Conclusion

I found this test easier compared to DP-200. The focus of this test is on design skills, and it is important to understand the differences between the various options available with each service. I hope you find this useful.

Until next time, Code with Passion and Strive for Excellence.

Hacktoberfest DevOps with GitHub Actions

 Background

In the month of October, DigitalOcean, in partnership with Intel and DEV, is celebrating the Hacktoberfest event across the globe. This is an annual event which aims to increase awareness of open source in communities all over the world, with meetups and events scheduled everywhere. If you are an open-source contributor, you can get a free t-shirt by registering for the event and submitting at least 4 pull requests during the month of October. You can find out more about the event on the website.

DevOps with GitHub Actions

The Hacktoberfest Singapore meetup was scheduled for Saturday 10 October as a half-day event. I had the opportunity to present a topic on building DevOps pipelines with GitHub Actions. GitHub Actions allows us to automate workflows when certain events are triggered on our repository.

In this session, we demonstrated 3 different scenarios

  • A simple workflow for linting the codebase with the GitHub Super Linter (see the sketch after this list)
  • A workflow involving a third-party integration with SonarCloud for static code analysis
  • A CI/CD workflow for deploying a containerized app with Azure Container Registry (ACR) and Azure Kubernetes Service (AKS)
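
The workflows themselves live as YAML files in the repository. As a quick way to see what the Super Linter checks before wiring it into a workflow, it can also be run locally through Docker; this is a rough sketch based on the Super Linter documentation at the time, so the image name and variables may have changed since:

  # lint the code in the current directory using the GitHub Super Linter image
  docker run -e RUN_LOCAL=true -v "$(pwd)":/tmp/lint github/super-linter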

The recording of the session is available on YouTube


Slides

The slides used during the session are available online

Slideshare - https://www.slideshare.net/nileshgule/devops-with-github-actions

Speakerdeck - https://speakerdeck.com/nileshgule/devops-with-github-actions


Source code

The source code of the demo used during the session is available in the GitHub repository.

Conclusion

GitHub Actions provides an excellent option for automating workflows that run specific tasks when an event such as a code push or a release is triggered on the repository. The GitHub Marketplace gives third-party vendors an opportunity to provide actions for their products so that they can be integrated into automated workflows on repositories.

Until next time, Code with Passion and Strive for Excellence.

Scaling .Net containers with Azure Kubernetes Service and Serverless ACI

 Background

Following my virtual session for the Microsoft Cloud South Florida user group last week, I was invited to deliver a similar virtual session, this time for the Dear Azure user group in India. I am thankful to Kasam Shaikh, the organizer of this event. The event was scheduled for Sunday afternoon and, given the relaxed time of 12:30 PM IST, we decided to have a 2-hour session. This was the first time I presented to the vibrant community in India.

Scaling .Net Containers with AKS and Serverless ACI

The theme of the session was similar to the last one; the main difference was the duration. We scheduled it for 2 hours, which gave me an opportunity to do a deep dive into some of the topics. We started off by looking at the components of the application, consisting of a Web API acting as the producer for RabbitMQ. Then we looked at the consumer, which is built as a .Net Core executable. We went through the steps of building Docker images using Docker Compose and also looked at the benefits of Docker.

Next, we looked at private and public container registries. Kubernetes was the next logical step, and we started by looking at its main features.
RabbitMQ and KEDA were installed on the Kubernetes cluster and the demo application was deployed using Kubernetes manifest files. In the closing stages of the demo, we looked at different options for scaling workloads on a Kubernetes cluster. These included manual scaling and Horizontal Pod Autoscaling (HPA) based on CPU or RAM utilization. There are cases where we need to scale not just on resource usage but on some external factor. This is where we explored Kubernetes-based Event Driven Autoscaling (KEDA), which allows us to scale based on events.
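
To give a flavour of the moving parts, here is a rough sketch of those steps; the Helm repositories are the ones documented by the respective projects, while the release and deployment names are placeholders:

  # install RabbitMQ and KEDA on the cluster using Helm
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm repo add kedacore https://kedacore.github.io/charts
  helm repo update
  helm install rabbitmq bitnami/rabbitmq
  helm install keda kedacore/keda --namespace keda --create-namespace

  # manual scaling of the consumer deployment
  kubectl scale deployment rabbitmq-consumer --replicas=5

  # resource-based autoscaling with HPA
  kubectl autoscale deployment rabbitmq-consumer --min=1 --max=10 --cpu-percent=80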

Slight glitch

Usually, due to time constraints, I prepare the demo environment beforehand and show only the relevant bits during the live demo. This time, since we had 2 hours at our disposal, I created the AKS clusters right at the start of the session. Most of the things I wanted to show worked fine, except for the scenario of scaling onto serverless Azure Container Instances (ACI). My backup cluster also had a problem. Lesson learnt: the next time I do a live cluster setup, the backup cluster needs to be tested thoroughly. I have done a similar demo at least 8-10 times in different forums; maybe I became a bit overconfident that it would work.

YouTube video recording


The recording of this session is available on YouTube


Slides

The slides used during the session are available online
Slideshare - https://www.slideshare.net/nileshgule/scaling-containers-with-aks-and-aci

Conclusion

It was a wonderful experience to present to the vibrant developer community in India. The questions asked during the session prompted me to make changes to my demo which will be helpful for future sessions.

Until next time, Code with Passion and Strive for Excellence.

Automate SonarCloud code scans using GitHub Actions

 Background

In an earlier post / YouTube video, I demonstrated how to automate the code linting process using the GitHub Super Linter and GitHub Actions. In this post, we will explore how to use GitHub Actions to automate static code analysis using SonarCloud.

SonarCloud

You might have heard about SonarQube, which offers scanners for different programming languages. SonarCloud is the cloud-based service which scans codebases for bugs, vulnerabilities and code smells. At the time of this writing, 24 mainstream programming languages are supported, including:

  • C#
  • Java
  • Python
  • JavaScript
  • TypeScript
  • Go
  • Kotlin and others

SonarCloud provides detailed analysis across multiple dimensions of the code. These are helpful in identifying common mistakes made by developers and ensuring that the code is of high quality. SonarCloud also gives an indication of how much time is required to fix all the reported issues and remove the technical debt. The different dimensions are:
  • Reliability
  • Security
  • Maintainability
  • Coverage
  • Duplications

SonarCloud also has Quality Gates and Quality Profiles. Quality Profiles can be used to refine the rules which are applied to the code while scanning the files, while a Quality Gate defines the conditions the analysis results must meet for the code to be considered releasable.

Automate code scan with GitHub action

In the video, we can see how to automate the code scan using the SonarCloud GitHub Action.
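
The GitHub Action essentially wraps the Sonar scanner. For a .NET project, the same kind of scan can also be run by hand with the SonarScanner for .NET, which is a useful way to understand what the automated workflow is doing; in this sketch the project key, organization and token are placeholders:

  # one-time install of the SonarScanner for .NET as a global tool
  dotnet tool install --global dotnet-sonarscanner

  # begin the analysis, build the code, then end the analysis and push the results to SonarCloud
  dotnet sonarscanner begin /k:"my-project-key" /o:"my-org" /d:sonar.host.url="https://sonarcloud.io" /d:sonar.login="$SONAR_TOKEN"
  dotnet build
  dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN"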


Conclusion

SonarCloud offers very good insight into a codebase by performing static code analysis. The ruleset can be customized per language and also based on organizational policies. GitHub Actions makes it very easy to automate workflows. Combining the power of GitHub Actions and SonarCloud, we get up-to-date insights about our code in an automated manner. I hope you found this post useful.

Until next time, Code with Passion and Strive for Excellence.

Scaling .Net Core containers with Event Driven Workloads

Background 

Due to the COVID-19 pandemic, many developer communities and user groups have been forced to conduct their regular sessions virtually. This has provided a great opportunity for organizers to rope in speakers from different parts of the world, and for speakers to present at community events across the globe, something that might not have been possible with physical events.

I have been speaking at local community events in Singapore as well as other parts of Asia for the past 3-4 years. Recently, I got the opportunity to speak at a virtual meetup on the other side of the globe, for the Microsoft Cloud South Florida user group.

It started off with a tweet from Julie Lerman saying that she was getting multiple requests for speaking opportunities but could not fulfil them all. She suggested that organizers could extend the opportunities to others who might be interested and available to speak. I thought it was a good opportunity and replied to her tweet. The thread got picked up by Dave Noderer and we managed to set up a virtual meetup in no time.

Scaling .Net Core Containers with Event Driven Workloads

I have presented the topic of autoscaling containers using KEDA on multiple occasions in the past, at different meetups and events in Asia. I also have a 3-part series about this on my recently launched YouTube channel. The duration of the meetup was 90 minutes, which gave me an opportunity to do a deep dive into some of the topics that are not possible to cover in a 45-minute or 1-hour session.

The application I used in the demo is a dummy events management application called Tech Talks. There is an ASP.Net Core Web API which exposes a method to generate random events. These events are pumped into a RabbitMQ queue. We have a .Net Core executable which consumes these messages in batches. It is this consumer that we use to showcase the autoscaling capabilities using an upcoming project called Kubernetes-based Event Driven Autoscaling (KEDA).




During the session, I demonstrated the following features 
  • Containerize the .Net Core Web API and executable using a Dockerfile
  • Build and publish Docker images to a private container registry (Azure Container Registry)
  • Use Docker Compose to build multiple services
  • Use YAML files to describe Kubernetes deployments
  • Provision an AKS cluster using an idempotent PowerShell script
  • Deploy a RabbitMQ cluster using Helm charts
  • Deploy application containers to Kubernetes
  • Autoscale the RabbitMQ consumer using KEDA
  • Extend the scaling capabilities to serverless Azure Container Instances (ACI) using the Virtual Node

By the end of the session, we had extended the containers to be autoscaled onto serverless Azure Container Instances (ACI) using the Virtual Node.
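
As an illustration of a couple of those steps, here is a rough sketch; the registry, image and manifest folder names are placeholders:

  # build the consumer image in Azure Container Registry and push it there
  az acr build --registry myacrregistry --image techtalks-consumer:v1 .

  # deploy the application manifests to the AKS cluster
  kubectl apply -f k8s/

  # watch KEDA scale the consumer out as messages arrive in RabbitMQ
  kubectl get hpa -w
  kubectl get pods -o wide    # -o wide shows the node, including pods placed on the ACI virtual node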


YouTube video recording

The recording of this talk is now available on YouTube


Slides

The slides used during the session are available online.

Source code

The source code is available in the GitHub repository.

Conclusion

The session provided me an opportunity to speak across the globe for the first time. I like to attend in-person events as they help a great deal with networking. In a virtual event you sometimes feel like you are talking to a screen, and it is difficult to gauge the reaction of the audience.

One of the benefits of a virtual event is that we can focus more on the content delivery without getting distracted, which can sometimes happen in an in-person event. Depending on which platform or communication tool is used (YouTube live stream, MS Teams, Zoom etc.), the questions and answers can be handled separately. Another great advantage of a virtual event is the ability to record it and share it on platforms like YouTube. People who could not attend due to timezone differences or emergencies can find these recordings useful.

Until next time, Code with Passion and Strive for Excellence.