Dependabot - Update project dependencies automatically

Background

Maintaining project dependencies can be a difficult task, especially for larger projects with many dependencies. It is important to keep track of which dependencies are being used and which versions are compatible with the project. This becomes a challenge as new versions of dependencies are released and conflicts with existing code appear. Resolving these conflicts and updating the dependencies can also be time-consuming, which can impact the overall development timeline of the project. A clear strategy for managing dependencies helps avoid these issues. In this post, we will see how to use Dependabot to automatically update dependencies for a project hosted in a GitHub repository.

What is Dependabot?

Dependabot is a tool that automatically detects and fixes vulnerabilities in a project's dependencies. It works by regularly checking for updates to the dependencies in a project and submitting pull requests with the updated dependencies when they are available. This can help reduce the risk of security vulnerabilities in a project and save developers time by automating the process of updating dependencies. Dependabot supports a variety of languages and package managers, including npm, pip, Maven, and Gradle.

Dependabot supports two modes of operation: security updates and version updates.

Security updates mode: In this mode, Dependabot monitors dependencies for known vulnerabilities and submits pull requests to update to a secure version when a vulnerability is detected. This helps developers ensure that their project is using secure dependencies and reduces the risk of security breaches.

Version updates mode: In this mode, Dependabot checks for new versions of dependencies and submits pull requests to update to the latest version when it is available. This can help developers take advantage of bug fixes and performance improvements in the latest versions of dependencies.

By using Dependabot, developers can automate the process of keeping their dependencies up-to-date and secure, saving time and effort.

Enable Dependabot for GitHub repository

In order to configure Dependabot for a GitHub repository, go to the settings of the repository that needs to be integrated with Dependabot. Under the Code security and analysis section, there are multiple configurations related to Dependabot, as shown below.


We can configure alerts, security updates and version updates separately.

Alternatively, you can also configure Dependabot by adding a .github/dependabot.yml file to your repository and specifying the desired configuration in the file. For more information on this method of configuration, see the Dependabot documentation.

Configure Package ecosystem

In the dependabot.yml file we need to configure the package ecosystem. This tells Dependabot which packages to monitor for dependency updates. Within the same project there may be more than one package manager in use, for example npm for Node.js packages and Maven or Gradle for the Java packages used for backend processing.

In the file, specify the package ecosystems that you want Dependabot to monitor as entries under the updates key, each with its own package-ecosystem value. By configuring the package ecosystems, you specify which package managers Dependabot should monitor for updates, ensuring that Dependabot watches the dependencies that are relevant to your project.
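As a minimal sketch, assuming a repository that keeps npm packages at the root and Maven dependencies in a backend folder (the /backend path and the weekly schedule are purely illustrative), a .github/dependabot.yml could look like this:

version: 2
updates:
  # Watch npm packages declared in the root package.json
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
  # Watch Maven dependencies for the Java backend (illustrative path)
  - package-ecosystem: "maven"
    directory: "/backend"
    schedule:
      interval: "weekly"

Each entry under updates tells Dependabot which package manager to use and where to find the manifest files for it.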

YouTube video demonstrating Dependabot usage

Check out this YouTube video, which explains in detail how to configure Dependabot for a GitHub repository.

Dependabot Pull Request for updated dependencies

When Dependabot detects that an update is available for a dependency in your project, it will create a pull request with the updated dependency. This pull request will contain a summary of the changes being proposed, including the current and new versions of the dependency, and a list of the files that will be modified.

It is up to the repository maintainer to review the pull request and decide whether to merge it. If the pull request is accepted, the updated dependency will be included in the project. If there are any conflicts with existing code, Dependabot will attempt to resolve them automatically, but in some cases, manual resolution may be required.

Creating pull requests for updated dependencies helps ensure that projects are using the most up-to-date and secure versions of their dependencies. It also allows for an easy way for repository maintainers to review and control changes to the dependencies in their project.

The screenshots below show the different details reported in the pull request created by Dependabot for every outdated dependency in the project.

Summary of changes showing Release notes, Changelog, Commits, Compatibility issues, and Commands and options.


Release notes


Changelog


Commits


Commands and Options

Advantages of using Dependabot to automatically update dependencies

There are several advantages to using Dependabot to automatically update dependencies:

Improved security: By regularly updating dependencies, Dependabot helps reduce the risk of security vulnerabilities in a project. It monitors dependencies for known vulnerabilities and submits pull requests to update to a secure version when a vulnerability is detected.

Improved stability: New versions of dependencies can often include bug fixes and performance improvements. By using Dependabot to update dependencies, developers can ensure that their project is using the most stable versions available.

Improved efficiency: Updating dependencies can be a time-consuming task, especially for larger projects with many dependencies. By automating the process with Dependabot, developers can save time and effort and focus on other tasks.

Improved control: Dependabot creates pull requests for updated dependencies, allowing repository maintainers to review and control changes to the dependencies in their project. This can help ensure that updates are thoroughly tested and do not cause conflicts with existing code.

Conclusion

Overall, Dependabot is a useful tool for developers to keep their project dependencies up-to-date and secure. By automating the process of updating dependencies, Dependabot helps developers improve the security, stability, efficiency, and control of their projects. It is a valuable resource for repository maintainers to ensure that their project is using the most recent and secure versions of dependencies and to easily review and control changes to those dependencies. As such, it is a tool that developers should consider using in their projects.

Until next time, Code with Passion and Strive for Excellence.

Scan open-source libraries for free with Mend

Background

Using free and open-source libraries can foster innovation in a number of ways. For one, it allows developers to easily incorporate pre-existing code into their projects, saving them time and effort that they can then redirect towards developing new features and functionality. Additionally, the open-source model promotes collaboration and sharing, which can lead to the creation of more powerful and sophisticated libraries as developers from around the world contribute their skills and knowledge. Finally, open-source libraries often have large and active communities of users and developers, which can provide valuable support, inspiration, and feedback for those looking to push the boundaries of innovation.

There are a few potential risks to be aware of when using open source libraries. One risk is the potential for vulnerabilities or security issues in the library code. It's important to regularly check for and update to the latest versions of the library to minimize this risk. We will see how to use Mend to scan open-source libraries for free. Mend integrates with GitHub, making it very easy for developers to get real-time updates related to their open source dependencies.

Quick overview of Mend 

Mend is an open source library management platform that helps developers identify, track, and secure the open source libraries used in their projects. It provides real-time alerts for any potential security vulnerabilities or license compliance issues, and offers integration with a variety of tools such as GitHub and Azure DevOps. With Mend, developers can easily see the open source libraries they are using and ensure that they are up to date and secure, reducing the risks associated with using open source software. Mend is the new name for WhiteSource, and its free GitHub app is the next version of WhiteSource Bolt; it aims to provide a simple and effective way for developers to manage their open source dependencies.

Integrate Mend with GitHub

Mend helps developers get real-time security alerts and compliance issues on open source dependencies within GitHub. Once Mend is configured for a GitHub repository, it will continuously scan the codebase and identify any open source libraries that are being used. It will then check for any known vulnerabilities or license compliance issues associated with those libraries, and provide alerts to the developers through the GitHub interface.

This allows developers to quickly and easily see any potential issues with the open source dependencies they are using, and take steps to address them. Apart from the notifications within the GitHub interface, Mend can be configured to automatically open issues with suggested fixes for any identified vulnerabilities, streamlining the process of addressing them.

Enable Mend Bolt for GitHub

It is very easy to integrate Mend with GitHub. There is a GitHub application available named Mend Bolt for GitHub. We can configure it for all the repositories under a GitHub organisation or selectively configure Mend for individual repositories.

YouTube video showing Mend configuration and usage

Refer to the YouTube video which explains in detail the different steps required for configuring Mend for a GitHub repository.




Details shown in Mend issue related to vulnerabilities

For each of the vulnerabilities reported by Mend, it provides detailed information with a summary, CVE information, transitive dependencies, and recommended fixes if any. The screenshots below show these different parts of the Mend issue.




How to fix vulnerabilities reported by Mend

If you have received a report of vulnerabilities from Mend, it is important to address them as soon as possible in order to secure your system and prevent potential attacks. The first step is to carefully review the report and understand the nature of the vulnerabilities and the potential risks they pose. Next, you should determine the best course of action to fix the vulnerabilities. It is also a good idea to test the fixes to ensure that they effectively address the vulnerabilities and do not cause any unintended issues. It is important to keep in mind that vulnerabilities can be exploited quickly, so it is crucial to prioritize and address them in a timely manner.

In the YouTube video above, I showed how to fix the reported vulnerability by upgrading the dependency to a higher version containing a fix. This is a manual step. In another video, I took this a step further and used Dependabot, another open source tool that integrates with GitHub, to automatically update the outdated dependencies. Check out this video if you are interested in knowing more about it.

Advantages of using Mend

Mend helps organizations identify and address vulnerabilities in their systems. Some of the advantages of using Mend include:

Improved security: By regularly scanning for vulnerabilities, Mend can help organizations identify and fix potential security weaknesses before they can be exploited by attackers.

Risk management: Mend can help organizations prioritize vulnerabilities based on their severity and the potential risks they pose, allowing them to focus their efforts on the most critical issues first.

Compliance: Many organizations are required to meet certain security standards in order to comply with regulations or industry standards. By using Mend, organizations can ensure that their systems are in compliance with these requirements.

Time and cost savings: Fixing vulnerabilities can be time-consuming and costly, especially if they are discovered after an attack has occurred. By using Mend to proactively identify and address vulnerabilities, organizations can save time and resources that would otherwise be spent on remediation.

Conclusion

Mend can help organizations improve their security posture and mitigate the risk of cyber attacks by identifying and fixing vulnerabilities in a timely and cost-effective manner. Cyber attacks are becoming more sophisticated and frequent, and it is important to have tools in place to protect yourself and your organization from these threats. Mend is a powerful tool that can help you identify and address vulnerabilities in your systems before they can be exploited by attackers. By using Mend, you can improve your security posture, mitigate the risk of cyber attacks, and save time and resources that would otherwise be spent on remediation. Protecting your systems and data is crucial in today's digital world, and Mend is an excellent tool to help you do so.

Until next time, Code with Passion and Strive for Excellence.





Code faster with AI using tabnine

Background

Artificial Intelligence (AI) is getting infused into every sphere of our life. We find AI-powered smartphone cameras, devices, and voice assistants like Cortana, Alexa, Bixby, and various other alternatives. AI and Machine Learning (ML) are used to predict things based on past patterns. In every industry, there are use cases related to AI and ML which can help improve people's lives in one way or another.

One area of keen interest is software developer productivity. There are various studies done across the globe to improve developer productivity. Software developers spend a lot of time writing code and looking for information over the internet. We use code editors such as Notepad++, Sublime and Visual Studio Code, full-blown integrated development environments such as Visual Studio, IntelliJ, PyCharm, GoLand, Eclipse and Rider, database connectivity tools, and many other things to get our work done. This involves context-switching between different tools depending on the type of application and the programming language or backend databases used for the application development.

AI-powered assistant for developers

Many of the code snippets that we write follow a specific pattern. Some common examples are listed below:

  • Query database table and fetch top 100 Orders, sort by highest order value
  • Perform sentiment analysis on a tweet / social media post
  • Make an API call to query some data
  • Write a function to find the average basket size from a given set of carts
These are just a few examples, but you can get a sense of the pattern being used. What varies is the programming language and its syntax. Nowadays it is possible to use Natural Language Processing (NLP) to specify the intent of what the code should do and let AI generate the code.

One such tool I came across is called Tabnine. Its tagline says it is an AI assistant for software developers that helps them write code faster. It has two modes of operation: the first is single-line mode, where it recommends code for a single line; the second completes whole functions.

It integrates with the most common Integrated Development Environments (IDEs) and supports most modern-day programming languages and frameworks. The screenshots below list the IDEs and programming languages that it supports.


Refer to the install section to get the most updated list of IDEs and programming languages. If the language you are using or the IDE you are using is not on the list, most probably you are not working with a modern tech stack :)

My experience with Tabnine

I used Tabnine with my favorite code editor, Visual Studio Code. I used it to generate C# code, document the project using Markdown files, and write PowerShell scripts. It was quite accurate in its suggestions and recommendations, with an accuracy rate of about 75-80%. It saved me quite a lot of time while writing the docs for a project.

Although Markdown and PowerShell are not listed in the supported languages, Tabnine did a fantastic job in terms of providing context-aware recommendations.

I shared my experience of using Tabnine in a YouTube video.

Summary / Conclusion

Personally, I found Tabnine very useful. It avoided context switching when looking for information and helped me generate code faster. Documenting the code or project with Markdown files was a breeze, saving me at least 70% of the typing effort.

I think a tool like Tabnine can be very handy when learning a new programming language, as it can help us quickly understand code syntax and best practices. The machine learning model used by the assistant can be trained on different types of repositories within the organization as well as public repositories, which can help spread best practices.

Hope you find this useful. I would love to hear about your experience.

Until next time, Code with Passion and Strive for Excellence.



Zero downtime deployment with Kubernetes using Rolling update Strategy

Background

Nowadays modern cloud-native applications are developed and deployed at a rapid scale and pace. The legacy techniques used for upgrading applications and services are no longer relevant. We want to give the best user experience to end users. One technique that is gaining popularity and is already mainstream is progressive delivery, which allows us to upgrade the application version progressively, reducing the blast radius. In this post, we will look at how to achieve zero downtime deployment using the Rolling Update strategy with Kubernetes.

Quick overview of the rolling update strategy

The rolling update strategy allows applications with multiple instances or replicas to upgrade from one version to another in a controlled manner. Instead of upgrading all the replicas together, this strategy applies changes in batches, replacing the older version with the new one. As a result, the application can continue to run with a reduced number of instances while the upgrade is in progress.

One of the prerequisites for implementing a rolling update strategy is that the application needs to have multiple replicas or instances running. If there is only a single instance of the application, there will be a slight downtime. Kubernetes supports the rolling update strategy natively; we do not need anything extra to implement it. When we use a Kubernetes Deployment to deploy our pods, the rolling update is the default strategy.

If you have been using Kubernetes deployments, you would have unknowingly used the rolling update strategy when upgrading the application version. 

Use the Rolling update strategy for zero downtime deployment

If we don't override any settings related to rolling updates, by default Kubernetes applies 25% maxSurge and 25% maxUnavailable. This means that during the upgrade, at most 25% extra replicas will be added above the desired count. In the same way, when the old version of a replica is replaced with the new one, a maximum of 25% of the replicas can be unavailable at a time.

By doing this, Kubernetes ensures that at least 75% of the desired replicas are in working condition during the upgrade process and end users are not impacted. If we have exposed the deployment outside of the Kubernetes cluster using a Service, then the load balancer will route the traffic accordingly between the new and older versions.
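As a minimal sketch, here is a Deployment manifest with the rolling update settings spelled out explicitly; the name, image and replica count are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # illustrative name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate        # the default strategy for Deployments
    rollingUpdate:
      maxSurge: 25%            # at most 1 extra pod (25% of 4) is added during the upgrade
      maxUnavailable: 25%      # at most 1 pod (25% of 4) is unavailable at any time
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25    # changing this tag triggers a rolling update

After changing the image tag, re-applying the manifest with kubectl apply -f starts the rollout, and kubectl rollout status deployment/web-app lets us watch it progress.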

YouTube video demo

Refer to the YouTube video to see this in action.


Other alternatives to Rolling Update Strategy

There are other alternatives to the rolling update strategy. Modern-day cloud-native applications support advanced deployment strategies or techniques such as Blue-Green deployment or Canary deployment which also allow zero downtime deployment with progressive delivery. Refer to the links to watch YouTube videos on my channel to understand how these can be applied to your Kubernetes applications.

Conclusion

Gone are the days of bringing down servers and applications to upgrade them. With the cloud, we can easily upgrade applications and services with minimal impact on end users. Use modern techniques such as rolling updates, Blue-Green deployment or Canary deployment to achieve 100% availability for your application.

Until next time, Code with Passion and Strive for Excellence.



Cloud native Fifteen Factor Apps

Background

A few years back Heroku published a set of guidelines for building and deploying applications which became known as the 12 Factor App. These are guiding principles for developing, operating and deploying web apps or software-as-a-service (SaaS). Over time, as cloud adoption increased and microservices became popular, additional factors were added. We also saw changes in the application integration space, with real-time processing gaining popularity. We now have event-driven microservices which empower organisations to leverage data to get more value using event-driven architecture.

15 Factor Apps series

In order to understand the 15 factors better, I am creating this series of YouTube videos. We will cover all 15 factors in different parts. The initial 12 factors are described at 12factor.net; three additional factors extend them to the 15 factors covered here.


Here is a list of 15 Factors which help to make the modern cloud-native applications highly scalable, resilient and portable:

  • Codebase - one codebase tracked in version control, many deploys
  • Dependencies - explicitly declare and isolate dependencies
  • Config - store config in the environment
  • Backing services - treat backing services as attached resources
  • Build, release and run - strictly separate build and run phases
  • Processes - execute the app as one or more stateless processes
  • Port binding - expose services via port binding
  • Concurrency - scale out via the process model
  • Disposability - maximize robustness with fast startup and graceful shutdown
  • Dev/prod parity - keep development, staging and production as similar as possible
  • Logs - treat logs as a stream of events
  • Admin processes - run admin/management tasks as one-off processes
  • API first - define the service contract
  • Telemetry - monitor process performance
  • Authentication - secure applications across hardware, network and software layers

YouTube videos

Below is the list of videos covering each of these factors







You can read Kevin Hoffman's ebook about the 15 factors, Beyond the Twelve-Factor App (oreilly.com).

Until next time, Code with Passion and Strive for Excellence.



Kubernetes Blue-Green Deployment


Blue-Green deployment is a strategy to achieve zero-downtime deployment. With cloud-native applications, it is common practice to minimize downtime when upgrading the application version. Due to the ease with which we can provision new infrastructure on the cloud, it becomes easier to adopt new ways of improving deployments.

Blue Green deployment definition

In this post, we will look at how to use Blue-Green deployment with Kubernetes. One of the best places to look for a definition is the glossary of terms provided by the Cloud Native Computing Foundation (CNCF). The CNCF defines Blue-Green deployment as:

"A strategy for updating running computer systems with minimal downtime. The operator maintains two environments, dubbed “blue” and “green”. One serves production traffic (the version all users are currently using), whilst the other is updated. Once testing has concluded on the non-active (green) environment, production traffic is switched over (often via the use of a load balancer)."

The main points here are the two identical environments, named blue and green, and the traffic switch using a load balancer. Let's see how this can be applied to a Kubernetes environment.

How to perform Kubernetes Blue Green Deployment

We start by creating two versions of the application, say V1, which is deployed to the live environment. This forms our Blue environment. To keep things simple, I have created a simple ASP.NET Core application. It is packaged into a Docker image with two tags. The image with the blue tag shows a blue background and will be used to deploy to the Blue environment. The image with the green tag will be used to deploy to the Green environment. The two Docker container images are published to Docker Hub as:

  • nileshgule/blue-green-demo:blue
  • nileshgule/blue-green-demo:green

Step 1: Set up the Blue environment

Once the container images are pushed to the container registry, we can use a Kubernetes Deployment to deploy to the Kubernetes cluster. We create a blue-deployment using the image tagged as blue. The Kubernetes Deployment will internally create a ReplicaSet and the related pods. In the pod template of the deployment we set the label app: blue.

The deployment is exposed outside the Kubernetes cluster using a Kubernetes Service of type LoadBalancer. The service uses a selector with the app: blue value.

Browse to the public IP of the LoadBalancer service and we will be presented with the web page with the blue background.
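As a sketch, the two manifests could look like the following; the resource names, replica count and ports are illustrative, and the image is the one published above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue                    # pods carry the app: blue label
    spec:
      containers:
        - name: blue-green-demo
          image: nileshgule/blue-green-demo:blue
---
apiVersion: v1
kind: Service
metadata:
  name: blue-green-service           # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: blue                        # route all traffic to the blue pods
  ports:
    - port: 80
      targetPort: 80                 # adjust to the port the app listens on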

Step 2: Set up the Green environment

The second step of the Blue-Green deployment is to create the Green environment. This is done by creating a second Kubernetes deployment using the image with the nileshgule/blue-green-demo:green tag. We also set the label for the deployment to app: green. The rest of the attributes in the manifest file are almost the same as the blue deployment. After applying the Green deployment, the service is still serving 100% of the traffic to the Blue environment. We can test the next version or release of the application, which is deployed to the green environment. If the tests are successful, we can switch the traffic to the green environment.
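The green manifest is nearly identical to the blue one; as a sketch, only the deployment name, the label and the image tag change:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: green
  template:
    metadata:
      labels:
        app: green                   # the service selector does not match this label yet
    spec:
      containers:
        - name: blue-green-demo
          image: nileshgule/blue-green-demo:green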




Step 3: Route traffic to Green environment

The final step in the Blue-Green deployment is to switch the traffic to the Green environment. We can do this by editing the service manifest and updating the selector to select the pods with the app: green label. Apply this change to the Kubernetes service, and all the traffic is now redirected to the Green environment.
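Besides editing the manifest and re-applying it, the same switch can be done with a one-line patch; the service name here is the illustrative one from step 1:

kubectl patch service blue-green-service -p '{"spec":{"selector":{"app":"green"}}}'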


Note that the Blue environment is still there, but the active or live traffic is served by the Green environment. If there is any problem with the Green environment, we can change the selector in the service again to point back to the Blue environment, quickly reverting the change with minimal impact on end users.

YouTube video

The steps explained in this blog post are demonstrated in detail in the YouTube video. Watch the video for a live demo and feel free to provide feedback in the video comments.



Conclusion

It is very easy to apply the Blue-Green deployment strategy with Kubernetes using a Service. The strategy is quite useful for reducing the downtime of stateless applications; stateful applications are a bit more involved. It reduces the risk during application upgrades, and by having a standby environment we can easily fall back to the older version with no downtime. Hope you found this useful.

Until next time, Code with Passion and Strive for Excellence.



Improve your productivity with Kubernetes using aliases

Background

Kubernetes is the de-facto standard for orchestrating containers. If you are working with Kubernetes on a daily basis, you are most likely using kubectl to interact with the cluster for different operations like listing pods, checking deployments, viewing logs etc. Working with kubectl in a terminal is the most common approach in such a scenario.

Earlier I demonstrated how to use the two power tools kubectx and kubens. In this post, we will look at how to give kubectl a huge productivity boost by using programmatically generated aliases.

A quick word on aliases

If you come from a scripting background with experience in bash or shell scripting, or even PowerShell, you might be familiar with aliases. These are like shortcuts that we can add to the terminal session to execute a command or piece of code. One of the most commonly used aliases while working with kubectl is k: instead of typing kubectl every time, most people create the alias k as a substitute for kubectl.

We can persist aliases across sessions and store them permanently. In the case of bash aliases on Linux or Unix systems, these are mostly stored in the .bashrc profile file under the home directory of the user. The same thing can be done in a Windows environment by creating a function as an alias in a PowerShell profile.
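For example, persisting the k alias in bash takes a single line in the profile; reload the profile (or open a new terminal) for it to take effect:

# in ~/.bashrc (or ~/.zshrc)
alias k=kubectl

# reload the profile in the current session
source ~/.bashrc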

Need for kubectl aliases

Now that we understand a bit about aliases, let us see how we can use them with Kubernetes, and kubectl specifically. Kubernetes has various objects like Pods, Deployments, ReplicaSets, Nodes, Endpoints, PersistentVolumes, PersistentVolumeClaims etc. Each of these objects supports different operations like create, update, delete, edit and get. Along with the different types of operations, there are also different flags that can be used while working with these objects using kubectl, like -A (short for --all-namespaces) for listing objects from all namespaces.

Kubectl also supports displaying information in different formats like JSON or YAML. In order to improve efficiency and productivity while working with kubectl, we can alias the commonly used commands as shown below:

  • alias kg="k get"
  • alias kgpo="kg po" or alias kgpo="kg pods"
  • alias kgno="kg no" or alias kgno="kg nodes"
  • alias kd="k describe"
  • alias kaf="k apply -f"
  • alias kdf="k delete -f"

Note that some Kubernetes objects have short forms like po for pods and no for nodes. We can get the list of resources supported by Kubernetes along with their short names using the following command

kubectl api-resources


The above screenshot shows a partial list of resources, their short names, API versions etc.

If we were to create aliases by hand for each resource and every operation supported for that resource, it would be a very time-consuming and cumbersome process. This is where Ahmet Alp Balkan came up with a programmatic way to generate aliases for kubectl.

kubectl-aliases

The GitHub repository contains a list of over 800 programmatically generated aliases. It also contains instructions on how to set them up. We need to download the file and source it in our bash profile, which makes these aliases available in the terminal session.
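At the time of writing, the setup boils down to roughly the following two commands; check the repository README for the current instructions:

# download the generated alias file from the ahmetb/kubectl-aliases repository
curl -o ~/.kubectl_aliases https://raw.githubusercontent.com/ahmetb/kubectl-aliases/master/.kubectl_aliases

# source it from the bash profile so the aliases load in every session
echo '[ -f ~/.kubectl_aliases ] && source ~/.kubectl_aliases' >> ~/.bashrc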

There is a set of conventions followed while naming or generating these aliases. 

  • k = kubectl
    • sys = --namespace kube-system
  • commands
    • g = get
    • d = describe
    • rm = delete
    • a = apply -f
    • ak = apply -k
    • k = kustomize
    • ex = exec -i -t
    • lo = logs -f
  • resources
    • po = pod
    • dep = deployment
    • ing = ingress
    • svc = service
    • cm = configmap
    • sec = secret
    • ns = namespace
    • no = node
  • flags
    • output formats = oyaml, ojson, owide
    • all = -all or --all-namespaces depending on command
    • sl = --show-labels
    • w = -w/--watch
  • value flags (should be at the end)
    • n = -n/--namespace
    • f = -f/--filename
    • l = -l/--selector

Using these conventions it is quite convenient to work with kubectl. This reduces typing errors and improves our efficiency greatly. Here are a few examples

  • k get pods --> kgpo
  • k get pods --all-namespaces --> kgpoall
  • k get pods -n kube-system --> ksyspo
  • k get pods -o yaml --> kgpooyaml
  • k get configmaps -n keda --> kgcmn keda

If you are on a Windows machine, you need not be left behind. There is a PowerShell version of these aliases; here is the link to the GitHub repository created by Shanoor.

YouTube video

I created a short video demonstrating these aliases in much more detail. Catch all the action in the YouTube video below.

 


Conclusion

We all want to improve our daily ways of working. I am sure you will benefit from this little tip if you are a regular user of kubectl. Hopefully you find this useful in your role as a developer, DevOps engineer or SRE. Do let me know in the comments of this post or on the YouTube video if you have any feedback.

Until next time, Code with Passion and Strive for Excellence.


How to prepare for and clear the CKAD certification exam

 

Background

The new year started on a happy note for me. A few days back, I cleared the Certified Kubernetes Application Developer (CKAD) certification. It was on my second attempt that I managed to clear the exam. It is the best online certification exam I have taken so far. This post is about my experience with the exam and how to prepare for it. I also share some information that might be helpful during the exam itself.

Exam Curriculum

The exam is completely hands-on; there are no multiple-choice questions. We are given a set of 6 different Kubernetes clusters. Each cluster has a different configuration, and the questions are related to designing and developing cloud native applications in a Kubernetes environment. Five broad areas are assessed, as shown below:

  • Application Design and Build 20%
  • Application Deployment 20%
  • Application Observability and Maintenance 15%
  • Application Environment, Configuration and Security 25%
  • Services & Networking 20%

The exam is conducted by the Linux Foundation in collaboration with the Cloud Native Computing Foundation (CNCF). You can find out more about the exam on the Linux Foundation training website.


The duration of the test is 2 hours, with 66% as the minimum passing score. The certificate is valid for 3 years. In case you are not able to clear it on the first attempt, you get 1 free retake.

References for exam preparation

I like to use multiple resources while preparing for any certifications. For CKAD as well I referred to multiple sources of information. Here is a quick list of different references I used.

Kubernetes docs

We are allowed to refer to the Kubernetes docs during the exam. It is like an open book test. The following links are helpful for getting started with Kubernetes and understanding the different concepts.

  • Kubernetes docs - https://kubernetes.io/docs/home/
  • Kubernetes.io tasks - https://kubernetes.io/docs/tasks/
  • kubectl cheatsheet - https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Learning course

Along with the Kubernetes docs, I supplemented my knowledge by registering for an eLearning course. There are many courses available online; almost every eLearning platform has one or more courses related to CKAD exam preparation, and some also offer mock tests. I took the CKAD exam plus Kubernetes fundamentals bundle from the Linux Foundation.





The Kubernetes Certified Application Developer (CKAD) with Tests on Udemy by Mumshad Mannambeth is also one of the most popular courses.



I liked the detail and depth with which this course covers the content. There are hands-on labs or mini tests after every major topic. The course is filled with lots of gems in the form of tips and tricks related to the actual exam.

GitHub repos

Here is a list of GitHub repositories I found useful:

  • CKAD Exercises - https://github.com/dgkanatsios/CKAD-exercises
  • CKAD resources - https://github.com/lucassha/CKAD-resources
  • Nilesh Gule CKAD exam prep - https://github.com/NileshGule/ckad-exam-prep
  • Denny Zhang kubectl cheatsheet - https://github.com/dennyzhang/cheatsheet-kubernetes-A4
  • Ahmet Alp Balkan Kubernetes network policy recipes - https://github.com/ahmetb/kubernetes-network-policy-recipes

Blogs

These blog posts provide useful tips for preparation as well as the exam itself:

  • Be fast with kubectl - https://faun.pub/be-fast-with-kubectl-1-18-ckad-cka-31be00acc443
  • How to nail Kubernetes certification exams - https://www.infoworld.com/article/3631108/how-to-nail-the-kubernetes-certification-exams.html
  • Codefresh Kubernetes cheatsheet - https://codefresh.io/kubernetes-tutorial/kubernetes-cheat-sheet/
  • CKAD practical challenge series - https://codeburst.io/kubernetes-ckad-weekly-challenges-overview-and-tips-7282b36a2681

Tips

Watch out for discounts

You will find discounts during Black Friday, Cyber Monday, Christmas, New Year, Diwali, Eid etc. Make use of these discounts on learning courses as well as the actual test.

Learn Imperative commands

Due to the 2-hour time limit, you need to be quick in finishing the tasks. While you can copy and paste YAML from the Kubernetes documentation, editing it can take time. The test will have specific requirements like naming Kubernetes objects with specific names and deploying them to a specific namespace, whereas the Kubernetes docs usually deploy objects to the default namespace. It will save you quite some time if you know how to work with imperative commands. The output of these commands can be piped to YAML files, and you can then make the required modifications instead of handcrafting the complete YAML from scratch, as shown below.
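For example, these commands generate manifests that only need minor edits afterwards; the object names and namespace are illustrative:

# generate a pod manifest instead of handcrafting the YAML
kubectl run nginx-pod --image=nginx --dry-run=client -o yaml > pod.yaml

# generate a deployment manifest targeting a specific namespace
kubectl create deployment web --image=nginx --replicas=3 -n my-namespace --dry-run=client -o yaml > deploy.yaml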

Compared to the CKA exam, I found the questions in CKAD lengthy. There are more subtasks per question, so you end up solving more of them and need to be really fast.

Learn basic vi or nano editor commands

The test environment is Linux based. If you are coming from a Windows background like me, you need to be familiar with the vi or nano editors. Get comfortable editing files and with commands for deleting lines, navigating to a specific line, updating a word, saving files etc.

Here is a handy cheatsheet I found to work with the vi editor more effectively.

Practice, Practice, practice

Whoever has cleared the CKA, CKAD or CKS exams will tell you that practice is the most important part of exam preparation. If you don't know how to work with Kubernetes resources and objects using kubectl, there is no way you are going to clear this test. There can be multiple ways of achieving the same task; you need to find the one which is most time efficient.

Bookmark specific links

Bookmarking the links from Kubernetes documentation is helpful to find the resource quickly. Instead of bookmarking the top level pages, I recommend bookmarking specific sections to avoid scrolling through the lengthy page looking for information.

Practice exam

If you book a Kubernetes certification exam after June 2021, you will have access to the practice exam from Killer.sh. This gives you two attempts at the same test. The environment is available for 36 hours, and answers along with a detailed explanation of the steps are provided to help you prepare better for the real test.

Tips during the exam

  • Use an external monitor (recommended), minimize screen switching
  • An external keyboard and mouse can be used
  • Enable kubectl auto completion (see the snippet after this list)
  • Create aliases (before starting the test)
  • Update vimrc / editor profile
  • Time management
    • use imperative commands (huge time saver)
    • use the copy feature instead of typing
    • use the notepad feature to make quick notes
    • flag questions for review
    • don't spend too much time on a single question (avg 5-6 mins)
  • Verify the solution after completing each task
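For the auto completion item above, the kubectl cheatsheet linked earlier documents the following bash setup, which also wires the completions to the k alias:

# enable kubectl auto completion in the current bash session
source <(kubectl completion bash)

# make completion work with the k alias as well
complete -o default -F __start_kubectl k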

Aliases in bash profile

I added the following aliases in the bash profile to reduce typing kubectl commands

  • alias cls=clear

  • alias kg='k get'
  • alias kgpo='kg po'
  • alias kgno='kg no'

  • alias kdes='k describe'

  • alias kaf='k apply -f'
  • alias kdf='k delete -f'
Along with these aliases, I also exported two handy variables:
  • export do='--dry-run=client -o yaml'
  • export fg='--force --grace-period=0'
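As a quick illustration of how the exported variables save typing; the pod name is illustrative:

# expands to: k run busybox --image=busybox --dry-run=client -o yaml
k run busybox --image=busybox $do > pod.yaml

# expands to: k delete pod busybox --force --grace-period=0
k delete pod busybox $fg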

vi editor settings

Last but not least, I also updated the settings for the vi editor so that text is aligned/indented properly when using tab in the editor. This also helps when we copy and paste code snippets from the Kubernetes docs.

  • set tabstop=2
  • set shiftwidth=2
  • set expandtab


YouTube video

All the topics mentioned in this post are explained in more detail in the YouTube video.

Conclusion

The CKAD certification is meant to gauge the skills of Kubernetes developers. If you do not know how to build and deploy cloud-native applications on a Kubernetes cluster, this certification is not for you. Practice well before taking the exam. There are 17 questions with different weights. Hope this post is helpful for those preparing for the exam in the future.

Until next time, Code with Passion and Strive for Excellence.
