Install and upgrade software on Windows using Chocolatey


There are multiple ways of installing software packages on Windows. These include common utilities such as 7-Zip or WinZip, browsers such as Google Chrome, Firefox, Brave, etc., editors like Notepad++ and Visual Studio Code, and terminals such as Fluent Terminal, cmder, etc. The list goes on and on. Things get complicated when you move from one operating system to another, such as macOS or Linux. Let's add one more twist: the processor architecture or type, 32-bit or 64-bit, Intel or AMD.

The way we install software can vary based on many of the factors listed above. Even for technical people, it can sometimes be a challenge to identify the right version of the software to download and install. This is where a package manager comes in handy.


In this post, we will focus on a package manager specific to the Windows operating system. A package manager helps to search for packages, install a package along with its dependencies, identify outdated packages, uninstall packages, pin the version of a piece of software, and much more.
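As a sketch, here are the Chocolatey commands that cover most of these day-to-day tasks (the package name vscode is just an example):

```shell
# Search the community repository for a package
choco search vscode

# Install a package (and its dependencies) without confirmation prompts
choco install vscode -y

# List the packages Chocolatey has installed on this machine
choco list --local-only

# Show which installed packages have newer versions available
choco outdated

# Upgrade a single package, or everything at once
choco upgrade vscode -y
choco upgrade all -y

# Pin a package so "choco upgrade all" skips it
choco pin add -n=vscode
```

Most of these commands require an elevated (administrator) prompt, since Chocolatey installs software machine-wide.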

I have been using Chocolatey to install and upgrade more than 75 software packages. I also managed to automate the setup of a new Windows machine using Chocolatey. The source code for this can be found in my GitHub repository.


In the YouTube video below, see Chocolatey in action: we use it to search for packages, list all the installed packages, find information about packages, and upgrade packages as well as extensions for Visual Studio Code.


Using a package manager to install software can make our life much easier. We do not need to visit websites to look for the appropriate package, the dependencies get resolved automatically, and we can identify outdated packages easily. I hope you found this tip useful.

Until next time, Code with Passion and Strive for Excellence

How to Manage Kubernetes Secrets with Azure Key Vault


There are different ways in which we can manage environment-specific settings for containerized applications. Kubernetes provides ConfigMaps and Secrets as two options to manage environment configuration. ConfigMaps are good for storing key-value pairs in plain text. Sensitive information such as connection strings, usernames and passwords, certificates, etc. should be stored in encrypted form. Kubernetes Secret objects store data in Base64-encoded form; note that Base64 is an encoding, not encryption.
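To illustrate why Base64 on its own is not a strong protection, the value stored in a Secret can be decoded by anyone who can read the Secret object (the password value here is made up):

```shell
# Encode a value the way Kubernetes stores it inside a Secret
printf '%s' 'P@ssw0rd' | base64
# UEBzc3cwcmQ=

# Decoding it back is just as easy -- no key or password required
printf '%s' 'UEBzc3cwcmQ=' | base64 --decode
# P@ssw0rd
```

This is exactly why an external vault with access control, as described below, adds real value.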

Extend Kubernetes Secrets by storing them externally in Azure Key Vault (AKV)

Storing secrets in encrypted form provides a first line of defense. As the popularity of Kubernetes increases, the tools surrounding the ecosystem are also improving on a regular basis. More and more vendors are providing extensions of their services to work with Kubernetes.

One such area is the way secrets are managed. In an enterprise scenario, we might use a secure key vault to store keys externally. Azure Key Vault (AKV) and HashiCorp Vault are examples of such key vaults. In this demo we are going to use Azure Key Vault. AKV allows us to store:
  • Keys
  • Secrets
  • Certificates
The keys, secrets and certificates are stored in a secure manner, and we can grant fine-grained access to different users.
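A minimal sketch of creating a vault and storing a secret with the Azure CLI (the names demo-rg, demo-akv and sql-password are placeholders for this demo):

```shell
# Create a resource group and a Key Vault in it
az group create --name demo-rg --location eastus
az keyvault create --name demo-akv --resource-group demo-rg

# Store a secret in the vault
az keyvault secret set --vault-name demo-akv --name sql-password --value 'P@ssw0rd'

# Read the secret back (requires Get permission on secrets)
az keyvault secret show --vault-name demo-akv --name sql-password --query value -o tsv
```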

Azure Kubernetes Service (AKS) is used to provision a managed Kubernetes cluster running Kubernetes version 1.18.2. We are also using Azure Container Registry (ACR) to store the Docker images for the application containers. The AKS cluster is created using a Managed Identity, which assigns an identity to the VMSS agent pool. We grant this managed identity only the Get privilege, so it can do nothing but retrieve the secrets stored in AKV.
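Granting only the Get permission can be sketched as follows (the cluster and vault names are the placeholders used in this demo):

```shell
# Look up the client id of the cluster's kubelet managed identity
CLIENT_ID=$(az aks show --name demo-aks --resource-group demo-rg \
  --query identityProfile.kubeletidentity.clientId -o tsv)

# Allow that identity to do nothing more than read secrets from the vault
az keyvault set-policy --name demo-akv \
  --secret-permissions get \
  --spn "$CLIENT_ID"
```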

The secrets from AKV are pulled when the pod is created as part of the Kubernetes deployment. We use the Secrets Store Container Storage Interface (CSI) driver; the Azure Key Vault Provider for the Secrets Store CSI Driver specifies the Azure-related properties. The secrets are synced into a Kubernetes Secret object, mounted as a volume in the pod, and finally the data from the Kubernetes Secret is passed into environment variables used by the pod.
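A hedged sketch of wiring this up: install the Azure provider for the Secrets Store CSI driver with Helm, then declare a SecretProviderClass that syncs an AKV secret into a Kubernetes Secret. The resource names and tenant id are placeholders, and the exact apiVersion and parameters may differ between driver versions:

```shell
# Install the Secrets Store CSI driver together with the Azure provider
helm repo add csi-secrets-store-provider-azure \
  https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure

# Declare which AKV objects to pull and which Kubernetes Secret to sync them into
kubectl apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: akv-sync
spec:
  provider: azure
  secretObjects:              # sync into a Kubernetes Secret named app-secret
  - secretName: app-secret
    type: Opaque
    data:
    - objectName: sql-password
      key: sql-password
  parameters:
    keyvaultName: demo-akv
    objects: |
      array:
        - |
          objectName: sql-password
          objectType: secret
    tenantId: <your-tenant-id>
EOF
```

The pod then references the SecretProviderClass in a CSI volume mount, and its environment variables read from the synced app-secret object.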


All the above functionality is demonstrated as part of the YouTube video on Integrating AKV with AKS.


The image below depicts the 5-step process to integrate AKV with AKS.


Integrate Azure Container Registry with AKS in 5 easy steps


When we start working with Docker and Kubernetes, we need a container registry to publish our images. Most people start with the public DockerHub registry. This is good for getting started; however, as we become more proficient with container images and orchestration using something like Kubernetes, we need enterprise-grade features. The public container registries do not provide all of these, so in such scenarios we need to choose a private container registry. In this post we will see how to integrate a private Azure Container Registry (ACR) with an Azure Kubernetes Service (AKS) cluster.

Advantages of using a private container registry like ACR

Similar to other private container registries, ACR provides the following features:

  • Support for Docker and Open Container Initiative (OCI) images
  • Simplify container lifecycle management
    • Build
    • Store
    • Secure
    • Scan
    • Replicate
  • Connect across environments
    • Azure Kubernetes Service (AKS)
    • Azure Redhat OpenShift
    • Azure Services (App Service, Machine Learning, Azure Batch)

Integrate ACR with AKS using Admin User

In this YouTube video, I demonstrate how to integrate with ACR using 5 easy steps.

Integrate ACR with AKS using Admin User

The 5 steps demonstrated in the video are as follows:

2 steps to integrate ACR with AKS

We use the Admin User to push images to the ACR registry via docker login. The images are then pulled into the AKS cluster using the Managed Identity associated with the cluster. The Managed Identity is granted the AcrPull role when we create the AKS cluster with the --attach-acr flag of the az aks create command.
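These steps can be sketched with the Azure CLI and Docker as follows (the registry name demoacr, the cluster name demo-aks, and the image name myapp are placeholders):

```shell
# Create a registry with the Admin User enabled (it is disabled by default)
az acr create --name demoacr --resource-group demo-rg --sku Basic --admin-enabled true

# Show the admin credentials, then log in to the registry with Docker
az acr credential show --name demoacr
docker login demoacr.azurecr.io

# Tag and push an application image
docker tag myapp:latest demoacr.azurecr.io/myapp:latest
docker push demoacr.azurecr.io/myapp:latest

# Create the AKS cluster with pull access to the registry
az aks create --name demo-aks --resource-group demo-rg --attach-acr demoacr
```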

Authenticate ACR with AKS using Managed Identity

Integrate ACR with AKS using AAD identity

After I published the video on YouTube, Sergio Rodrigo shared a blog about building and pulling Docker images to ACR. I replied to his tweet on Twitter, suggesting that the readers of his blog could benefit from the video version of my post. The tweet caught the eye of Steve Lasker, a PM on the ACR team at Microsoft. Steve suggested that instead of using the Admin User to connect to the ACR registry to push images, there is a better way. By default, Admin User access is disabled when we create a new ACR registry.

We can make use of our own identity linked to Azure Active Directory (AAD) to authenticate with ACR. When we log in to ACR using the AAD identity, a token is generated and our local Docker config file is updated with it. We can push images using this token, which eliminates the need to enable the Admin User for ACR.
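The token-based flow looks roughly like this (the registry name demoacr and image name myapp are placeholders):

```shell
# Authenticate with your own AAD identity; no Admin User required.
# az acr login fetches a short-lived token and updates ~/.docker/config.json.
az login
az acr login --name demoacr

# Push using the token-based credentials
docker push demoacr.azurecr.io/myapp:latest
```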

Based on this suggestion, I created an updated video.

The change in authentication method is highlighted in the diagram below.


Private container registries are helpful in preserving the intellectual property of an organization: we do not have to publish an organization's IP to a publicly available container registry. They also help improve the security posture by providing role-based access control (RBAC). We can separate activities, such as who can push images to the registry and who can pull them.

I hope these two videos were helpful in improving the reader's understanding of integrating ACR with AKS.

Until next time, Code with Passion and Strive for Excellence.


YouTube – Autoscaling containers with KEDA on AKS


Over the last few months, it has become difficult to dedicate time to blogging. I also wanted to explore the option of doing interactive videos. As a result, I have started a YouTube channel and posted a few videos so far. I started with a 3-part series on autoscaling containers with Kubernetes-based Event Driven Autoscaling (KEDA).

KEDA playlist

Over the last 6 months or so, I have delivered multiple sessions at different community events about event-driven autoscaling of applications using containers and Kubernetes. KEDA is an interesting project. I built, or rather refactored, one of my sample applications to run with KEDA.

In this example I built a .NET Core producer which generates a configurable number of messages and pushes them to a RabbitMQ queue. It is developed as a .NET Core API project. On the consumer side, we can configure the .NET consumer to pick up a batch of messages and process them. This is where the event-driven autoscaling feature of KEDA comes into the picture: based on the number of messages present in the queue, KEDA scales the consumer instances.
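A sketch of the scaling setup: install KEDA with Helm, then declare a ScaledObject for the consumer deployment. The deployment name, queue name, and connection string below are examples, and field names may vary slightly between KEDA versions:

```shell
# Install KEDA into its own namespace
helm repo add kedacore https://kedacore.github.io/charts
helm install keda kedacore/keda --namespace keda --create-namespace

# Scale the consumer deployment based on RabbitMQ queue length
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer
spec:
  scaleTargetRef:
    name: rabbitmq-consumer          # the consumer Deployment
  minReplicaCount: 0                  # scale to zero when the queue is empty
  maxReplicaCount: 30
  triggers:
  - type: rabbitmq
    metadata:
      queueName: hello
      host: amqp://guest:guest@rabbitmq.default.svc.cluster.local:5672
      queueLength: "5"                # target messages per replica
EOF
```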

The producer, the consumer and the RabbitMQ broker are all deployed onto a Kubernetes cluster, a managed cluster provisioned using Azure Kubernetes Service (AKS). In the 3 parts we cover the following topics:

Part 1 – Provision AKS cluster

We look at all the software that I use to develop and deploy the application. We also look at the generic PowerShell script which automates the AKS cluster provisioning.

Part 2 – Deploy application containers

In this video we look at how the code interacts with the RabbitMQ broker on both the producer and the consumer side. We build the Docker images using Docker Compose and publish them to the public DockerHub container registry.
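Assuming the compose file tags each service image with a DockerHub repository, the build-and-publish step is roughly:

```shell
# Log in to DockerHub once, then build and push all images
# defined in docker-compose.yml
docker login
docker-compose build
docker-compose push
```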

Part 3 – KEDA autoscale in action

In the final part of the series, we look at the KEDA architecture, the steps to deploy KEDA using Helm, and the autoscaled application reacting to the number of messages in the queue.


I hope that the video series provides a much better option than taking screenshots and putting them in a blog post. It also gives me a better opportunity to express myself and another medium to share knowledge. I hope you like the series. In case you find the content useful, please share it and subscribe to my YouTube channel.

The complete source code for the series is available in my GitHub repo named pd-tech-fest-2019. I am very proud to say that this is also one of the official samples listed in the KEDA project.

Until next time, Code with Passion and Strive for Excellence.