Kubernetes Secrets management


Background

This is the eighth part of the series on building highly scalable multi-container apps using AKS. So far in the series we have covered the following topics:

This post is about managing secrets in a Kubernetes cluster. We will be focusing on the following topics during this post:

  • Understand the reasons for using secrets in a Kubernetes cluster
  • Create secret using Kubernetes manifest
  • Register secret in the AKS cluster
  • Verify secret in AKS cluster
  • Consume secret from the cluster in TechTalks DB deployment while initializing SQL Server 2017 container
  • Consume secret from cluster in the TechTalks API init container to initialize the database
  • Consume secret from cluster in TechTalks API for database access

Understand the reasons for using secrets in a Kubernetes cluster

In enterprise solutions it is quite common to have separation of duties applied to different roles. In production environments developers are not allowed access to sensitive information; operations teams are responsible for managing the deployments. It is quite common in such scenarios to distinguish which parts of the application are handled by the development team and which by the operations team. The most common example is database passwords.

These are managed by operations teams and in most cases encrypted before being stored in the target environment. The development team can use these passwords via a pre-configured file path, an environment variable or some other means. The development team does not need to know how the password is generated or its exact contents. As long as the application can source the password by some means, it will work fine.

The same approach can be used to externalize passwords or secrets for different environments like Development / QA / Pre-production / Production. Instead of hardcoding environment-specific settings we can externalize them using configuration. Let's see how we can use secrets with Kubernetes in our case.

Create secret using Kubernetes manifest

There are different ways in which secrets can be created. As we have been doing in the earlier parts of this series, we will use a Kubernetes manifest file to store the secret information. First and foremost, let's encode the password that we have been using for the SA account in the TechTalks application.

We need to convert the plaintext password into a base64 encoded string. Run the command shown below to generate it:

echo -n 'June@2018' | base64


generate secret

Copy the output of the command. We will store this in the Kubernetes manifest file.
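A minimal sketch of such a manifest is shown below; the base64 value is the output of the echo command above, and the namespace is the one used later in this post.

apiVersion: v1
kind: Secret
metadata:
  name: sqlsecret
  namespace: aks-part4
type: Opaque
data:
  sapassword: SnVuZUAyMDE4    # base64 encoded 'June@2018'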

Notice that we set the kind of the Kubernetes object to Secret. In the metadata section we provide the name sqlsecret. Finally we provide the data. We can provide multiple elements as part of the same secret in the form of key-value pairs; in our case we specify only one value, under the sapassword key. With this setup we are ready to store our secret in the Kubernetes cluster.

Register secret in the AKS cluster

Secrets can be registered in the cluster by running the kubectl create command and specifying the manifest filename. This approach is shown in the Kubernetes Secrets documentation. I use a PowerShell script to deploy the complete application, and all the files in a directory are used as input at deployment time. If you wish to deploy just the single manifest file named sa-password.yml, use the command:

kubectl apply -f sa-password.yml

Verify Secret in AKS cluster

Once the secret is deployed to the cluster, we can verify it in different ways. First of all, let's check using the Kubernetes command line:

kubectl get secrets --namespace aks-part4

kubectl secret

We can see the sqlsecret created about 2 hours back. (It took me a long time to take the screenshot after creating the secret.) Next we can verify the same using the Kubernetes dashboard. Browse to the dashboard and look for the Secrets section at the bottom of the page.

kube control secrets

We can see the same information in the UI as well. Click on the name of the secret and we get to its details as shown below.

kube control describe secret

The information is the same as what we provided in the manifest file. Let's verify it in the terminal using the kubectl describe command.
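Assuming the secret name and the namespace used earlier in this post, the command would be:

kubectl describe secret sqlsecret --namespace aks-part4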

kubectl describe secret

The information matches what is shown in the UI except for the Annotations part. Now that we know the secret is available within the Kubernetes cluster, let's turn our focus towards making use of it in the services of our application.

Consume secret from the cluster in TechTalks DB deployment while initializing SQL Server 2017 container

The first place where the secret is used is when we instantiate the SQL Server 2017 container. This is done as part of the StatefulSet definition.
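The relevant part of the container spec is sketched below. The container name and image tag are illustrative; the secretKeyRef block is the standard Kubernetes mechanism for surfacing a secret value as an environment variable.

containers:
  - name: sqlserver
    image: microsoft/mssql-server-linux:2017-latest    # illustrative image tag
    env:
      - name: ACCEPT_EULA
        value: "Y"
      - name: SA_PASSWORD
        valueFrom:
          secretKeyRef:
            name: sqlsecret     # name of the secret registered earlier
            key: sapassword     # key within the secret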

Pay close attention to the env section. Instead of hardcoding the password, we are now reading it from the secret. We reference the secret by its name, sqlsecret, and the value using sapassword as the key. If the password expires in the future and the operations team replaces it, the development team does not need to change the manifest or rebuild the image; the new password is picked up from the secret the next time the container starts. This solves one problem for us with the creation of the SQL Server 2017 container. How about the services which use this container? In our case, the TechTalks API is the one which depends on the database and interacts with it.

Consume secret from cluster in the TechTalks API init container to initialize the database

If you remember the post on init containers, you will recollect that the API container first initializes the database with master data and a few initial records. Let's use the secret while calling the initialization script.
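A sketch of the init container section follows. The image, server name and script file come from the earlier post on init containers; the container name is illustrative. Note that Kubernetes expands $(SA_PASSWORD) in the command arguments from the environment variable defined just above it.

initContainers:
  - name: init-database
    image: nileshgule/sqlclient
    env:
      - name: SA_PASSWORD
        valueFrom:
          secretKeyRef:
            name: sqlsecret
            key: sapassword
    command:
      - /bin/sh
      - -c
      - /opt/mssql-tools/bin/sqlcmd -S db-deployment -U sa -P $(SA_PASSWORD) -d master -i initialize-database.sql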

In the sketch above, we use exactly the same approach to extract the secret and store it in an environment variable. The environment variable is then interpolated into the sqlcmd command. With this step we have removed the hardcoded sa password from the initialization step in the init container. We still have the connection string inside the TechTalks API container which contains the sa password.

Consume secret from cluster in TechTalks API for database access

The API deployment follows the same pattern: we extract the secret into an environment variable and then interpolate it into the connection string using $(SA_PASSWORD), as sketched below. With these modifications in place, we have removed all the hardcoded sa passwords from our code.
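A sketch with an illustrative environment variable name and connection string format; the expansion of $(SA_PASSWORD) works because the variable is defined earlier in the same env list.

env:
  - name: SA_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sqlsecret
        key: sapassword
  - name: CONNECTION_STRING    # illustrative variable name
    value: "Server=db-deployment;Database=TechTalksDB;User Id=sa;Password=$(SA_PASSWORD);"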

I did a quick test by adding a new TechTalk using the application's UI. I can verify that the application is running smoothly.

Data accessed via API

Conclusion

Secrets management is quite a powerful concept in software development. During the course of this post we saw that Kubernetes provides built-in support for managing secrets. By externalizing secrets, we also make our applications more portable: we do not need to hardcode secrets into the application, and the same code can be deployed to multiple environments using different configurations.

Another advantage of externalizing secrets is that multiple containers can share the same secret in the cluster. In our case the SQL Server container and the API container share the secret. If we did not share the secret, then the next time there is a change in the sa password we would need to update and redeploy both containers.

Secrets play a very important role in building secure systems. Modern applications built using DevOps practices rely on managing secrets efficiently. Most cloud providers offer secrets management as a dedicated service, like Azure Key Vault. For on-premises scenarios there are products like HashiCorp Vault. I hope by now you realize the importance of secrets and the ease with which we can manage them in a Kubernetes cluster.

This post is dedicated to my friend Baltazar Chua, who has been quite persistent in telling me for a long time that I should use secrets instead of plaintext passwords.

As always, the complete source code for the post and the series is available on GitHub.

Until next time, Code with Passion and Strive for Excellence.


Externalize SQL Server Container State using Persistent Volume Claim

Background

This is the seventh part of the series on building highly scalable multi-container apps using AKS. So far in the series we have covered the following topics:

This post is about externalizing the state associated with the SQL Server 2017 Linux container to an external volume which we provisioned in the previous post. We will be focusing on the following topics during this post:

  • Create a StatefulSet Kubernetes object which will be responsible for creating the SQL Server 2017 Linux container
  • Externalize the state by means of volume mapping to utilize the storage which was provisioned in the previous post
  • Query and update data from SQL Server 2017 Linux container
  • Verify data is persisted if the container restarts
  • Verify the volume mount using Kubernetes cli

Let's get the ball rolling.

Create a StatefulSet Kubernetes object

Kubernetes provides an object of type StatefulSet which is often used when deploying database workloads that store their state on external storage providers. StatefulSets are used when data loss is unacceptable; they provide stable volume mounts across restarts of pods or even of the cluster.

Let's see how we can create a StatefulSet. We will use the manifest file shown below to declaratively set the different properties.
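A skeleton of such a manifest is sketched here. The names and labels are illustrative, and the apiVersion may differ depending on your cluster version.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset
spec:
  serviceName: db-deployment    # the service backing this StatefulSet
  replicas: 1
  selector:
    matchLabels:
      app: techtalksdb
  template:
    metadata:
      labels:
        app: techtalksdb
    spec:
      containers:
        - name: sqlserver
          image: microsoft/mssql-server-linux:2017-latest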

We specify the kind as StatefulSet. Compared to a Deployment manifest, there is another difference here: for a StatefulSet we need to specify the serviceName attribute. The rest of the attributes like metadata and spec are similar to the other objects we have created during this series.

Externalize the state by means of volume mapping

Before we can deploy the StatefulSet to the Kubernetes cluster using kubectl, we need to make a couple of other changes to ensure that the volume is mapped correctly. In the same yaml file, scroll towards the bottom and pay attention to the volumeMounts and volumes sections.

Let's first look at the volumeMounts section. Here we specify the name of the volume which is mounted, mssqldb. The mountPath specifies which path inside the container is mapped to the volume named mssqldb. The SQL Server 2017 Linux container stores its data files inside the /var/opt/mssql directory, so we map this path inside the container to an external volume.

Next is the link between the volume and the physical storage which is persisted to the Azure Disk. This association is done in the volumes section. We specify exactly the same name which was assigned to the PVC in the previous post, techtalks-data. The two sections are sketched together below.
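A sketch of how the two sections line up; volumeMounts sits under the container definition while volumes sits at the pod spec level.

containers:
  - name: sqlserver
    image: microsoft/mssql-server-linux:2017-latest
    volumeMounts:
      - name: mssqldb
        mountPath: /var/opt/mssql    # where SQL Server keeps its data files
volumes:
  - name: mssqldb
    persistentVolumeClaim:
      claimName: techtalks-data      # the PVC created in the previous post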

The beauty of this is that the container does not know what the underlying storage provider is. In the future we can swap the Azure Disk for some other provider, and the container does not have to worry about it. As long as the PVC name matches the claimName, the container will continue to run.

We also expose the service via a load balancer. The other manifest files used during this series can be found in the GitHub repo. Navigate to the TechTalksDB folder and run the kubectl apply command to create the objects in the Kubernetes cluster. Below is the list of commands I executed to deploy the TechTalksDB, TechTalksAPI and TechTalksWeb containers.

kubectl part1

kubectl commands

Query and update data from SQL Server container

Let's query the data using SQL Operations Studio. The initialization script contains 3 records, and I added one more before taking the screenshot. So at this point we have 4 records in the TechTalk table.

database initial state

Let's add a new record and query it back.

database after adding new record

Verify Data is persisted on container restart

This is where things start to get interesting. Let's delete the pod which is running the SQL Server 2017 container. We can delete the pod in multiple ways. Earlier in the series I demonstrated how to interact with Kubernetes objects using the kubectl CLI. To keep things simple, let's use the Kubernetes dashboard to delete the pod via the GUI.

Delete statefulpod

Click on the Delete option as shown above. We will be asked for confirmation; go ahead and confirm the deletion.

Delete pod

Verify data is persisted if the container restarts

As soon as the pod is deleted, the StatefulSet controller starts another instance of the pod. Any guess why? Go back to the manifest file and check the number of replicas. We specified the number of replicas as 1; this is the desired state that we asked Kubernetes to maintain. As soon as the replica count became zero, the controller initiated the creation of a new pod to bring the actual state back to the desired state.

For a few seconds, while the old pod is terminating and the new one is initializing, we will see some errors.

statefulset error

Give it a few seconds to stabilize, and within a minute we can see that things are back to normal with the new pod running successfully.

statefulset recreated

The pod was restarted, but does that mean we did not lose the data which was persisted? When I queried the table again using SQL Operations Studio, I got the results with all 5 rows intact. To save some space, I will not put the same screenshot again; you can give it a try yourself as homework.

We can inspect the logs of the newly created pod to see what happened in the background.

SQL Server container logs

Verify the volume mount using Kubernetes cli

We can also look at the different events that took place at the cluster level. Run the command kubectl get events --namespace abc2018sg and we can see the list of events. Check the sequence of events in the screenshot, which shows what happened after the pod was killed and a new one was recreated.

volume mount events 

I have highlighted 3 main areas above. First is the deletion of the sql2017 pod. Second is the recreation of the stateful container. And third is the volume mount, which is the real game changer: if the volume were not there, we would not be able to recover the data.

Conclusion

During the course of this post we saw how StatefulSets can mount external volumes using persistent volume claims. Data is persisted across container restarts. StatefulSets provide a stable endpoint for containers which need persistent storage, as in the case of a database container. By externalizing the data from the container into persistent volumes, we ensure that even if the cluster is restarted we are still able to mount the volumes back to the container.

All that I demonstrated here is also covered as part of the AKS learning series. The video recording of the series is available on YouTube. Thanks to Engineers.SG who are helping to record this series of talks.

AKS Part4 Youtube

As always, the code used during this post is available on GitHub.

Until next time, Code with Passion and Strive for Excellence.


Manage Kubernetes Storage using Persistent Volume (PV) and Persistent Volume Claim (PVC)

Background

This is the sixth part of the series on building highly scalable multi-container apps using AKS. So far in this series we have covered the following topics:

In this post we will learn about the very important concepts of Persistent Volumes (PV) and Persistent Volume Claims (PVC). We will use these concepts to manage the data for the SQL Server 2017 Linux container.

Why do we need volume in the first place?

Managing data in a containerized scenario is quite tricky. Containers are ephemeral by design, which means they can be created, deleted and rescheduled on any host in the cluster. If the container is recreated, what happens to the data stored inside it? It is obvious that the data stored locally inside the container will be lost.

This scenario might be OK while testing. But as we start deploying applications using containers, we want to persist the data even if the container restarts or a newer version of the image is used to upgrade the older container. In such scenarios it makes sense to store the data external to the container, and this is where external volumes play a very important role.

During the course of this post we will focus on the following:

  • Provision a Persistent Volume (PV) using Azure Disk
  • Create a Persistent Volume Claim (PVC) to the provisioned volume

Provision Persistent Volume (PV) using Azure Disk

A Persistent Volume abstracts from users the details of how storage is provided and how it is consumed. It is a cluster-level resource provisioned by an administrator and available to the pods to use. Persistent Volumes support multiple types of storage including network storage, physical disks and cloud storage like Azure Disk. The storage is provisioned using one of the supported storage classes. Let's look at an example of creating a storage class based on Azure Disk.
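A sketch of the storage class manifest follows; these are the documented fields for the Azure Disk provisioner, with the metadata name matching what the PVC references later.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed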

Skipping the initial metadata, we specify the provisioner as azure-disk. We also specify the storage account type parameter as Standard_LRS and the kind as Managed. The parameters differ for different types of provisioner; you can refer to the storage class provisioner link in the references below to learn more about the provisioners supported by Kubernetes.

Let's go and apply this Kubernetes manifest to our cluster. Use the kubectl command line to execute the command:

kubectl apply -f 01_storage-class.yml

We can browse to the Kubernetes dashboard and see that the storage class is created successfully.

storage class

The creation of the storage class is an indication of what kind of storage we are going to provision within the Kubernetes cluster. Let's go ahead and create a Persistent Volume Claim.

Create Persistent Volume Claim to the provisioned storage

Let's look at an example of the Kubernetes manifest file to create a Persistent Volume Claim (PVC).
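A sketch of the PVC manifest; the claim name techtalks-data is the one referenced in the next part of this series.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: techtalks-data
spec:
  storageClassName: azure-disk    # the storage class created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi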

Above we see the definition of the PVC. The storageClassName links to the storage class named azure-disk; note that this is the name we gave while creating the storage class earlier. Then we specify the requirements for the storage claim. In our example we have specified ReadWriteOnce as the access mode. For the resource request, we specified 1Gi of disk space; when we apply this manifest, 1Gi of disk space will be provisioned for us. Run the kubectl command:

kubectl apply -f 02_PersistentVolumeClaim.yml

The Persistent Volume has a different lifecycle. It takes a few minutes to provision the storage and bind it to the Kubernetes cluster. Initially the status of the persistent volume claim will be Pending.

PVC Pending

Once the volume is successfully provisioned, it is bound to the claim. We can verify this by looking at the Bound state.

PVC Bound

At this point we can go to the Persistent Volumes section on the left of the Kubernetes dashboard and find details about the underlying volume.

PVC

In this view we can find details like the name of the volume, its capacity, access modes, status, associated claim etc. As we have done in earlier parts of this series, we can use the Kubernetes CLI to query the PV object for more details. Execute the following command:

kubectl describe pv <name of the pv>

The output will be as shown in the screenshot below.

kubectl describe PVC

It is the same as what we see in the Kubernetes dashboard. We will see how to make use of this Persistent Volume in the next part of this series.

References

[1] Kubernetes Storage class provisioner : https://kubernetes.io/docs/concepts/storage/storage-classes/

[2] Persistent Volumes : https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Conclusion

We saw how to provision a Persistent Volume backed by Azure Disk. With the help of two Kubernetes manifest files we were able to provision the disk and associate a claim with it.

Persistent Volumes are quite a powerful concept in Kubernetes. They allow developers and cluster administrators to work on different parts of the application. Using Persistent Volumes, the administrator can provision storage which developers can use without knowing the underlying storage mechanism. This can be used to swap the underlying storage without any need to change application code. This may not be very clear right now, but I am sure it will make sense in the next part, where we will see how to utilize these volumes.

Until next time, Code with Passion and Strive for Excellence.


Continuous Kubernetes deployments with Skaffold on Docker for Mac

Background

My previous post was about continuous deployment of a Kubernetes application using Skaffold to a Minikube cluster. Recent versions of Docker for Desktop (Mac & Windows) have built-in support for Kubernetes. We can deploy multi-container applications to a single-node local cluster and choose the type of orchestrator; currently Docker for Mac supports Docker Swarm and Kubernetes. In this post, I will demonstrate how we can use Skaffold to deploy to the Docker for Mac Kubernetes cluster instead of Minikube.
We will perform the following steps during the course of this post:
  • Enable Kubernetes support for Docker for Mac
  • Set context to work with Docker for Mac
  • Use Skaffold with Docker for Mac

Enable Kubernetes support for Docker for Mac

Let's first enable Kubernetes support in our installation of Docker for Mac. Navigate to the preferences section of Docker, select the Kubernetes pane, enable Kubernetes support, and select Kubernetes as the default orchestrator. Finally apply the settings as shown in steps 1 to 4 below. In my case I had these settings already enabled; if you are enabling them for the first time, it can take a while for Kubernetes to start the single-node cluster.
Kubernetes for Docker
Once everything is set up, we should see both Docker is running and Kubernetes is running in green status.

Set context to work with Docker for Mac

We will be using the kubectl command line tool to deploy the artifacts to the Kubernetes cluster. kubectl can work with multiple clusters at the same time, so we need to ensure that the right context is set. Let's first get the list of all the clusters we have at our disposal. This is done by executing the command:
kubectl config get-contexts
kubectl get context
We can see that two clusters are currently configured. The minikube cluster is the current context, as shown by the * in front of the cluster name. We need the context to be set to docker-for-desktop. This can be done by executing the command:
kubectl config use-context docker-for-desktop
kubectl use context
We are now ready to deploy the application to this cluster.

Use Skaffold with Docker for Mac

Once the right context is set, there is no change to be made on the Skaffold side. We execute the same skaffold dev command which we used during the earlier post, and Skaffold does its magic.
skaffold dev - part1
The proper context has been selected by skaffold.
skaffold dev - part2
It takes just over 1 second to build the required Docker images, and in the same way the deploy is done in less than 2 seconds. I have not seen this kind of speed while using docker compose in the past. With this, Skaffold is now continuously monitoring for changes to the source code. Let's test it by making some changes in the code.

As you can see from the above screen capture, the updates are really fast. It doesn't even take 1 second to update the deployment. I commented and uncommented the same set of lines that we had used during the previous post.

Conclusion

As can be seen from this demo, we can seamlessly move from the Minikube cluster to the Docker for Mac cluster while working with Skaffold. It does not require any configuration change to the skaffold.yaml file or to the Kubernetes manifest files. Just by setting the proper context for kubectl we can have the same productivity gains that we saw during the last post. I hope developers working with Docker and Kubernetes find this tool useful.
Until next time, Code with Passion and Strive for Excellence.

Continuous Kubernetes deployments with Skaffold

Background

I have been working with Docker containers and Kubernetes for quite some time now. When working with these technologies, the usual workflow involves the following steps (at a bare minimum):

  • containerize your application into a Docker image
  • push the changes to the container registry like Docker Hub
  • deploy the containers to a cluster using container orchestration tool like Kubernetes

While developing an application, the developer also tests it. It is quite time consuming to repeat the whole process every time there is a change in the code: rebuild the Docker image, push it to the registry and do the deployment. What if there were a tool which could do all of this for us while we focus on writing the code?

Introducing Skaffold

My dear friend Baltazar Chua, who is a rockstar when it comes to introducing me to cool new tech, mentioned Skaffold to me. Because Bal has this knack of picking the right tool for the right job, I have a hashtag for him: #BalIsARockStar.

Skaffold is a command line tool that facilitates continuous deployment for Kubernetes applications.

In this post we are going to do the following:

  • Install Skaffold on Mac using Homebrew
  • Integrate Skaffold with existing multi container application
  • Test Skaffold using Minikube as target deployment environment

Install Skaffold on Mac using Homebrew

On the Skaffold GitHub repo, in the installation section, you find the steps to install on Linux or Mac OS. For Mac you run the following command:

curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-amd64 && chmod +x skaffold && sudo mv skaffold /usr/local/bin

I am a big fan of using Homebrew for installing packages on Mac. Luckily there is a Homebrew package available for Skaffold, and I used this approach to install it on my Mac. Run the following command to get Skaffold installed:

brew install skaffold

Integrate Skaffold with existing multi container application

I will use the codebase from the GitHub repo which I am currently using for the Azure Kubernetes Service (AKS) learning series. It consists of a web frontend, a web API project and a database running inside a SQL Server 2017 Linux container.

Skaffold works on the basis of very minimal configuration. First of all we need to define the configuration in a yaml file named skaffold.yaml. This file contains 3 phases.

1. Build

In the build phase we define the build artifacts, which are Docker images. In this step we specify the locations of the Dockerfiles that need to be built whenever there is a change in the source code.

2. Push

Once the images are built, they are pushed to the container registry referenced in the image name. I am publishing the images to the Docker Hub registry. When we use a local Kubernetes cluster like Minikube or Docker for Mac, the push is skipped because the images are available locally.

3. Deploy

The deploy step ensures that the most recent versions of the artifacts are running inside the cluster. I will be using kubectl to deploy, so I can reuse the manifest files that I created earlier for the demos.

We can see the contents of the skaffold.yaml file below.
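A sketch of the file follows. The Skaffold configuration schema was still in alpha at the time of writing, so field names may differ between versions, and the image names and workspace paths here are illustrative.

apiVersion: skaffold/v1alpha2
kind: Config
build:
  artifacts:
    - imageName: nileshgule/techtalksweb
      workspace: ./TechTalksWeb
    - imageName: nileshgule/techtalksapi
      workspace: ./TechTalksAPI
deploy:
  kubectl:
    manifests:
      - k8s/Minikube/*.yml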

The build phase lists all the artifacts to be built, with the image names and the location of each Dockerfile.

Next is the deploy phase. We specify kubectl as the mode of deployment, and finally the list of manifests. I am using the wildcard option to deploy all the manifests under the k8s/Minikube directory. Currently there are 8 different manifests, as shown below.

list of manifests

With the configuration done, let's put Skaffold to the test.

Test Skaffold using Minikube as target deployment environment

We need to run the skaffold dev command in the directory where the skaffold.yaml file resides. Check the output of the command.

In this case you can see that the build completed in less than 2 seconds and the deployment in less than 5 seconds. This is a great beginning. The first time, it can take a bit longer to download the respective images and build them locally; once the images are available in the cache it is super fast.

Update source code and experience the magic

Let's test Skaffold by making a change to the source code. In the screenshot below you see me commenting out the block of code which shows the carousel control on the homepage. The moment I comment out the code, Skaffold triggers a build and deploy. It is so fast that I could not capture it in full screen mode between the VS Code IDE changes, the iTerm terminal showing the automatic build triggering, and Safari refreshing the final output. That is the reason I kept the 3 windows open side by side, so you can see in real time how the changes propagate. I commented and uncommented the code a couple of times.

It would take minutes to redeploy the changes if we were to do it without Skaffold. The part I like best about Skaffold is that it is aware of exactly which parts have changed. Notice carefully how it triggers the build for only 1 Docker image and subsequently deploys only the impacted service. This is freaking cool.

Another interesting point I found out about Skaffold was its cleanup mechanism. After the testing is completed, I can stop the continuous monitoring and deployment by stopping the dev command (CONTROL + C). Skaffold does a neat job of cleaning up the resources it created itself.

skaffold cleanup

Gotchas & Limitations

Looking at the installation section of the GitHub page, it gives the impression that Skaffold is supported only on Linux & Mac. But as rightly pointed out by Baltazar Chua, there is support for Windows OS as well. You can download the Windows executable from:

https://storage.googleapis.com/skaffold/releases/latest/skaffold-windows-amd64.exe

Refer to the detailed documentation for more details.

If you are using Minikube as the target cluster, Minikube has to be up and running; Skaffold will not start the Minikube instance automatically for you.

Conclusion

As we can see from the quick demonstrations in this post, Skaffold is quite a handy utility to add to your toolkit if you are developing containerized applications and deploying them to a Kubernetes cluster. I can see myself using it during the development phase to improve the inner loop workflow. Based on my past experience, I can say that it is a huge productivity boost and can easily save a few hours a day. I was using docker compose to build multiple images together; with Skaffold taking care of building, pushing and deploying images continuously, I don't feel the need for docker compose in my workflow. In this post we saw how to use Skaffold with a Minikube Kubernetes cluster.

In a future post I will also demonstrate how to use it with the built-in Kubernetes cluster in Docker for Mac.

Until next time, Code with Passion and Strive for Excellence.


My developer toolkit 2018 (Mac)

Background

Last year I wrote a post about my developer toolkit 2017 (Mac). There are some modifications this year. For the last year, I have used a MacBook Pro as my primary laptop, and many of the tools that I have been using are open source. It is an opportunity to list the current state of things. Let's get started by looking at this year's developer toolkit.

Terminal utilities

iTerm2 with OhMyZsh – continued from past

iTerm has become quite an integral part of my life on Mac. I love the colorful terminal with the Powerlevel9k theme. Although code editors and IDEs like Visual Studio Code and IntelliJ IDEA offer integrated terminals, I still prefer the dedicated iTerm2 terminal in standalone full-screen mode.

Addition of Powerlevel9k theme

Modern day terminals are no longer black and white; we can modify them with themes to make them sleek and sexy. The post from Jessica Dean helped me add more jazz to my terminal window.

Tabset plugin for iterm2 – continued from past

Tabset is still one of my favorite plugins for iTerm2. I like to have named tabs in iTerm: one for GitHub related activities, others for project-specific activities. Examples include a dedicated tab for Kubernetes while working with kubectl, one for the Azure CLI, one for PowerShell etc. I have seen many people use tmux to split the terminal window into multiple sections; personally I still feel very comfortable with a single tab view in iTerm.

kubectl apply

Code Editors

Visual Studio Code – continued from past

Visual Studio Code, or VS Code, also commonly known as Code, has become my primary code editor over the past year. All the public talks I have given over the last year were done using VS Code. I love the simplicity and the minimalistic approach taken by the designers of Code. Over time I have added some extensions / plugins which have made working with Code even more pleasing and improved the overall user experience. Here is the list of extensions I currently have with VS Code.

VS Code

I like the dark Monokai theme. On the Retina display the colors really stand out, which gives a lot more incentive to code with passion.

IntelliJ Idea community edition – continued from past

IntelliJ IDEA is one of the best IDEs for Java development. I use it when I want to do Java or Scala development. I use the following plugins with IntelliJ.

IntelliJ idea

As compared to VS Code, the Material UI theme along with the file icons makes IntelliJ a deadly combination. Gone are the days when we would look at black & white code editors and IDEs.

GitHub Desktop – continued from past

It is very rare that I switch to the UI of GitHub Desktop. Many of its features are also available via the source code management plugins and extensions within VS Code and IntelliJ IDEA. Once in a while I need to check the history, and I find GitHub Desktop helpful in that case. Regular operations like cloning GitHub repositories and doing check-ins I do from the iTerm terminal or from within the IDE.

SQL Operations Studio – Newly added

While working with databases, you can never underestimate the power of client connectivity tools. While on Windows SQL Server Management Studio (SSMS) was my preferred client, I found the recently launched SQL Operations Studio to be quite useful. I am not a full time DBA, so the bare minimum features provided by this client are enough for me to run simple select queries and to manage the creation of databases and other SQL Server objects.

Virtualization software

Docker – continued from past

Docker has become an integral part of my life over the last year. I have used it almost every time during my public talks at conferences and meetups. Apart from my public demos, I have used it for experimentation related to Kafka Connect and for trying out SQL Server 2017 Linux features. Nowadays I look for the availability of Docker images for any new stuff I wish to experiment with. It allows me to keep my system clean, as I can delete an image from my machine without leaving any side effects behind.

Minikube – Newly added

As my knowledge of Docker has increased, I have also started using Kubernetes for container orchestration. To test Kubernetes manifest files locally I use Minikube. It helps me make sure all the Kubernetes objects are defined correctly before deploying them to actual clusters in Azure.

General utilities

Homebrew as package manager – Continued from the past

I have been using Homebrew as the package manager for installing new software on Mac. For UI applications I have been using Homebrew Cask. I don't remember downloading most commonly used software from their websites; using Homebrew is super easy.

Ironically, the .NET framework was one of the things I had to install manually due to some problem with managing the dependencies. I have installed almost all the tools listed in this blog using Homebrew and casks.

I have tried doing the same on my Windows laptop with Chocolatey. Once you get used to Homebrew or other package managers, believe me, you will not want to go back to UI based installations.

Retina Display Menu (RDM) – Newly added

RDM gives us the option of setting the Mac Retina display to higher resolutions. It works not only with the Retina display but also with the secondary monitor that I have.

Skitch – Newly added

I use Skitch to edit images that I post on my blog. It helps with things like adding annotations to the image.

ImageOptim – Newly added

ImageOptim helps compress image sizes before posting them on the blog.

Awareness – Newly added

Awareness is a nice little utility which shows you how long you have gone without taking a break while working on the PC. A good thing about this utility is that it also has a Windows equivalent.

Whatsapp Desktop – Newly added

WhatsApp has become the primary mode of communication. I have the WhatsApp desktop client for Mac, which is helpful as you don't have to switch from laptop to smartphone.

Telegram Desktop – Newly added

I recently started using Telegram. Just like WhatsApp, having the Telegram desktop client is useful to communicate with your contacts right from the laptop.

Ansible – continued from past

My Ansible usage grew over the past year as I started looking for opportunities to remove manual steps. I like the simplicity of Ansible; it makes automating things a breeze. Recently I managed to deploy a piece of software on a multi-node cluster using Ansible.

Powershell – Newly added

While working on my Voxxed Days and Azure Bootcamp demos, I realized that Ansible could not address my needs for automating parts of the deployment of Kubernetes objects. That's when I stumbled upon PowerShell, which now works cross platform. I developed small scripts which help me automate provisioning a Kubernetes cluster in Azure using the AKS service, deploying the multi-container app to it, tearing down the application resources, and finally deleting the resources from a resource group in Azure.

Azure CLI – Newly added

I have been exploring cloud technologies with Azure quite a lot during the last year. I find the Azure CLI quite handy while dealing with Azure resources. I even wrote a blog post about how we can learn more about Azure capabilities using the interactive mode of the Azure CLI.

Productivity tools

Microsoft OneNote – continued from past

OneNote has been my companion for the last couple of years. I create a new OneNote notebook every year: one notebook for office related work and another for personal use. I make full use of my iPad Pro to take notes wherever I go, be it in the office or at community events. I can't imagine what would happen if I lost the notebooks I have created over the last 2-3 years; in all likelihood all my IP would be lost with them.

MacPass – continued from past

MacPass continues to be my password manager across multiple devices.

Todoist – continued from past

Todoist has helped me better organize myself. I use it to create multiple projects for personal, official and community related activities. It also helps me organize recurring tasks like bill payments, insurance payments etc.

Dropbox – continued from past

Dropbox continues to be my preferred approach for syncing documents across multiple devices.

Adobe Acrobat Reader – continued from past

Adobe Acrobat Reader continues to be my preferred reader for reading ebooks due to its capability to sync across devices via Adobe Cloud.

CheatSheet – continued from past

CheatSheet still continues to be my best friend when I want to learn about keyboard shortcuts in existing and new programs.

Spectacle – continued from past

Spectacle is another utility which I use on a daily basis. I use it to move windows across multiple screens and resize them.

TweetDeck – newly added

TweetDeck gives the ability to add multiple columns for Tweets, Mentions, Retweets, Notifications etc.

Battery related utilities

I still use all the battery related utilities listed in last year's blog. Among them, Battery Monitor is the one I find most useful. It gives the option to set thresholds for when notifications should appear, so I know immediately when the battery is 100% charged as well as when there is only 5% remaining. This helps me not overcharge the battery.

Things which got deleted / deprecated

Atom and Sublime text code editors

I did not find any use case for these two editors during the last year. I should remove them from the Mac Dev Setup Ansible playbook which I use to set up the laptop.

VirtualBox & Vagrant

Since Docker now ships Docker for Mac, I did not find a use case for VirtualBox and Vagrant. VirtualBox is still used by Minikube internally, but I did not have the need to work with VirtualBox or Vagrant directly. I think Vagrant will go away from my Mac soon.

Blogo

Blogo did not have the features I was looking for. Instead of trying to find a blog editor native to Mac, I decided to stick with my preferred Open Live Writer on Windows.

f.lux

Initially I was impressed by f.lux. But then I ended up switching it off more often than not, and in the end I decided to remove it altogether.

What I am still looking for

Mac alternative for MobaXterm

On Windows, I am used to MobaXterm, which allows me to store a collection of remote systems: I can just double click on one of the nodes and log in to the remote server. I have not found a similar utility for Mac. I also like the multi-tabbed interface of Moba.

Conclusion

During the past year, as I shifted from Windows to Mac as my primary development laptop, I embraced many open source tools. I mainly worked with Docker and Kubernetes during this time. For C# code I used VS Code, and for Java and Scala I used IntelliJ as my preferred IDE. I have got used to the Spotlight feature of Mac to quickly start applications and look for documents.

Nowadays, my workflow involves using Spotlight with the COMMAND + SPACE keyboard combination, using Spectacle to move the application to the screen I want to work with (mostly using the key combinations COMMAND + OPTION + CONTROL + LEFT ARROW or RIGHT ARROW), and finally using full screen mode with Spectacle's COMMAND + OPTION + F.

I am sure there are many other tools and utilities which can make a developer's life easier while working with Mac; I would love to hear about them. I hope readers find some of these tools and utilities useful.

Until next time, Code with Passion and Strive for Excellence.


Understand Kubernetes Object - Init Containers

Background

This is the fifth part of the series on building highly scalable multi-container apps using AKS. So far in this series we have covered the following topics:

In this post we will get familiar with the concept of init containers. Let's first understand the need for init containers. In our application we have the TechTalks API which depends on the persistent data stored in a MS SQL Server database. It takes a few seconds for the SQL Server container to start and initialize the static data. In the meantime the web API container will start and try to connect to the database container, and it will fail if the database container is not yet initialized. For such situations, Kubernetes provides init containers: they ensure that initialization tasks are completed before the dependent container is started.

In our case we will make use of an init container to initialize the database. The web API container will be started only after the database is successfully initialized.

We will be performing the following actions during this post:

  • Use docker-compose to build and publish images to Docker Hub
  • Set up an init container using a Kubernetes manifest file to initialize the TechTalksDB inside the SQL Server 2017 container
  • Deploy services in one go using kubectl

Pre-requisites

Build Docker images for the following components:

  • TechTalksWeb
  • TechTalksAPI
  • TechTalksDB

I find it easier to use docker-compose to stitch the services together. We can then build, tag and push these images using docker-compose instead of building them individually. Once the images are pushed to the Docker Hub container registry, we can deploy the web frontend, web API and database containers using Kubernetes manifests.


Here is an example of the compose file which I use for composing the multiple services. I recently presented a hands-on session specifically around stitching multi-container apps using docker-compose for the Azure User Group. The video of the session is available at the Engineers.sg site.
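The shape of the file is sketched below; the service names and build contexts are illustrative.

version: '3'
services:
  techtalksdb:
    image: nileshgule/techtalksdb
    build:
      context: ./TechTalksDB
  techtalksapi:
    image: nileshgule/techtalksapi
    build:
      context: ./TechTalksAPI
    depends_on:
      - techtalksdb
  techtalksweb:
    image: nileshgule/techtalksweb
    build:
      context: ./TechTalksWeb
    depends_on:
      - techtalksapi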

We will be using a SQL script to initialize our database with static data. The initializedb.sql script is straightforward: it creates the database if it does not exist, and creates tables to store static data like categories, levels etc.
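A sketch of what such a script looks like; the table name and seed values here are illustrative.

IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = 'TechTalksDB')
    CREATE DATABASE TechTalksDB;
GO

USE TechTalksDB;
GO

IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'Categories')
BEGIN
    -- create the lookup table and seed it with static data
    CREATE TABLE Categories (Id INT IDENTITY(1,1) PRIMARY KEY, Name NVARCHAR(50) NOT NULL);
    INSERT INTO Categories (Name) VALUES ('Architecture'), ('Cloud'), ('DevOps');
END
GO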

Set up init container
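A sketch of the relevant part of the deployment spec is shown below; the init container image, command and password match what is described next, while the container names are illustrative.

spec:
  initContainers:
    - name: init-database
      image: nileshgule/sqlclient
      command:
        - /bin/sh
        - -c
        - /opt/mssql-tools/bin/sqlcmd -S db-deployment -U sa -P January2018 -d master -i initialize-database.sql
  containers:
    - name: techtalksapi
      image: nileshgule/techtalkapi:v1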

In the code snippet above, we can see that the spec has an initContainers section. Here we specify the image to be used, named nileshgule/sqlclient. At the time of building this image, we copy in the initializedb.sql script. We specify the command to be executed using the sqlcmd command line utility.

/opt/mssql-tools/bin/sqlcmd -S db-deployment -U sa -P January2018 -d master -i initialize-database.sql

This is followed by the regular containers section. The containers will be instantiated only after the init container has completed its work. This ensures that the TechTalksDB database is fully initialized before the API tries to connect to SQL Server.

Deploy services in one shot

Earlier we saw how to deploy a service by running the kubectl command and passing an individual manifest file to it. The kubectl command is quite flexible and provides a convenient way to recursively deploy multiple objects with a single command. Under the Minikube folder, we have multiple sub-folders which contain manifest files for each type of service. Instead of running the kubectl create or apply command with individual filenames, I can run the following command to recursively apply all the manifest files in one go:

kubectl apply --recursive --filename .

At the time I was building the demo for the Azure Bootcamp presentation, I had multiple services running on Minikube; I am using a subset of those services here. To deploy only the 3 services along with the namespace, I executed the commands shown below in the specific order.

kubectl apply

The same set of commands is also available as a PowerShell script and can be executed in one go by running the script. The apply command applies only the changes to the desired state configuration; if the current state matches the expected state, no changes are made. The end result of this command is as shown below.

Kubernetes dashboard

We can see that the techtalksapi pod contains two containers. If we dive deep into the Pods section and click on the TechTalks API related pod, we can see the following details.

tech talks API detailed events

Notice that the techtalkapi:v1 container is only created after the sqlclient init container has run to completion.

We can also see service discovery working to perfection. The web API refers to the SQL database using its service name, db-deployment, in the manifest file, where normally we would have provided a fully qualified connection string to connect to the SQL database server. In a future post we will look into aspects related to resource requirements in detail.

Conclusion

As we saw in this post, the concept of init containers is quite powerful in ensuring that dependent services are up and running. Ideally you would also implement a resilience mechanism like exponential backoff to handle failures; we will cover that part in a future post. The complete source code is available on GitHub.

Until next time, Code with Passion and Strive for Excellence.


My experience with Voxxed Days 2018

Background

This post is about my experience at the Voxxed Days Singapore 2018 conference. I attended the conference the previous year and shared a similar experience following the 2017 event. This is my attempt to give my perspective on the great event which happened on 1st June 2018.

Brief history about my own journey from attendee to the speaker

Voxxed Days 2017 was a life changing event in my career. I was inspired and highly motivated by the talks from the event. Two talks which will remain in my memory forever were the ones by Josh Long and Burr Sutter. There were definitely others which were equally good, but these two guys left a lasting impression on me. I was determined to follow in their footsteps and share my knowledge with the community.

I was active in community events as an attendee before this. But after 2nd June 2017, I was inspired to give back to the community as much as I could. I managed to speak at a couple of our internal Tech Talks in our office, which helped me polish my public speaking skills. I also spoke on a couple of occasions at the Singapore Azure Meetup events. All these talks gave me confidence that I could speak on a bigger stage.

So when the call for papers went out for Voxxed Days 2018 in January, I got super excited and submitted 3 different topics. Luckily one of them was selected as a full length session on Modernizing Application Development with Highly Scalable Architecture using Docker and Azure. I was on cloud nine when my name was confirmed as a speaker among the first 10 talks selected for this year. I was also fortunate to present a similar talk at the Global Azure Bootcamp which happened on 21st March 2018.

Preparation for the event

Credit Agricole CIB, my employer, was one of the platinum sponsors of the event. We had a booth at the venue to promote our culture and improve our branding; as part of those efforts we showcased our different community engagements via a small video. I am fortunate to be part of the core community group which organized more than 25 Tech Talks with internal as well as external speakers. Like last year, we transported various goodies from our office to the event location, Marina Bay Convention Center, on the evening of 31st May.

Meetups before the event

We have been extending the scope of our community engagement by hosting Meetup events recently. My colleague Stephane Martins is part of the Voxxed Days organizing committee. He managed to arrange a Tech Talk, From JavaScript to Haskell: functional programming made simple, by Xavier Detant and Laurent Maillet on May 30. We also had a NightClazz by Xavier Detant from Zenika on 30th May at our office.

And then there was a big meetup for the Java User Group (JUG) on May 31st, with Burr Sutter and Kamesh Sampath presenting Jakarta EE. As part of the same event, there was also a talk by Sergie Almar on the Spring ecosystem. The period from around 23 May onwards was a bit hectic in preparation for these meetups and Tech Talks. But our whole core group, consisting of Stephane Martins, Baltazar Chua, Yuvaraja Chennakrishnan, Svetlana Mae Ebbah (Yvette), Ross Brigoli, Sandeep Mishra and Allan Chua, was excited and charged up to make it a grand success. We also had support from other volunteers who helped us with the logistics. There was a lot of buzz around the office as we were hosting world class speakers on our premises. The feedback from the attendees to these events was quite positive.

Speaker Dinner


As is the practice with most conferences, I was invited to the speaker dinner. Although I was a little late in reaching the spot after attending the JUG meetup with a few other organizers, it was a great experience. I made some new friends. Sitting alongside experienced speakers and listening to their views was in itself a great learning experience for me. I got some great tips from Simon Jaillais, who shared his views on how to manage the emotions before a talk and how to prepare and relax at the conference. He was in fact giving the tips to fellow speaker Minisha Murugan, who, like me, was also presenting for the first time at an international conference.

The Day of the event


I reached the venue at around 7:40 in the morning; registration started at 8 AM. I was super excited to receive the coveted speaker badge and the speaker t-shirt. As people started coming in, we had quite a rush at our booth. We had prepared a small questionnaire for the attendees and were giving away a t-shirt with a very catchy design on it. Our initial plan was to distribute the t-shirts in multiple batches: in the morning before the keynote, during lunch time, and during the coffee break in the evening.


The t-shirts were so popular that we almost ran out of stock even before the keynote started. That was a fantastic beginning to the day. The event started on time with the keynote by Burr Sutter and Kamesh Sampath. There was also a small ceremony at the beginning where representatives of the sponsors were presented with a memento. It was a proud moment for us to have one of our core team members, Baltazar Chua, receive the trophy.


Small tribute to the man who shaped my life

I am not known to be a very emotional person. By nature I am one of the most introverted people on earth, and there have been very few occasions in my life where I have shown emotions. On this occasion, I went through a lot of emotions. Due to my inherent nature, never in my wildest dreams had I imagined that I would be speaking at such a conference.

Ever since my talk was selected for the event, I had been constantly remembering the person who shaped my life. People who are close to me know the importance of this person: it is none other than my maternal uncle, Ashok Bhat. Whatever little success I have had over the years, I owe it to him. I was completely clueless after finishing my class 12 exams and did not know what to do or where to go. He took me under his wing and ensured I had all the liberties needed for my higher education. My parents could not afford to pay my tuition fees in full; he was the one who told my mother to forget about the financial difficulties, and he sponsored my full education.

When I was leaving for Bangalore in 2002, he told me one thing which has remained with me forever. He said he would like me to give my best in whatever I do, wherever I go; not to restrict myself just to Bangalore, but to travel abroad and make the family proud with my work. I did not have any goals for myself; it was his dream to see me succeed which changed everything for me. I had never thought of moving out of my birthplace, Goa, but destiny had other plans for me. It was my uncle's dream that I made mine and have been living ever since. I do not know what he saw in me that he backed me every time. When most other people would ridicule him, saying this kid is not going to do anything in life, he would ask me to ignore them all and focus on my strengths.

Unfortunately, my uncle is no longer with us. I literally cried on multiple occasions while preparing for the talk, remembering his golden words. I wish he had been there to see me talk in front of an audience. I would have definitely bought him a Voxxed Days ticket and made him sit in the front row if he were here today.

Moment when I was dumbstruck

There was another moment during the day which triggered emotions for me. I was sitting at the booth rehearsing my talk for the last time after the lunch break. Someone tapped on my shoulder and handed me a copy of the Cloud Native Java book signed by Josh Long himself. He gave me a hug and said this is my gift for you, I hope you like it. I did not know how to react for a few seconds. Thank you Stephane Martins for your wonderful gesture in making me feel so special. I am totally humbled by your graciousness.


My fanboy moment with Josh Long

After I received the book, I managed to get a few moments with Josh Long at the Pivotal booth. I was too shy to talk to him for more than a couple of minutes. It was my fanboy moment when he agreed to pose for a photograph with me. It was his talk last year at the same event which inspired me so much. I will frame this photo and keep it forever to tell my grandchildren about my fanboy moment with the iconic Josh Long.


Feedback about the talks I attended

I was trying hard to calm the nerves before the first big talk of my life. I did not attend many talks this year; I preferred to stay at the booth and relax a bit, knowing that all the talks were recorded and I could watch them later. There were a couple of talks which I did attend. Here is my personal opinion of them.

Designing Reactive Apps in Kotlin by Stephan Goncharov

I don't know anything about Kotlin. The speaker explained language features and his experience in using some of them. It was a decent talk with some practical advice on how to apply some language constructs.

Reactive Spring by Josh Long

As the Kotlin talk finished early, I moved to the adjacent room where Josh Long was enthralling the audience with his inimitable style. Even though I attended only the second half of his talk, there were so many wow moments in it. Here is the video link to this exciting talk; I am sure you will agree with me on this point. So far I have attended 3 live talks and sessions by Josh Long, and every time he inspires with his passion and energy. I hope some day in this lifetime I can do a live coding session like him.

Kubernetes for the Konfused, a strategy toolkit for normals by Michael Cote

This talk was in the same room after my talk. For the initial 5 minutes or so I was not sure where Michael was going; his references to blinking cursors and stuff like that did not make much sense. But after about 5 minutes everything started to come together and I really liked his talk. The way he explained concepts was like a sage explaining things in a calm and composed manner.

A bit about my own talk



My talk was scheduled in the Steve Jobs room from 16:00. I had prepared the talk for about an hour, so when the organizers asked me to keep it to 45 minutes, I was a bit worried whether I could do it in that time frame or not. I breezed through some of the sections and managed to finish the talk in about 40 minutes. It was a great experience. I had a few questions from the audience at the end. A couple of guys requested me to send a copy of the presentation to their email ids. There was an experienced professional who spent more than 5 minutes trying to understand why I was using Docker compose instead of Kubernetes manifest files; he also shared his experience with an expired certificates problem on an existing Kubernetes cluster.


I was thrilled when one of the attendees told me that it was the best session he had attended during the whole day. That made my day. A small compliment like this is worth all the effort we speakers spend in preparing for and delivering a talk. Frankly speaking, I did not expect that kind of compliment, knowing that there were far more accomplished speakers at the conference.

I would like to take the opportunity to thank two of my colleagues who helped a lot during the preparation of the demo for the talk. First, Yuvaraja Chennakrishnan created the initial project for a tech talk on .NET Core that we delivered together about 6 months back; I have been using his initial codebase for my various tech talks since then. Next is Baltazar Chua, who has been giving me different challenges of sorts to incorporate into the demo.

Every time I discussed with him what I was going to present, he would ask me: why not do this or that? The container monitoring with Prometheus and Grafana as well as the state management using Persistent Volumes that I showed during the talk were his suggestions. These things were not on my initial list but were included as per Bal's suggestions. This guy is a genius, and that is one of the reasons I have a special hashtag for him: #BalIsARockStar. Thank you guys for your contributions.


My colleagues who attended the talk made me feel like a celebrity by taking a selfie with me. Thank you guys for making me feel special; without your constant support I could not have done this alone. A big shout out to the guy in the red t-shirt, Stephane Martins, who really motivated and inspired us to get out of our comfort zones and be active community members. He managed to infuse the community culture into our DNA.

I can't thank my friend Allan Chua enough for recording my talk. The video is available on the Engineers.sg site.


I received a couple of other pieces of feedback from my colleagues about the talk. I would really appreciate it if the attendees could give me their honest feedback, which will help me improve my future talks.

Closing keynote by Go-Jek CTO Ajey Gore

When we go to conferences like this, we expect great talks from star speakers. There was one talk which most people found really interesting: the closing keynote by the Go-Jek CTO Ajey Gore. Throughout his talk he made statements which were thought provoking, and experienced professionals could relate to what he was saying. The little nuggets of wisdom that precipitated from his speech were simply awesome. After the event, my colleague Nikhil Baranwal shared a link to an inspiring story of Ajey Gore, about how a small town commerce graduate became CTO of a multibillion dollar company. It is real life stories like this that inspire us to believe that you don't always need to be a computer science graduate to become successful in the IT industry. All you need is the passion to succeed, a little bit of luck, and lots of hard work to make things happen.

Conclusion

The overall event was a great experience, both as a first time speaker and as an attendee. The organizers did a great job in raising the bar from the previous year. They paid special attention to the needs of people fasting during Ramadan by providing a special meal at the end of the day. It is attention to such minute details which distinguishes great conferences from good ones.

At the end of the event, I was discussing it with a few colleagues. I had read somewhere that when we go to such conferences, we should not expect the speaker or the people at the booths to solve our problems; the context in which you present the problem and the one the speaker is using will be completely different. The role of the speaker is to infuse enough curiosity into your mind in 30-45 minutes that you can go back and find solutions to your problems on your own using the newfound knowledge acquired during the talk. I hope the attendees managed to expand their networks with some of the experts from the industry. I had the opportunity to make new friends. Personally I am satisfied with what I got from a day well spent at Voxxed Days; we managed to have lots of fun. I hope next year's event will be even bigger and better.

The recording of all the talks is available at Engineers.sg site.

Until next time, Code with Passion and Strive for Excellence.

