Upgrade SQL Server 2017 to 2019 with HA for containers

Background

I have been using the SQL Server 2017 Linux container image for quite some time as part of my learning around Docker and Kubernetes. Earlier, I had written about how to build a custom SQL Server 2017 Docker image and also how to integrate SQL Server 2017 Linux with ASP.Net Core. Microsoft has announced a preview of SQL Server 2019 with the Always On Availability Groups feature. I decided to upgrade from 2017 to 2019 with the HA features and deploy it to a Kubernetes cluster on Azure Kubernetes Service (AKS). This post is about the changes required to migrate from the older SQL 2017 Linux version to the 2019 preview.

In this post we will perform the following activities:
  • Deploy Kubernetes cluster with 4 nodes on Azure using Azure Kubernetes Service (AKS)
  • Deploy SQL Server 2019 Linux with Always On Availability Group
  • Update the application code to redirect to SQL 2019 instance from SQL 2017
  • Quick look at the differences between 2017 & 2019 SQL Server Linux containers

I will be using the code from my AKS learning series repository. Note that the code is in the feature/sql2019 branch and has not yet been merged into master.

1 - Deploy Kubernetes cluster with 4 nodes on Azure using Azure Kubernetes Service (AKS)

I have got used to creating AKS clusters frequently for different demos and presentations. Instead of typing the lengthy commands in the terminal, I developed a PowerShell script which does all the heavy lifting. There are a bunch of other PowerShell scripts which I use quite often; these are available under the Powershell folder in the repo. I use the initializeAKS.ps1 script to create an AKS cluster with the following configuration:
  • Resource Group Name – techtalksgr
  • Resource Group Location – South East Asia
  • Cluster Name – techtalkscluster
  • DNS Name prefix – techtalksdns
  • Worker node count – 4
  • Kubernetes version – 1.13.5
All these parameters can be overridden at the time of invoking the PowerShell script. Apart from these, the script also uses the following parameters, which are currently hardcoded:
  • VM Node size – Standard_D2_V2
  • Kubernetes addon – http_application_routing
Along with creating the AKS cluster, the script also sets up the kube config with the credentials required to run kubectl commands. It also creates a cluster role binding so that the Kubernetes dashboard can be accessed using a proxy URL.
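For reference, the heavy lifting inside initializeAKS.ps1 boils down to a handful of Azure CLI calls along the lines of the sketch below. The parameter values come from the list above; the actual script in the repo may differ in its details.

  az group create --name techtalksgr --location southeastasia

  az aks create `
    --resource-group techtalksgr `
    --name techtalkscluster `
    --dns-name-prefix techtalksdns `
    --node-count 4 `
    --node-vm-size Standard_D2_V2 `
    --kubernetes-version 1.13.5 `
    --enable-addons http_application_routing `
    --generate-ssh-keys

  # fetch credentials for kubectl and allow dashboard access via proxy
  az aks get-credentials --resource-group techtalksgr --name techtalkscluster
  kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard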

In order to run the script, clone the repo and execute initializeAKS.ps1 from a PowerShell prompt. You should see an output similar to the screenshot shown below.


After the resource group is created, it takes a few minutes for the AKS cluster to spin up, depending on the size of the cluster. In my case, provisioning the 4 node cluster took about 10-15 minutes. Once the initialization script completes, the output should look similar to the screenshot below.





The green lines in the output show that the cluster has been successfully created with Kubernetes version 1.13.5. With the cluster ready, let's get started with deploying SQL Server 2019 with Always On Availability Groups.

2 - Deploy SQL Server 2019 Linux with Always On Availability Group

In order to deploy SQL Server 2019 Linux with Always On Availability Groups, we need to take the following actions:
  • Deploy SQL Server Operator
  • Create the sa password and master key password
  • Deploy SQL Server custom resource
  • Deploy load balancing services

All these steps are documented in the Always On Availability Groups for SQL Server containers deployment documentation. Once again, I have automated the steps listed in the docs as part of a PowerShell script. The script is a series of kubectl commands which deploy the required objects to the Kubernetes cluster. It is part of the overall application stack which deploys the SQL database, the Web API and the ASP.Net Core front end. The script is available on GitHub as deployTectTalks-AKS.ps1.
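If you prefer to run the steps by hand, the core of that script is roughly the following sequence of kubectl commands. The manifest file names and the secret key names here are taken from the Microsoft documentation sample; the script in the repo may differ slightly.

  kubectl create namespace ag1

  kubectl create secret generic sql-secrets `
    --from-literal=sapassword="<your sa password>" `
    --from-literal=masterkeypassword="<your master key password>" `
    --namespace ag1

  # deploy the SQL Server operator, the SQL Server custom resource and the AG services
  kubectl apply -f operator.yaml --namespace ag1
  kubectl apply -f sqlserver.yaml --namespace ag1
  kubectl apply -f ag-services.yaml --namespace ag1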


Upon successful execution of the script, we will get an output similar to the screenshot below.

We can verify that the required objects are created by querying the Kubernetes pods using the Kubernetes command line:

kubectl get pods --namespace ag1





We can see that the mssql initialize pod has completed and the operator and mssql pods are running successfully. As part of this deployment, the mssql pod is running two replicas. Apart from the pods, there are also multiple services deployed to the Kubernetes cluster.

We can query these services with the command:
kubectl get svc --namespace ag1
Initially we will see that it takes some time to assign the external IPs to the load balanced services.




After a few minutes we will see that all the services have been assigned an external IP.




Once we have the IP address assigned to the ag1-primary service, we can use that IP to connect to SQL Server 2019 from the application.


3 - Update the application code to redirect to SQL 2019 instance from SQL 2017

The way the TechTalks application works is that the TechTalksAPI sits between the web front end and the database. I use the init container technique to initialize the database when the TechTalks API related objects are deployed to Kubernetes. This is the same approach I used with SQL 2017 as well. Note down the public IP of the ag1-primary service, as we need to update it in the api-deployment.yml deployment file. Replace the server IP in the Data Source part of the connection string with that IP address, 104.215.193.33 in this case. With this change we will be able to connect to the SQL Server instance with the Availability Group.
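As an illustration, the relevant part of api-deployment.yml ends up looking something like the snippet below. The environment variable name here is a placeholder; use whatever key the TechTalksAPI actually reads its connection string from.

  env:
    - name: ConnectionString   # placeholder name, match what the API expects
      value: "Data Source=104.215.193.33;Initial Catalog=TechTalksDB;User Id=sa;Password=<sa password>;"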

There is one more minor change I had to make to the initialize-database.sql script which I trigger via the init container. I added a check to see if the current version of SQL Server is SQL 2019. If that is the case, we add the TechTalksDB to an availability group named AG1. This makes the initialize database script compatible with both SQL Server 2017 and 2019.
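The idea behind that check is only a few lines of T-SQL; a simplified sketch is shown below. The actual script in the repo may differ, and adding a database to an availability group also requires the full recovery model and an initial backup, which is why those steps appear here.

  -- Only add TechTalksDB to the availability group on SQL Server 2019 (major version 15)
  IF CAST(SERVERPROPERTY('ProductMajorVersion') AS INT) >= 15
  BEGIN
      ALTER DATABASE TechTalksDB SET RECOVERY FULL;
      BACKUP DATABASE TechTalksDB TO DISK = '/var/opt/mssql/data/TechTalksDB.bak';
      ALTER AVAILABILITY GROUP [AG1] ADD DATABASE [TechTalksDB];
  END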




With these modifications we are all set to deploy the TechTalksAPI. Navigate to the k8s/AKS/TechTalksAPI folder in the source code and apply the Kubernetes manifest files using the command
kubectl apply --recursive -f . --namespace ag1

The API deployment and service objects will be created as shown below




We now have all the components of the application stack deployed to the AKS cluster. Let's browse the front end using the IP of the TechTalksWeb service, which in our case is 104.215.185.167. If everything went fine, we should see output similar to the screenshot below.

We have successfully connected to the SQL Server 2019 database with an Availability Group on Kubernetes.

4 - Quick look at the differences between 2017 & 2019 SQL Server Linux containers

The SQL Server 2019 Linux container is still in preview. It offers benefits over the SQL 2017 version by providing built-in high availability by means of Availability Groups. The SQL operator makes it easier to support features such as health checks and automatic selection of primary and secondary replicas in case of failover. With SQL Server 2017, Kubernetes manages failover via StatefulSets, and it can take some time to recover from a pod or node failure, whereas recovery is much faster in SQL 2019.
There was one difference I encountered between SQL 2017 and 2019. With the 2017 version, I was able to access SQL Server via its service name, e.g. db-deployment, from the TechTalksAPI deployment; I did not need to know the IP address of the container. This is not the case with the Availability Group service: I need to know the IP to access SQL Server 2019. I hope this will be addressed by the time SQL 2019 goes GA.

Conclusion

Migrating from SQL 2017 to SQL 2019 took some time. Getting SQL 2017 up and running on Kubernetes is quite straightforward, with a StatefulSet deployment and a service object which can be accessed by the API or the front end application. SQL 2019 requires additional steps: deploying the operator, the SQL Server custom resource and the availability group services. Although these additional steps can be tedious to debug, they add much needed resiliency and make the database highly available in the event of node or pod failure. I hope readers will find it easier to deploy and experiment with the SQL 2019 Linux features on containers with the help of the PowerShell script, which handles some of these complexities.

Until next time, Code with Passion and Strive for Excellence.

My mantra to clear AZ-300 Azure Architect Technologies certification

Background


Recently I cleared the AZ-300: Microsoft Azure Architect Technologies certification. Many people have asked me how I went about preparing for the certification exam and what resources I used to clear it. This post is about the approach I took to prepare for this exam. I'll share my experience with the following topics:
  • Skills measured
  • Online courses
  • Hands-on labs
  • Sample test
  • Notes


Skills measured

The first thing to do while preparing for the exam is to understand what skills are measured as part of this test. Head over to the test details at https://www.microsoft.com/en-us/learning/exam-AZ-300.aspx and scroll down to the skills measured section. It tells us the percentage of questions likely to appear in the test from each area. Here is a quick summary of the main topics:
  • Deploy and configure infrastructure (25-30%)
  • Implement workloads and security (20-25%)
  • Create and deploy apps (5-10%)
  • Implement authentication and secure data (5-10%)
  • Develop for the cloud and for Azure Storage (20-25%)
In summary, 80% of this exam is around topics related to infrastructure, networking, security and storage.


Online courses

There are multiple options when it comes to online courses, each with its own pros and cons. Even before taking this certification, I had used multiple online learning platforms including Pluralsight, Udemy, edX, LinkedIn Learning etc. For this exam in particular, I referred to 3 main online resources:
  1. Pluralsight learning path for AZ-300
  2. Udemy course by Scott Duffy
  3. Microsoft Learning with Hands-on Labs

Pluralsight learning path for AZ-300

Pluralsight is one of my favourites when it comes to technical learning. When I started looking for online resources, I was happy to see that Pluralsight had already put together a learning path, which is a collection of courses related to the AZ-300 exam.
There are 28 different courses authored by multiple authors. The total content length is more than 60 hours if you wish to cover each and every course. Most of the courses are short and to the point, around 1 to 3 hours in length, but there are a few which are quite lengthy, running more than 5 hours. One of the difficulties I found with the Pluralsight courses is that there is no predefined structure; you can take them up in any order, and it is difficult to understand which course should be taken up first and which ones later.


Udemy course by Scott Duffy

Personally, I always find it easier to refer to multiple resources. I got to know from a couple of other folks who were also preparing for the same exam that Udemy also has course material. A few months back I had subscribed to the course for the older 70-535 exam. Since Microsoft replaced the older exam with AZ-300, the author, Scott Duffy, had restructured his course to cater to the needs of the new exam.




I found the course structure to be very well laid out, and it helped me quite a lot. The content may not be very deep, but it covers all the basics required to get through the exam. There are also a lot of references provided for additional learning.


Microsoft Learning with Hands-on Labs

Not many people are aware that Microsoft Learning offers free educational content. It is organized along different axes, such as by role or by technology or product. I selected the Solution Architect role based learning. The content is very specific and to the point.

I like the part where you can perform hands-on labs without having an Azure subscription. You are given a temporary Azure account for the duration of the topic, and you can familiarize yourself with the capabilities of Azure. This is not just limited to the portal; in some cases predefined VMs are also created with the required tools in order to complete the tasks.




Microsoft Learning also has a bit of gamification. For every course you complete, you get some points in the form of experience. Although the points cannot be redeemed, it is a good incentive to collect more points, and indirectly it shows how many Azure features you have covered via Microsoft Learning resources.


Hands-on Labs

Although the online courses are helpful, I am one of those people who do not feel comfortable unless I try things out myself. In most cases, I used my Azure subscription to get familiar with the concepts. In some cases it is not possible to do so, for example migrating an on-premises VM to the cloud; in such cases I had to rely on the online course. Things related to storage accounts, virtual networks, VMs, App Services and containers are all easily doable using the Azure subscription.

I would highly recommend practising with your own subscription as much as possible. There are multiple reasons for this. The first is that many of the online courses were recorded more than 6 months ago, and some of the features discussed in a module may not be available in the current Azure portal. Secondly, there are two sections in the exam where you will be required to perform tasks against the live Azure portal. If you have never tried it yourself, I can bet that you will not be able to perform these live tasks during the actual exam.


Sample Test

Some test providers offer the option to retake the test once if you do not clear it in the first attempt. In my case, I had used my MVP benefits to get a discount on the online test, and there was no such option for me, which meant that if I did not clear the test in the first attempt, I would have to pay the full amount again.

Luckily, I found a sample test on Udemy by Scott Duffy and Riaan Lowe named AZ-300 Azure Architecture Technologies Practice Test. It has 2 tests with 50 questions each. Although the format does not match the actual exam, it still gives you an idea of what kind of questions to expect. It also helped me focus on the areas I was not very comfortable with. One point to note is that this practice test does not include the live lab feature and is restricted to multiple choice questions.



I had never taken any online certification test before, so this sample test really helped me a lot. If you have already taken other online certifications, maybe you will have a different experience.


Notes

Every individual has a different way of studying. Some learn by reading books, some by watching videos; others are good at learning by discussing with peers. I personally like to take notes in digital form, and OneNote is my favourite note-taking app. I made good use of OneNote's note-taking abilities during the preparation. Apart from using it on multiple devices like the iPad Pro and Surface Book, I found handwritten notes, along with highlighters and a combination of images, useful. Here is an example of my handwritten notes using OneNote.




Conclusion

This was the first time I was taking any certification exam. In the past, I have never been a fan of certifications. I am thankful to Puneet Ghanshani for changing my perception of certifications. On multiple occasions, I found my discussions with Mayur Tendulkar very helpful. He was the one who suggested that I try Udemy along with other resources.

The certification exam tests the breadth of your knowledge of Azure services. I was able to complete the exam well before the stipulated time of 3 hours. If I were to go back and prepare for this exam again, I would follow a slightly different path: I would start with the Udemy course and use Microsoft Learning in parallel with it, and then go through the Pluralsight courses and do the hands-on labs for deep dive sessions.

During the exam, there are different scenario based questions. At first, I found it a bit difficult to understand the structure of the test. You are given a case study along with supporting documents like the existing infrastructure, technical requirements and target scenario, and the multiple choice questions then relate to these sections. The different sections are separated into a multi-tabbed interface, and if you do not go through each of the tabs, the multiple choice questions do not make sense. It took me some time to understand this pattern, but once I did, the rest of the exam went well.

I feel relieved that the monkey is off my back. It is a good starting point, and I hope to continue with a few more certifications. I hope that people who are looking to clear this exam find this post useful.
Until next time, Code with Passion and Strive for Excellence.

Migrate Travis CI Build to Azure DevOps

Background

Back in July 2017, I had written about how to set up a MacBook almost at the speed of light using Ansible. I had set up continuous integration using Travis CI. The objective was to test any changes to the Ansible playbook automatically. At that point in time, Travis CI provided an option to run the build on macOS. The advantage was that I could test any changes to the playbook without impacting my personal Mac. I had also scheduled the build to run on a daily basis, which was really helpful in identifying outdated extensions in Visual Studio Code. I blogged about this experience in another post, how Travis CI saved me from using an unpublished extension in future.

Recently Microsoft announced the availability of Azure DevOps, which is a combination of multiple services helping us achieve DevOps practices. One of the services available under the umbrella of Azure DevOps is Azure Pipelines, which allows us to define build and release pipelines. We can use these pipelines for Windows, Linux and Mac, and we can make use of the hosted Mac agent to run the build pipeline which runs the Ansible playbook. This post is about how I migrated an existing Travis CI build to an Azure DevOps build pipeline.
We will go through the following points over the course of this post:
  • Understand how the Travis CI build was set up
  • Identify changes required to run the build using Azure DevOps
  • Run Ansible playbook using Hosted Mac Agent 

Understand how the Travis CI build was set up

I had set up the initial Travis CI automated build using a .travis.yml file, which contains the build definition described in YAML. It can be divided into the following sections (a trimmed down skeleton is sketched after the list):
  • Environment setup: defines sudo access, the language as objective-c and the osx_image as xcode9.1
  • Before install: uninstalls the existing Homebrew and reinstalls the required version
  • Install: installs Ansible and creates the folders which are required to run the playbook
  • Script: runs the actual playbook. Note that on line #39 we use a Travis CI timeout of 90 minutes with travis_wait, based on different builds which took approximately 90 minutes to finish. This is the main part of the whole setup: it runs the Ansible playbook with main.yml.
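The skeleton below shows the shape of such a file; the values and helper commands are illustrative rather than a verbatim copy of the file in the repo.

  language: objective-c
  os: osx
  osx_image: xcode9.1
  sudo: required

  before_install:
    # placeholder for the Homebrew uninstall/reinstall steps described above
    - echo "uninstall and reinstall Homebrew here"

  install:
    - sudo pip install ansible
    - mkdir -p ~/folders/required/by/the/playbook   # placeholder

  script:
    - travis_wait 90 ansible-playbook main.yml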

Identify changes required to run the build using Azure DevOps


There are slight differences between Azure DevOps pipelines and Travis CI. Both support YAML files as a way of declaratively defining the build steps. The greatest advantage of this is that the build definition can be scripted and checked into the source code repository. The ability to use YAML files for defining builds is currently available only for build pipelines in Azure DevOps; the release pipelines are currently defined using visual components but are expected to support YAML in the near future.
Compared to the Travis CI YAML file, the Azure Pipelines YAML file contains fewer settings. These are:
  • Pool: defines the VM image to be used, which we set to macOS 10.13
  • Steps: defines the different steps which form part of the build. We have only a single step which runs a script. As part of the script we install Ansible, set up the required variables and run the playbook using the same main.yml file which was used with the Travis CI build.
The main difference between the two scripts is that we do not need to install Homebrew and pip in the case of the Azure pipeline. The remaining steps related to executing the Ansible playbook are exactly the same. The macOS hosted agent comes with truckloads of build tools preinstalled for our usage.
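For comparison, a minimal azure-pipelines.yaml along the lines described above might look like the sketch below; the image name and script body are assumptions based on that description, not the exact file from the repo.

  pool:
    vmImage: 'macOS-10.13'

  steps:
    - script: |
        sudo pip install ansible
        # set up any variables and folders the playbook needs (omitted here)
        ansible-playbook main.yml
      displayName: 'Run Ansible playbook'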

Run Ansible playbook using Hosted Mac Agent 

Let's trigger the build using a hosted Mac agent. After defining the build pipeline in Azure DevOps, when we trigger the build we get an option to choose the agent pool where we would like the build to run. In our case we make sure that the selected agent pool is Hosted macOS.
Before we trigger the build, I would like to check the capabilities of the hosted Mac agent. We can go into the details of all the software packages installed by default on the agent by selecting Project settings –> Agent Pools –> Hosted macOS –> Details. The agent has almost all the tools required for building the most common applications in various programming languages like Java, Node and .Net, along with the ones for Xcode and Objective C. It also has Homebrew 1.9.2 preinstalled, which our Ansible playbook uses.

We create a new build pipeline as shown in the screenshot below. The process is pretty straightforward: we choose the GitHub repository as the source and Hosted macOS as the agent pool. The YAML file path points to the azure-pipelines.yaml file inside the GitHub repo which contains the build definition as declarative steps.


The final step is to queue the build. Assuming everything is set up correctly, once the build is triggered we can check the real time output log. The screenshot below shows a successful build.


Issues faced during migration from Travis CI to Azure Pipeline

The migration was not a lift and shift operation; I had some minor hiccups along the way. The first one was due to changes to the Homebrew packages. Over a period of time, Homebrew had made some changes to its core packages and casks, and some packages moved from core to cask. One such example is chromedriver, which I had to move from the Homebrew packages section to the casks section. Another one was Telegram, where the cask name changed from telegram to telegram-desktop. Once these minor issues were fixed, I was able to successfully run the Ansible playbook.
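In the playbook this amounts to moving those entries between the Homebrew package and cask tasks, roughly as in the sketch below; the task names and the other package names are illustrative, not the exact contents of my playbook.

  - name: Install Homebrew packages
    homebrew:
      name: "{{ item }}"
      state: present
    with_items:
      - git
      - wget

  - name: Install Homebrew casks (chromedriver moved here from the packages list)
    homebrew_cask:
      name: "{{ item }}"
      state: present
    with_items:
      - chromedriver
      - telegram-desktop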



The final output shows the time taken for each step in the build. As we can see from the screenshot below, it took about 16 minutes to complete the whole build.


Conclusion

It was a huge relief to migrate from Travis CI to an Azure DevOps build pipeline. One of the biggest benefits I got from this migration is the speed of testing the automation script which runs the Ansible playbook. When I was using Travis CI, the build used to take close to 90 minutes to finish, and I had to put in a kind of hack to time out builds which ran for more than 90 minutes. After the migration, the Azure pipeline takes just about 15-20 minutes for a similar set of activities. That is a huge time saver.

The next time I need to set up a MacBook from scratch with all the software I need, I know that I can test it on an independent machine using an Azure pipeline, and in 20 minutes or so I would be able to identify whether any changes are required to my Ansible playbook before I run it on the actual laptop.

Another advantage of using a cloud hosted macOS agent is that any new tool or software I want to try out, I can test in isolation first while my main laptop remains untouched. Isn't that great?


Apart from the benefits to individual Mac users, the main advantage of using the hosted macOS agent is that teams can build their iOS software using Azure Pipelines. I hope I have given enough pointers for readers to explore Azure DevOps further. You can register for free and use up to 10 build agents in parallel. That is not all: if you have open source projects and public repos, you get a huge boost in the amount of free time the pipelines can run for.


Until next time, Code with Passion and Strive for Excellence.

