Upgrade SQL Server 2017 to 2019 with HA for containers

Background

I have been using the SQL Server 2017 Linux container image for quite some time as part of my learning related to Docker and Kubernetes. Earlier, I had written about how to build a custom SQL Server 2017 Docker image and also how to integrate SQL Server 2017 Linux with ASP.Net Core. Microsoft has announced the preview of SQL Server 2019 with the Always On Availability Groups feature. I decided to upgrade from 2017 to 2019 with the HA features and deploy it to a Kubernetes cluster on Azure Kubernetes Service (AKS). This post is about the changes required to migrate from the older SQL 2017 Linux version to the 2019 preview.

In this post we will be performing the following activities:
  • Deploy Kubernetes cluster with 4 nodes on Azure using Azure Kubernetes Service (AKS)
  • Deploy SQL Server 2019 Linux with Always On Availability Group
  • Update the application code to redirect to SQL 2019 instance from SQL 2017
  • Quick look at the differences between 2017 & 2019 SQL Server Linux containers

I will be using the code from my AKS learning series repository. Note that the code is in the feature/sql2019 branch and has not yet been merged into master.
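If you want to follow along, the snippet below is a minimal way to get the code onto that branch; the repository URL is a placeholder, so substitute the actual AKS learning series repository.

# Clone the AKS learning series repository (URL is a placeholder) and switch to the SQL 2019 branch
git clone https://github.com/<your-account>/<aks-learning-series>.git
cd <aks-learning-series>
git checkout feature/sql2019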

1 - Deploy Kubernetes cluster with 4 nodes on Azure using Azure Kubernetes Service (AKS)

I have gotten used to creating AKS clusters frequently for different demos and presentations. Instead of typing the lengthy commands on the terminal, I developed a PowerShell script which does all the heavy lifting. There are a bunch of other PowerShell scripts which I use quite often. These are available under the Powershell folder in the repo. I use the initializeAKS.ps1 script to create an AKS cluster with the following configuration:
  • Resource Group Name – techtalksgr
  • Resource Group Location – South East Asia
  • Cluster Name – techtalkscluster
  • DNS Name prefix – techtalksdns
  • Worker node count – 4
  • Kubernetes version – 1.13.5
All these parameters can be overwritten at the time of invoking the PowerShell script. Apart from these, the script also uses the following parameters, which are currently hardcoded:
  • VM Node size – Standard_D2_V2
  • Kubernetes addon – http_application_routing
Along with AKS cluster creation, the script also sets up the kube config with the credentials required to run kubectl commands. It also creates a cluster role binding so that the Kubernetes dashboard can be accessed using a proxy URL.
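For readers who prefer to see the moving parts, here is a condensed sketch of what initializeAKS.ps1 roughly does using the Azure CLI. The parameter names and the exact sequence are my assumptions based on the configuration listed above, not a copy of the script.

# Defaults mirror the configuration described above; all can be overridden when invoking the script
param(
    [string]$ResourceGroupName = "techtalksgr",
    [string]$Location = "southeastasia",
    [string]$ClusterName = "techtalkscluster",
    [string]$DnsNamePrefix = "techtalksdns",
    [int]$WorkerNodeCount = 4,
    [string]$KubernetesVersion = "1.13.5"
)

# Create the resource group and the AKS cluster
az group create --name $ResourceGroupName --location $Location
az aks create --resource-group $ResourceGroupName `
    --name $ClusterName `
    --dns-name-prefix $DnsNamePrefix `
    --node-count $WorkerNodeCount `
    --kubernetes-version $KubernetesVersion `
    --node-vm-size Standard_D2_V2 `
    --enable-addons http_application_routing `
    --generate-ssh-keys

# Merge the cluster credentials into the local kube config so kubectl commands work
az aks get-credentials --resource-group $ResourceGroupName --name $ClusterName

# Cluster role binding so the Kubernetes dashboard can be browsed via the proxy URL
kubectl create clusterrolebinding kubernetes-dashboard `
    --clusterrole=cluster-admin `
    --serviceaccount=kube-system:kubernetes-dashboard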

In order to run the script, you can clone the repo and execute initializeAKS.ps1 from the PowerShell command prompt. You should see output similar to the screenshot shown below.


After the resource group is created, it will take a few minutes for the AKS cluster to spin up, depending on the size of the cluster. In my case, provisioning the 4-node cluster took about 10-15 minutes. Once the initialization script has completed, the output should look similar to the screenshot below.





The green lines in the output show that the cluster has been successfully created with Kubernetes version 1.13.5. With the cluster ready, let's get started with deploying SQL Server 2019 with Always On Availability Groups.

2 - Deploy SQL Server 2019 Linux with Always On Availability Group

In order to deploy SQL Server 2019 Linux with Always On Availability Groups, we need to take the following actions:
  • Deploy SQL Server Operator
  • Create the sa password and master key password
  • Deploy SQL Server custom resource
  • Deploy load balancing services

All these steps are documented in the Always On Availability Groups for SQL Server containers deployment documentation. Once again, I have automated the steps listed in the docs as part of a PowerShell script. The script is a series of kubectl commands which deploy the required objects to the Kubernetes cluster. The script is part of the overall application stack which deploys the SQL database, the Web API, and the ASP.Net Core front end. The script is available on GitHub as deployTectTalks-AKS.ps1.
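For reference, the sketch below condenses the four steps into the underlying kubectl commands, assuming the manifest and secret names used by the official Always On Availability Groups documentation (operator.yaml, sqlserver.yaml, ag-services.yaml and a secret named sql-secrets); the actual script may organize things differently.

# 1. Create the namespace and deploy the SQL Server operator
kubectl create namespace ag1
kubectl apply -f operator.yaml --namespace ag1

# 2. Create the sa password and master key password as a Kubernetes secret
kubectl create secret generic sql-secrets `
    --from-literal=sapassword='<your-sa-password>' `
    --from-literal=masterkeypassword='<your-master-key-password>' `
    --namespace ag1

# 3. Deploy the SQL Server custom resource (the availability group replicas)
kubectl apply -f sqlserver.yaml --namespace ag1

# 4. Deploy the load balancing services for the primary and secondary replicas
kubectl apply -f ag-services.yaml --namespace ag1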


Upon successful execution of the script, we will get output similar to the screenshot below.

We can verify that the required objects have been created by querying the pods using the Kubernetes command line:

kubectl get pods --namespace ag1





We can see that the mssql initialize pod has completed and that the operator and mssql pods are running successfully. As part of this deployment, the mssql pod runs with two replicas. Apart from the pods, there are also multiple services deployed to the Kubernetes cluster.

We can query these services with the command:
kubectl get svc --namespace ag1
Initially, we will see that it takes some time for the external IP to be assigned to the load-balanced services.
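To avoid re-running the command by hand, a small PowerShell wait loop like this one can poll the namespace until no service is left in the pending state; it is just a convenience sketch.

# Keep checking until no service in the ag1 namespace shows a <pending> external IP
while (kubectl get svc --namespace ag1 | Select-String -Quiet '<pending>') {
    Write-Host "External IPs still pending, checking again in 15 seconds..."
    Start-Sleep -Seconds 15
}
kubectl get svc --namespace ag1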




After a few minutes, we will see that all the services have been assigned an external IP.




Once we have the IP address assigned to the ag1-primary service, we can use that IP to connect to SQL Server 2019 from the application.


3 - Update the application code to redirect to SQL 2019 instance from SQL 2017

The way the TechTalks application works is that the TechTalksAPI sits between the web front end and the database. I will use the init container technique to initialize the database when the TechTalksAPI related objects are deployed to Kubernetes. This is the same approach I used with SQL 2017. Note down the public IP of the ag1-primary service, which we need to put into the api-deployment.yml deployment file. Replace the server IP / Data Source in the connection string with that IP address, which is 104.215.193.33 in this case. With this change we will be able to connect to the SQL Server instance with the Availability Group.
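One way to avoid editing the manifest by hand is to read the external IP of ag1-primary with kubectl and substitute it into api-deployment.yml before applying it. The placeholder token __SQL_SERVER_IP__ below is purely illustrative; the repository may simply have the IP written directly in the connection string.

# Grab the external IP assigned to the ag1-primary load balancer
$sqlServerIp = kubectl get svc ag1-primary --namespace ag1 `
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Replace the illustrative placeholder in the API deployment manifest with the real IP
(Get-Content .\k8s\AKS\TechTalksAPI\api-deployment.yml) `
    -replace '__SQL_SERVER_IP__', $sqlServerIp |
    Set-Content .\k8s\AKS\TechTalksAPI\api-deployment.yml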

There is one more minor change I had to make to the initialize-database.sql script, which I trigger via the init container. I put in a check to see whether the current version of SQL Server is 2019. If that is the case, we add the TechTalksDB to an availability group named AG1. This makes the database initialization script compatible with both SQL Server 2017 and 2019.
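The gist of that check, shown here as an ad-hoc sqlcmd call from PowerShell rather than the actual init container script, looks roughly like this; the sa password is a placeholder and TechTalksDB is assumed to already be in full recovery with a full backup taken.

# T-SQL fragment: only join the database to the availability group on SQL Server 2019 (major version 15)
$query = @"
IF CAST(SERVERPROPERTY('ProductMajorVersion') AS INT) >= 15
BEGIN
    ALTER AVAILABILITY GROUP [AG1] ADD DATABASE [TechTalksDB];
END
"@

# Run the check against the primary replica via the ag1-primary external IP
sqlcmd -S 104.215.193.33 -U sa -P '<your-sa-password>' -Q $query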




With these modifications we are all set to deploy the TechTalksAPI. Navigate to the k8s/AKS/TechTalksAPI folder in the source code and apply the Kubernetes manifest files using the command:
kubectl apply --recursive -f . --namespace ag1

The API deployment and service objects will be created as shown below




We now have all the components of the application stack deployed to the AKS cluster. Let's browse the front end using the IP of TechTalksWeb, which in our case is 104.215.185.167. If everything went fine, we should see output similar to the screenshot below.

We have successfully connected to the SQL Server 2019 database with an Availability Group on Kubernetes.

4 - Quick look at the differences between 2017 & 2019 SQL Server Linux containers

The SQL Server 2019 Linux container is still in preview. It offers benefits over the SQL 2017 version by providing built-in high availability by means of Availability Groups. The SQL Server operator makes it easier to support features such as health checks and automatic selection of primary and secondary replicas in case of failover. With SQL Server 2017, Kubernetes manages failover via StatefulSets, and it can take some time to recover from a pod or node failure, whereas recovery is much faster with SQL 2019.
There was one difference I encountered between SQL 2017 and 2019. With the 2017 version, I was able to access SQL Server from the TechTalksAPI deployment via its service name, e.g. db-deployment, without having to know the IP address of the container. This is not the case with the Availability Group service: I need to know the IP to access SQL Server 2019. I hope this will be addressed by the time SQL 2019 goes GA.
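To make the difference concrete, here is roughly how the two connection strings compare; the credentials are placeholders.

# SQL Server 2017 on Kubernetes: the API resolves the database by its Kubernetes service name
$sql2017ConnectionString = "Server=db-deployment;Database=TechTalksDB;User Id=sa;Password=<your-sa-password>;"

# SQL Server 2019 preview with the Availability Group service: the external IP of ag1-primary is needed
$sql2019ConnectionString = "Server=104.215.193.33,1433;Database=TechTalksDB;User Id=sa;Password=<your-sa-password>;"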

Conclusion

Migrating from SQL 2017 to SQL 2019 took some time. Getting SQL 2017 up and running on Kubernetes is quite straightforward, with a StatefulSet deployment and a service object which can be accessed by the API or the front-end application. SQL 2019 requires additional steps: deploying the operator, the SQL Server custom resource, and the availability group services. Although these additional steps can be tedious to debug, they add much-needed resiliency and make the database highly available in the event of a node or pod failure. I hope readers will find it easier, with the help of the PowerShell script which handles some of these complexities, to deploy and experiment with the SQL 2019 Linux features on containers.

Until next time, Code with Passion and Strive for Excellence.