Provision Docker Swarm with Azure Container Service using ARM templates

Background

In an earlier post on provisioning a Docker Swarm cluster in Azure, I demonstrated how to use the ACS Engine to create the Swarm cluster. In that post I manually entered all the parameters based on which the cluster was provisioned. In this post I will demonstrate how we can eliminate the manual steps and automate the process using an Azure Resource Manager (ARM) template. We will also use the Azure CLI to perform the provisioning. In the end we will have provisioned exactly the same resources, but with much less effort.

[Screenshot: provisioned resources]

The following steps will help us achieve our objectives:

  • Login to Azure CLI
  • Create resource group
  • Use ARM template to add resources to resource group
  • Verify the provisioned resources

Login to Azure CLI

There are different ways to provision resources in Azure. The simplest is the Azure portal with all its visual elements. For those who prefer to work with the command line, there is the Azure CLI, a cross-platform command line interface. As of this writing, version 2.0 is the latest. If you do not have the Azure CLI installed, follow the installation steps to get it set up.

Log in to the Azure subscription from your preferred terminal using the command

az login

This will give you a link and a code to authenticate. Once you are authenticated, the list of subscriptions associated with the login is displayed on the screen. In my case I had multiple subscriptions.

[Screenshot: available subscriptions]

I need to select one of these subscriptions. In this case I will choose the third option, Azure Pass. We do that by setting the subscription parameter of the account:

az account set --subscription "Azure Pass"
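
To confirm that the right subscription is now active, we can query the account. These are optional verification commands:

# Show the currently selected subscription
az account show --output table

# List all subscriptions and check which one is marked as the default
az account list --output table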

Create Resource group

A Resource Group in Azure is a logical grouping of related resources. We need to assign the resource group to a location, which maps to an Azure region. In my case Southeast Asia is the nearest region. I will create a resource group named swarmresourcegroup in the Southeast Asia region using the command

az group create \
--name swarmresourcegroup \
--location "Southeast Asia"

Use ARM template to add resources

Azure Resource Manager (ARM) templates provide an easy way to describe different resources using a JSON-formatted file. The structure of a template consists of the following:

  • schema for versioning
  • content version
  • list of parameters required by the template
  • list of variables used within the template
  • resources created by the template
  • outputs, which are used to access the provisioned resources
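
To make that structure concrete, here is a minimal, generic skeleton written to a sample file name. This is only an illustrative sketch, not the actual azuredeploy.json used for the Swarm cluster; the agentCount parameter and the concat expression simply mirror the elements discussed below.

cat > azuredeploy.sample.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "agentCount": { "type": "int", "defaultValue": 2 },
    "dnsNamePrefix": { "type": "string" }
  },
  "variables": {
    "clusterName": "[concat(parameters('dnsNamePrefix'), '-swarm')]"
  },
  "resources": [],
  "outputs": {}
}
EOF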

This fits in very nicely with DevOps practices. The file describes our infrastructure as code in a declarative manner and can be stored in source control.

This file lists all the resources which form part of the stack, their types and the related properties.


We describe each resource that we wish to create as part of this template. As we can see on line 18, we can specify a default value of 2 for the agentCount parameter. There are multiple parameters specified in the template. The template also describes the relationships between different resources.

In the earlier post, we had specified the values for the parameters in the Azure portal. In fact, the portal uses this very file to populate the different drop-down values. The template file is a mix of static and dynamic content. We can see an example of dynamic content from line 276 onwards. We use template functions like concat, variables and parameters to derive dynamic values for the resources.

Coming back to parameters, we will store all the parameters that can be overridden in a dedicated file named parameters.json.
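
The shape of that file is simple. The sketch below uses a hypothetical sample file name and only a couple of illustrative parameters; the real file carries the full list expected by azuredeploy.json.

cat > parameters.sample.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "agentCount": { "value": 2 },
    "sshRSAPublicKey": { "value": "ssh-rsa AAAA... replace with your own public key" }
  }
}
EOF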

This way we have the complete list of resources as well as the runtime parameter values required to instantiate them. The parameters file is just a set of key-value pairs, with the parameter name as the key and its value as the value part. Please note that you will need to specify the correct value for sshRSAPublicKey on line 39. All that is needed now is to use these two files to trigger the deployment process.

az group deployment create \
    --name "coredemo" \
    --resource-group "swarmresourcegroup" \
    --template-file azuredeploy.json \
    --parameters parameters.json

The command is self-explanatory. We use coredemo as the name of the deployment and associate the deployment with swarmresourcegroup using the resource-group flag. The template-file parameter specifies the resource template and, finally, the parameter values are supplied using parameters.json. Depending on the number of resources requested, this process can take about 5-10 minutes. On successful completion of the deployment, we get the 15 resources provisioned which are shown in the screenshot at the beginning of this post.
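
Once the deployment finishes, a couple of optional commands let us inspect the outcome without opening the portal:

# Check the provisioning state of the deployment
az group deployment show --name coredemo --resource-group swarmresourcegroup --output table

# List everything that was provisioned into the resource group
az resource list --resource-group swarmresourcegroup --output table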

Conclusion

As you can see, we can automate the whole process of provisioning using two simple commands. Every time I need to create the Docker Swarm cluster from now on, I just use these simple commands. I avoid a lot of the manual mistakes which can happen when copying and pasting values in the portal. Storing the resource templates and the parameters makes my provisioning step a repeatable process. In this post I demonstrated provisioning using ARM templates and the Azure CLI. You can also use the same template and parameters file with PowerShell instead of the Azure CLI. I will demonstrate that separately in a later post. Any guesses what command I need to delete the resource group once I am done with my testing?

az group delete --name swarmresourcegroup

This one command is enough to delete all the resources under swarmresourcegroup. ARM resource templates can save you a lot of time if you need to create the same set of resources across multiple environments. You can easily create copies of environments like Dev / Integration / QA Test / Preproduction / Production. Parameters help you customize the resources. For example, you can have 2 agent nodes running in the Dev and QA environments while Preproduction and Production have 10 agent nodes. In such cases all that is required is an environment-specific parameters file.
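
For example, assuming we keep one parameters file per environment (the file and resource group names below are purely illustrative), the same template drives every environment:

# Dev / QA: small cluster, with agentCount set to 2 in parameters.dev.json
az group deployment create \
    --name "swarm-dev" \
    --resource-group "swarmresourcegroup-dev" \
    --template-file azuredeploy.json \
    --parameters parameters.dev.json

# Production: bigger cluster, with agentCount set to 10 in parameters.prod.json
az group deployment create \
    --name "swarm-prod" \
    --resource-group "swarmresourcegroup-prod" \
    --template-file azuredeploy.json \
    --parameters parameters.prod.json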

Another scenario where parameters are useful is having different processing power for machines in the Dev and QA environments. These could be less powerful machines, while Preproduction and Production can have hardware with more firepower.

Hope you found this information useful. Until next time code with passion and strive for excellence.


Docker Compose Tip : How to avoid SQL Server 2017 container exiting when running using docker compose

Problem starting SQL 2017 container

While working on the recent blog post on Integrating SQL Server 2017 Linux with ASP.NET Core using Docker, I came across a strange behavior. I was able to run the SQL Server 2017 Linux container image successfully as demonstrated here. I was able to initialize the database using the command

docker run -it -p 1433:1433 \
--name sql2017 \
nileshgule/sqldb

This command runs the custom image in interactive mode based on the -i flag. We also pass the -t flag to allocate a pseudo-TTY. The next step for me was to try the integration with the .NET Core project. I started by updating the docker-compose file along with the other changes explained in the post on integration.

Before I added sql2017 as a service inside the docker-compose file, I ran the SQL Server 2017 container individually using the above command and verified that the Core MVC and Core Web API containers were able to communicate with the database inside the container.

This seemed pretty easy at first. All I had to do was update the docker-compose file with an additional service for SQL Server. I made that change and fired the docker-compose build command. The build completed successfully and 3 images were produced.
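
The build command used here is the same one described in the integration post, pointing at the non-default compose file name:

docker-compose -f docker-compose-build.yml build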

I then fired the docker-compose up command to bring up all the services:

docker-compose -f docker-compose-build.yml up -d

I was expecting all the services to work together, but that was too much to hope for on the first attempt. Instead, what I got was a problem starting the SQL 2017 container.

[Screenshot: container exited]

The container exited with code 0, while the MVC and Web API containers were created successfully. I thought it might be a problem with the detached mode which I had used via the -d flag with docker-compose. So I tried to run the command in the foreground by removing the flag:

docker-compose -f docker-compose-build.yml up

Even with this change I got the same result.

Root cause of the problem

Why does the container exit immediately in docker-compose mode? If I ran the standalone container again, everything worked as expected. So there had to be some reason specific to docker-compose and nothing to do with the custom image. After looking around for possible reasons on the forums, I realized what the problem was. Any guess?

In docker-compose mode, Docker starts all the containers in detached mode by default. If there is no foreground process running inside the container, Docker thinks the container has finished its work and exits it. In our case, in order to initialize the TechTalksDB we run the initialization script after waiting for a few seconds (between 10 to 30 seconds). This is not a foreground process; it runs in the background. Docker thinks the foreground process has finished and exits the container.
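
A quick way to see this for yourself is to check the container state and logs after bringing the stack up. These are standard commands; sql2017 is the container name used throughout this post:

# The sql2017 service shows up with state "Exit 0" while the other services stay up
docker-compose -f docker-compose-build.yml ps

# The log ends after the startup output because nothing keeps running in the foreground
docker logs sql2017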

We need some way of keeping the container alive and ensuring that Docker does not exit it. Docker Compose does not support -t as a command line parameter. In the absence of that parameter, we need some way of simulating the pseudo-TTY behavior of the standalone docker run command. We can amend the sql2017 service by adding the tty: true option as shown below on line 13.
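
The compose file itself is embedded in the original post and not reproduced here, but the change is tiny. The sketch below writes a sample fragment just to show where tty: true sits within the service definition (image and port are taken from the docker run command above); in practice you simply add the tty: true line to the existing sql2017 service.

cat > sql2017-service.sample.yml <<'EOF'
  sql2017:
    image: nileshgule/sqldb
    container_name: sql2017
    ports:
      - "1433:1433"
    tty: true
EOF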

Rebuild the images by running the docker-compose build command after making this change, and run the docker-compose up command as shown above. Now the sql2017 container works as expected.

[Screenshot: container runs successfully]

Conclusion

The complete source code used for this post is available on GitHub. If you have a container which only runs a process in the background, it might exit immediately after starting; use tty: true to keep the service running. It can be quite irritating when things work in isolation using docker run commands but fail in docker-compose mode. That is why it is important to understand the differences between the docker commands and how they operate; there can be minor differences between standalone mode and compose mode. I tend to use the online Docker docs as well as the --help flag on the Docker CLI commands. Often the best way to fix these issues is by trial and error. I hope you found this little trick useful. Until next time, code with passion and strive for excellence.


Integrate SQL Server 2017 Linux with ASP.NET Core using Docker

In the previous post, we saw how to initialize a SQL Server 2017 on Linux database running inside a Docker container. This post is a continuation of my exploration of Docker containers for stateful applications. In this post we will explore how to integrate the SQL Server 2017 on Linux container with an ASP.NET Core Web API, also running inside a Docker container. We will perform the following activities:

  • Update Docker compose file to build the SQL server image
  • Update Web API project to use SQL Server Database instead of In memory database
  • Build and run the application containers using Docker compose commands
  • Query Data from SQL container using the MVC web front end

Update Docker compose file to build the SQL server image

Docker provides Docker Compose as a tool for defining and running multi-container Docker applications. I already have an existing docker-compose file which I have been using for my series of posts on implementing continuous deployment of multi-container apps. In a compose file we define the set of services which are part of our application. The compose file is then used to build the different images required for running the application.

Pay attention to lines 4 to 13. This is the SQL Server 2017 service we are adding to the existing MVC and Web API services of the application. These lines replicate the docker build and docker run commands that we used in the previous post. Note that we are using the container_name attribute to specify the name of the container as sql2017. This has the same effect as the --name flag in the docker run command. We will see shortly how this name helps us resolve the container using the built-in service discovery feature of Docker.
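
The compose file is embedded as a gist in the original post and is not reproduced here. As a rough sketch, assuming the Dockerfile from the earlier post lives in a Database folder, such a service entry looks something like this:

cat > docker-compose.sample.yml <<'EOF'
version: '3'
services:
  sql2017:
    build:
      context: ./Database
    image: nileshgule/sqldb
    container_name: sql2017
    ports:
      - "1433:1433"
EOF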

Update Web API project to use SQL Server Database instead of In memory database

Currently the Web API project uses an in-memory database to store the key-value pairs. We will modify the code to use a SQL Server database instead. Let's add a reference to the Entity Framework Core SQL Server package to our CoreWebAPI.csproj file.

Refer to line 19 where we add the package reference to the Microsoft.EntityFrameworkCore.SqlServer package with version 2.0.0. The next step is to add a connection string in the appsettings.json file.

We add a ConnectionStrings section containing DefaultConnection as the connection string. It can be given any valid name as per your project convention. The value of the connection string is Data Source=sql2017;Initial Catalog=TechTalksDB;User Id=SA;Password=January2018;MultipleActiveResultSets=True
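
Within appsettings.json this ends up looking roughly like the sketch below; only the ConnectionStrings section is shown here, the rest of the file stays unchanged.

cat > appsettings.sample.json <<'EOF'
{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=sql2017;Initial Catalog=TechTalksDB;User Id=SA;Password=January2018;MultipleActiveResultSets=True"
  }
}
EOF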

Pay special attention to the Data Source attribute. In the usual scenario, this is the name of the server or the IP address of the machine where the SQL Server service is running. In our case it is running inside a container, so instead of the machine name or IP we specify the container name. This has to exactly match the container_name specified inside the docker-compose file. It is this magic of Docker networking which allows us to connect one container to another using the container name. The last part is to use the connection string from the code instead of the in-memory data source. This requires a change in the Startup code.

Refer to lines 28-30. We have commented out the UseInMemoryDatabase call and replaced it with the UseSqlServer method. We also pass DefaultConnection as the connection string. That is all the hard work we need to do in order to switch from the in-memory database to a persistent one.

Build and run the application containers using Docker compose commands

Building the Docker images with Docker Compose is very simple. We execute the docker-compose build command as shown below

docker-compose -f docker-compose-build.yml build

Usually the convention is to name the compose file docker-compose.yml. If the file name is different, we can override the convention by passing the -f flag with the filename, which happens to be docker-compose-build.yml in our case. The build command builds the images for all the services which have a build context specified. Note that we have defined a dependency between the webapi and sql2017 services using the depends_on attribute.

If everything goes fine, we should have 3 images built for the services defined inside the docker compose file. Let's run the application using the docker-compose command again. This time, instead of build, we will use the up command to run the services.

docker-compose -f docker-compose-build.yml up

If everything goes fine, we should get output similar to the screenshots below.

[Screenshot: docker compose output 1]

Docker creates a network named dotnet2017_default using the default driver. It also creates 3 containers named sql2017, dotnet2017_corewebapi_1 and dotnet2017_coremvc_1. We can also see that the TechTalksDB database is initialized correctly.

[Screenshot: docker compose output 2]
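
The same information can be pulled from the command line if you prefer not to scroll through the compose output:

# The dotnet2017_default network created by compose for this project
docker network ls

# The three running containers
docker ps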

Query Data from SQL container using the MVC web front end

We now have all the pieces of the puzzle in place. Let's browse the MVC frontend and query the Web API, which will return the results from the SQL Server table. Hit the URL http://localhost using your favorite browser. In the service URL textbox put the Web API URL; in my case this is http://docker.for.mac.localhost:8080/api/keyvalue. Search for the key MS. The result is the corresponding value for MS returned from the database inside the container instance. You can test the other keys by cloning the repository and running the docker-compose up command.
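
If you prefer the command line over the browser, the endpoint can also be exercised with curl. The exact route shape depends on the Web API controller, so treat the URL below as an assumption based on the service URL mentioned above:

# Hypothetical example: call the key-value endpoint directly
curl http://docker.for.mac.localhost:8080/api/keyvalue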

[Screenshot: MVC app displaying data from container]

Conclusion

We saw how easy it is to integrate the database container with ASP.NET Core. With minor modifications to the existing code, we have been able to switch from an in-memory data source to a containerized database. Docker makes it very easy for us to quickly test features. By using docker-compose we are able to describe, in a declarative manner, the various services which make up our application. A single command builds all the Docker images for the application and another single command runs all the containers.

This is a great way to build and test applications quickly. You can imagine the huge productivity gains we can get from this kind of development process. We can use the same process to test the application on a local machine as well as in a dedicated test or UAT environment. Once again I would stress that you should explore Docker for your needs. If you are not using Docker, you are probably missing something.

The complete source code used for this demo is available on GitHub. In the next post we will see how to persist data outside of the container to persistent storage so that the state of the application is preserved even between container restarts. Hope you enjoyed this post. Until next time, code with passion and strive for excellence.


Build custom SQL Server 2017 Linux Docker image

Background

In the earlier series of posts on continuous deployment of multi-container apps, I used two containers for the ASP.NET Core MVC web site and the ASP.NET Core Web API. This post is a sort of continuation. In the earlier version we were using an in-memory database to store the data. That approach is useful in a proof of concept or a quick demo application; in an enterprise scenario we would use some sort of persistent data store. It is very easy to containerize stateless services and web front ends. When we move towards stateful processing, things start to get interesting.

Containers by their very nature are supposed to be ephemeral. This means that a container can be stopped and destroyed and a new one built and put in place with minimal setup and configuration. If we destroy a container and replace it with a new one, what happens to the data stored inside it? Can we use containers to run database-related processes? How can stateless containers communicate with a stateful container? We will try to answer these kinds of questions in this and future posts.

In this post we will perform the following activities

  • Download Docker image of SQL Server 2017 running on Linux
  • Create new database inside the container and initialize data
  • Query data residing inside the container

Download Docker image of SQL Server 2017 running on Linux

I would like to use the latest image of SQL Server 2017. Microsoft provides multi-architecture images of SQL Server 2017 and I will use the mssql-server-linux image, which is smaller in size compared to the Windows image. The compressed size of the Linux image is 450 MB, compared to about 6 GB for the Windows developer edition. I could have used any other database container like MySQL or Postgres. Personally I am very familiar with the Microsoft ecosystem, hence I chose an image of SQL Server 2017. This also gives me an opportunity to test the features of SQL Server running on Linux without needing to provision a Linux VM.

Like any other image that we pull from a container registry, we will start by pulling the image named mssql-server-linux:2017-latest. The official documentation from Microsoft gives a good introduction on how to get started with this image. What I am more interested in is making use of this image in my multi-container application.
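
Pulling the image is a single command:

docker pull microsoft/mssql-server-linux:2017-latest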

Create new database inside the container and initialize data

I will reuse the codebase that is used in other parts of my blog series. We will build a custom image based on the mssql-server-linux image which will have a database named TechTalksDB. We will start off by declaring a Dockerfile which will define the steps to build the custom image. I have created a new folder named Database to store all the files related to database operations under this folder.

We can see that the base image used is microsoft/mssql-server-linux:2017-latest. This image requires 3 environment variables to be set:

  1. ACCEPT_EULA
  2. SA_PASSWORD
  3. MSSQL_PID

I am setting these to default values of Y, January2018 and Developer respectively. Please note that this is not the best way of managing the password. Never use this approach in a production system; it is purely for demo purposes. The reason I am providing default values is that it saves me from passing these values every time. This approach also gives me the flexibility to override them at runtime using environment variables if required.
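
Overriding the defaults at runtime is then just a matter of passing -e flags to docker run, for example with a different SA password (the value below is obviously a placeholder):

docker run -it -p 1433:1433 \
-e ACCEPT_EULA=Y \
-e SA_PASSWORD="YourStrong@Passw0rd" \
--name sql2017 \
nileshgule/sqldb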

We define a working directory as src. Next we copy 3 files into the working directory. These files are used to create the initial state of the database. We change the permissions on the files to mark the shell scripts as executable. Finally we run the entrypoint.sh script.
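
The Dockerfile itself is embedded as a gist in the original post. A minimal sketch matching the steps just described might look like the following; treat the exact paths and file names as an approximation rather than the real file.

cat > Dockerfile.sample <<'EOF'
FROM microsoft/mssql-server-linux:2017-latest

# Default values, overridable at runtime
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=January2018
ENV MSSQL_PID=Developer

WORKDIR /src

# Files used to create the initial state of the database
COPY entrypoint.sh setup-database.sh initialize-database.sql ./
RUN chmod +x entrypoint.sh setup-database.sh

CMD ["/bin/bash", "entrypoint.sh"]
EOF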

The entrypoint script starts the sqlservr process residing inside the /opt/mssql/bin directory. This is the default process run by the base image. Along with this process, we run the setup-database shell script.
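
A sketch of what such an entrypoint script can look like is shown below. The background/foreground split is the important part; the exact paths are assumptions.

#!/bin/bash
# entrypoint.sh (sketch): kick off the database setup in the background
# and keep sqlservr as the foreground process of the container
./setup-database.sh &
/opt/mssql/bin/sqlservr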

The setup-database script is the intelligent part of the whole process. It takes into account the fact that the sqlservr process takes a few seconds to start. It sleeps for 10 seconds and then runs the initialize-database.sql script. Please note that we use the sqlcmd command line tool to run the initialize-database SQL script.
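
Again as a sketch, the script boils down to a short sleep followed by a sqlcmd call, using the same sqlcmd flags that appear later in this post:

#!/bin/bash
# setup-database.sh (sketch): give sqlservr time to come up,
# then run the initialization script with sqlcmd
sleep 10s
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -i initialize-database.sql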

The SQL script is pretty straightforward. It creates a database named TechTalksDB, adds 3 tables (Categories, TechTalk and KeyValue) and inserts a few records into each of these tables.

Query data residing inside the container

We defined all the steps in the Dockerfile for building the image. Whoever creates a container based on this image will now have the TechTalksDB initialized with default values. The process of building and running the image is very simple. Run the docker build command

docker build -t nileshgule/sqldb .

We have tagged the custom image as nileshgule/sqldb. If everything goes fine, we should see an output as shown below

[Screenshot: docker build output]

Let's run this newly built image using the docker run command.

docker run -it -p 1433:1433 \
--name sql2017 \
nileshgule/sqldb

That's all, and we have a full-fledged SQL Server 2017 instance running with a database initialized with values, as can be seen from the screenshot below.

[Screenshot: docker run output]

For simplicity I have shown only the last part of the output. The whole process takes less than a minute to fire up a brand new container with the data initialized. Isn't that great? When was the last time you were able to get a fresh copy of a database running on a SQL Server instance within a minute?

We can connect to the container and query the data from the KeyValue table using the docker exec command as

docker exec -it sql2017 /opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U SA \
-P "January2018"

This command starts the sqlcmd shell by connecting to the container named sql2017 which we created with the docker run command earlier. We can then issue queries against any database within the container. In the screenshot below I am selecting all the records from the KeyValue table.
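
For a one-off query without an interactive session, the -d and -Q flags can be used instead. Since the initialization script creates the KeyValue table inside TechTalksDB, the same data can be fetched like this:

docker exec -it sql2017 /opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U SA \
-P "January2018" \
-d TechTalksDB \
-Q "SELECT * FROM KeyValue"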

[Screenshot: docker exec output]

We can see that all the records inserted using the initialize-database.sql script are available in the output.

Conclusion

Docker really makes it very simple to develop and test applications using containers. Here we saw how to initialize a database inside a SQL Server 2017 instance running on Linux. There was a small change I had to make to the Docker daemon settings in order to run the SQL Server 2017 Docker image, due to the prerequisite that at least 2 GB of RAM is available.

We saw how a Dockerfile can be used to describe the process of building a custom image. The image had a new database created along with some static data. The best part was the speed at which all these changes were completed. As I said before, I can't recollect when I was last able to get up and running with a brand new database on a completely fresh copy of SQL Server. Mind you, the installation process alone used to take a few hours. With Docker it is a matter of minutes, not hours. For that very reason, I believe you should move to Docker if you have not done so already.

The complete source code for the application used during this post is available on GitHub. I have also pushed the nileshgule/sqldb Docker image to DockerHub. Feel free to download it and play around with it. In a future post we will see how to connect the ASP.NET Core Web API to the SQL Server database running inside the Docker container. We will also see how to persist data across container restarts. All that and many more related topics will come very soon. Until next time, code with passion and strive for excellence.


Enable continuous deployment for multi-container app to Swarm cluster in Azure using VSTS

Background

This is a continuation of Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. So far we have covered the following topics

  • Part 1 – Overview of different tools and technologies we will be using during this series
  • Part 2 – Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Part 3 – Integrate DockerHub container registry with VSTS CI build to publish Docker images
  • Part 4 – Provision Docker Swarm cluster in Azure using ACS Engine
  • Part 5 – Create SSH Endpoint to Docker Swarm running in Azure
  • Part 6 – Create Release Pipeline using VSTS for Multi-Container app
  • Part 7 – Deploy Multi-Container apps to Swarm cluster in Azure using VSTS

In this post we will be looking at enabling the Continuous Deployment feature of VSTS. In the previous post, we had to manually initiate the release creation and the deployment steps. The idea is to deploy to an environment as soon as an artifact is published after a successful build. In the process we will also put the complete CI/CD pipeline to the test.

As part of this post we will be performing the following steps

  • Enable Continuous Deployment trigger
  • Modify source code to trigger CI CD pipeline
  • Verify the CD pipeline

Enable Continuous Deployment trigger

Continuous Deployment is an important part of the DevOps journey for teams starting to move into full CI/CD mode. It allows changes to be deployed to an environment continuously, as and when the development team changes the source code. Let's start by checking the current state of our website. If I browse the public DNS of the agent node, I am presented with the home page

[Screenshot: website before changes]

I would like to make a small change to the code and see it deployed to the environment automatically. To do that we need to change the way code is currently being deployed. In the last post, we manually triggered the release and then the deploy phase. VSTS provides the facility to automate this part. Go to the release definition and edit it.

[Screenshot: enable continuous deployment trigger]

Under the Pipeline section, click on the Continuous deployment trigger option under the Artifacts section. On the blade which opens up, enable the trigger. We are almost ready to see all our changes continuously deployed to the Swarm cluster. Let's test this feature.

Modify source code to trigger CI CD pipeline

In the MVC project, I have a Carousel control defined inside a div tag in the Index.cshtml file. Currently this piece of code is commented out. Let me uncomment the whole div and check in the change. This should trigger a CI build.

[Screenshot: CI triggered build]

Note down the latest build number 66 which is in progress. Once the build is successfully completed, the Artifacts will be published to the drop location. This should trigger the Release process.

Verify the CD pipeline

[Screenshot: CD triggered release]

We can see from the above screenshot that Release-22 is triggered by the continuous deployment trigger based on build number 66. Compare this with Release-17 which was manually triggered by me. Wait for the release to complete. We can verify from the release log that the changes were deployed successfully to Docker Swarm.

[Screenshot: release log]

Notice the highlighted section. The log clearly indicates that the two services have been updated. If you go back to the previous post and look at the log, it says creating service. This is a nice feature of the docker stack deploy command.

It handles the creation or updating of services for us. Because of this we are able to use the same script for all deployment scenarios. For a fresh deployment it creates the required services based on the description in the docker compose file. Updates are also handled effectively, provided the stack name is the same between the two deployments. In our case we are using webapp as the stack name.
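
For reference, the deploy step on the Swarm master boils down to a single docker stack deploy call. The compose file name below is an assumption; the stack name webapp is the one used in this series.

# Creates the services on the first run, updates them on subsequent runs
docker stack deploy -c docker-compose.yml webapp

# Check the state of the deployed services
docker stack services webapp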

The last part is to browse the website again and verify that the carousel control is visible.

[Screenshot: website after changes]

Sure enough, we have achieved the goal. The whole process took about 4 minutes to complete, right from the time I committed the changes to GitHub. Isn't that amazing? Compare that with the time it would take to do each of the steps manually: building the Docker images, tagging each image, pushing them to DockerHub, logging in to the Docker Swarm master node and running the deploy command. I am pretty sure it would be more than 4 minutes.

More important than the time saved, the whole process is made repeatable. Anybody who has access to the VSTS build and release definitions can understand the flow. It makes life so much easier for developers. By automating these release and deployment steps, we can focus on building the real business functionality.

Conclusion

As I have been stressing throughout this series, VSTS provides all the tools required to embark on the DevOps journey. In this example we saw how a change to the source code of a containerized app is pushed all the way through: the developer checks the code into the GitHub repo, the CI build publishes the Docker images to the DockerHub container registry, and the continuous deployment release pipeline picks up the latest artifacts and deploys the latest version of the containers to the Docker Swarm cluster hosted in Azure using the Azure Container Service engine. All this is managed via VSTS build and release pipelines.

This concludes the major part of what I wanted to cover in this series on continuous deployment of multi-container apps. All the source code is available in the GitHub repository. I feel this is just the tip of the iceberg and there are many more things which can be added to the pipelines bit by bit.

Two things which come to mind straight away are integrating automated tests and managing secrets more effectively. I will make those modifications and share them as and when they are ready. Until next time, code with passion and strive for excellence.
