Deploy multi-container app to Swarm cluster in Azure using VSTS

Background

This is the continuation of Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. So far we have covered the following topics:

  • Part 1 – Overview of different tools and technologies we will be using during this series
  • Part 2 – Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Part 3 – Integrate DockerHub container registry with VSTS CI build to publish Docker images
  • Part 4 – Provision Docker Swarm cluster in Azure using ACS Engine
  • Part 5 – Create SSH Endpoint to Docker Swarm running in Azure
  • Part 6 – Create Release Pipeline using VSTS for Multi-Container app

In this post we will look at running containers in the Docker Swarm cluster by deploying the latest version of the code, packaged as a Docker image, from the DockerHub container registry. As part of this post we will be performing the following steps:

  • Define Release variables for connecting to DockerHub account
  • Add SSH tasks to login to DockerHub and to deploy the latest version using Docker CLI command to Docker Swarm manager
  • Test release cycle

Define Release variables for connecting to DockerHub account

In part 6, we published a couple of shell scripts as part of the build artifacts. We will use both of those scripts in this post. The first script, named dockerhub-login.sh, is used to log in to the DockerHub container registry. It uses the docker login command with the username and password as parameters.

docker hub login sh

In order to make the script generic and to avoid hardcoding the credentials, I am taking in the username and password as parameters. We need to pass these parameters when we trigger the script as part of the release task.
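The script itself is shown in the screenshot above. A minimal sketch of what it might look like, assuming the username and password arrive as positional parameters:

#!/bin/bash
# dockerhub-login.sh - a sketch; logs in to the DockerHub registry
# $1 = username, $2 = password, passed in from the release task arguments
docker login -u "$1" -p "$2"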

VSTS allows us to store build and release variables. I will add 3 Release variables for storing the username, password and the URL of the container registry as shown below

release variables

The variables are named docker.username, docker.password and docker.registry. Make sure to change the type of the password variable from cleartext to a secret (password) field to avoid leaking the password. The registry URL is set to https://index.docker.io/v1/

Add SSH task to login to DockerHub

Edit the release definition and add a new SSH task. This task is required to trigger the login script shown above.

Add SSH task

In the task configuration, select the SSH endpoint and set the Run option to Script file. In the script file path, choose the dockerhub-login.sh file from the drop location. In the arguments section, pass $(docker.username) and $(docker.password) as runtime arguments.

dockerhub login

We can use inline scripts for quick testing. But going back to the DevOps best practices, we should keep as much as possible in source control. This approach also allows us to use different parameters based on the environment, and the script can be reused in multiple places. By using the SwarmSSH endpoint we ensure that the command will be executed on the Swarm master node.

Add SSH tasks to deploy the latest version using Docker CLI command to Docker Swarm cluster

Similar to the DockerHub login script, I have another script which runs the docker stack deploy command with the compose file we copied to the deploy folder in part 6 of this series.

docker deploy stack sh

The script calls the docker stack deploy command with the -c parameter for the compose file name, which is docker-stack-swarm.yml in our case. We also name the stack webapp.
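A minimal sketch of what docker-deploy-stack.sh might look like, based on the command and stack name described above:

#!/bin/bash
# docker-deploy-stack.sh - a sketch; deploys or updates the stack on the Swarm
# assumes docker-stack-swarm.yml was copied to this folder in part 6
docker stack deploy -c docker-stack-swarm.yml webapp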

We add another SSH task to execute this script. The settings are similar to the earlier step except for the change in the script file name and arguments. Make sure to pick the docker-deploy-stack.sh file. We do not need to specify any arguments for this script.

docker stack deploy

Save all the changes to the release definition. With these two additions to the release tasks we are ready to test the full release end to end.

Test release cycle

Trigger a new release and deployment. Monitor the log to ensure that our changes are effective.

release log

As we can see from the screenshot, the two newly added tasks completed successfully. We have two services named webapp_coremvc and webapp_corewebapi created as part of the stack deployment. Congratulations, with this we have successfully deployed the latest version of our code from the DockerHub registry to the Swarm cluster using VSTS.

Conclusion

In this post we saw how to use release variables to store DockerHub credentials. VSTS makes life easier by providing multiple options to execute scripts by means of SSH tasks. The complete process of triggering the release and monitoring the live changes using logs is greatly simplified by the various integration points. As demonstrated in this series so far, it is very easy to set up a CI/CD pipeline using VSTS, Docker, DockerHub and Azure. We are still doing some manual steps in terms of creating a new release and deploying it to an environment. In the upcoming post we will see how this can be automated so that we can truly achieve CI/CD right from checking in code in the IDE all the way to production. Until next time code with passion and strive for excellence.


Create Release Pipeline using VSTS for multi-container app

Background

This is the continuation of Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. So far we have covered the following topics:

  • Part 1 – Overview of different tools and technologies we will be using during this series
  • Part 2 – Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Part 3 – Integrate DockerHub container registry with VSTS CI build to publish Docker images
  • Part 4 – Provision Docker Swarm cluster in Azure using ACS Engine
  • Part 5 – Create SSH Endpoint to Docker Swarm running in Azure

In this post we will see how to define a release pipeline using VSTS. The release pipeline builds upon the artifacts produced by the earlier Continuous Integration (CI) build stages.

As part of this setup we will be performing the following steps:

  1. Publish build artifacts from CI build
  2. Create new Release Pipeline
  3. Copy Docker Compose file to Swarm cluster

Publish build Artifacts from CI build

In a traditional CI build, the artifacts are some sort of installer, like an EXE or a JAR/WAR file, depending on the programming language. As we saw in Part 3, the output of our build is already packaged as a self-contained container image and published to a container registry like DockerHub. So what artifact can we publish as part of the CI build for a containerized app?

We don’t need to publish an artifact like an EXE in this case. We need to adjust the publishing part to accommodate the changes brought by containerization. Think of it from a deployment point of view: the release pipeline takes the output of the build pipeline and deploys it to the target environment. Do you remember the unit of deployment in the Docker world? As you might have guessed, it is a Docker image. When we are dealing with multi-container apps, the deployable is the docker-compose file which defines the relationship between the different services.

As part of DevOps practices it is advisable to store all the code related to the build and release process within source control. I have created a directory named VSTS-Deploy in the project structure. This contains the files related to deployment aspects; this is usually referred to as infrastructure as code. We start off with docker-stack-swarm.yml, which describes the services similar to the compose file we used during the docker-compose part.

The difference here is that we are describing the desired state configuration of the target environment instead of defining the build process. The images are already available in the DockerHub registry and will be pulled down when the Docker Swarm orchestrator creates instances of the containers.
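A minimal sketch of what docker-stack-swarm.yml might look like, assuming the image names and ports used earlier in this series (the exact file is in the VSTS-Deploy folder of the repository):

version: '3'
services:
  coremvc:
    image: nileshgule/coremvc
    ports:
      - "80:80"
  corewebapi:
    image: nileshgule/corewebapi
    ports:
      - "8080:8080"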

There are a couple of shell scripts in the same folder. We will use them in a later part of this series. For now, imagine that we need all the contents of the VSTS-Deploy folder to be published as part of the artifact. Let's start by modifying the CI build definition and adding a task for publishing the artifact as the last task.

CI Publish artifact

Set the path to publish to DotNet2017/VSTS-Deploy as the source. Provide the artifact name. Coming from a Team Foundation Server background, drop is a conventional name, but you can name it whatever you like. Save the changes and queue a new build. Verify that the drop folder is created by checking the build log. We now have all the elements to start our release pipeline.

Create new Release Pipeline

Hover over the Build & Release menu in the top navigation bar. Click on Releases and create a new release pipeline. VSTS provides various predefined templates for us to choose from. We will start with an Empty process template.

Release templates

This is the place where we define the different steps to deploy our package to multiple environments. As per DevOps best practices, the same package should be deployed to all the environments. What changes between environments are the environment-specific settings, like the number of machines, the firepower of the machines, etc. For us, Docker solves this problem by packaging the application and all its dependencies into a single image. Let's create an environment and call it Production.

new release definition

Let's link the artifacts from the build output to the release pipeline. Click on the Add artifact button to select the build which should act as the source for this release.

choose source location

We can choose from multiple sources like Git, GitHub, Team Foundation Server, Jenkins, Azure Container Registry or Docker. We will stick with Build in our case. Select Docker-CI as the source. We now have the link established between the CI build's published artifact and the release build. The next step is to define the various steps of the release process. Click on the link which says 1 phase, 0 tasks in the Production environment.

Copy Docker-compose file to Swarm Cluster

In order to copy the docker-compose file named docker-stack-swarm.yml, we need to add a Copy files over SSH task. Search for the task and add it to the release definition.

SSH task

You might be able to relate this to creating the SSH endpoint connection in the previous post. We use that endpoint now to specify where to copy the file. We also specify the source folder, docker-compose filename and target folder, as shown with numbers 2, 3 and 4 below. When the secure copy task is completed, we will have the file copied to the deploy folder on the master node of the swarm cluster.

copy task config

Save all the changes and hold on to your seatbelt. We are about to send some files into the cloud with the push of a button. Go and click on the Release button and Create release. Select Production as the environment.

create new release

The moment of glory is about to arrive when you click the Create button. Unfortunately there are still some clicks required to send the release into the cloud. The create process just creates a release which is ready for deployment. We need to trigger the deployment by clicking on the Deploy button.

deploy release

Once again select Production as the target environment, and finally the deployment process starts. You can monitor the log by navigating to the Logs link. If everything goes fine you should see log messages indicating that the drop folder was created on the agent node and the file was securely copied over to the target machine.

log

Conclusion

We were able to successfully connect the output of our CI build process and initiate the release pipeline. If you have experience developing CI/CD pipelines, you will notice the difference in the way we publish artifacts with a containerized build. Instead of publishing EXE or JAR files, we publish the docker-compose file, which makes use of the images stored in the container registry.

VSTS is a great tool which makes this possible with minimal effort. Although setting up the release pipeline looks a bit tedious at the moment, we will see in the next post how all of this can be automated. We will also deploy the containers to the swarm cluster in the next post. Stay tuned for the real fun. Until next time code with passion and strive for excellence.


Create SSH endpoint to connect Docker Swarm running in Azure

Background

This is the continuation of Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. So far we have covered the following topics:

  • Part 1 – Overview of different tools and technologies we will be using during this series
  • Part 2 – Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Part 3 – Integrate DockerHub container registry with VSTS CI build to publish Docker images
  • Part 4 – Provision Docker Swarm cluster in Azure using ACS Engine

This short post is about creating an SSH endpoint between the VSTS hosted agent and the master node of the Docker Swarm. The SSH endpoint will enable us to execute bash scripts on the Swarm master node.

Create new SSH Endpoint

Login to the VSTS account and select the Docker-CI-CD-Azure build definition we created earlier in part 2. Click on the Services menu. We already have endpoints established to the DockerHub and GitHub accounts. Click on New Service Endpoint as shown below.

New Service Endpoint

VSTS can connect to different services like BitBucket, GitHub, Subversion, NuGet etc. Once the endpoints are established they can be reused within different tasks or steps during the build and release pipelines. We have already seen an example of this with the DockerHub tasks in part 3 of this series. Select SSH from the drop down to bring up the dialog box for configuring the connection properties.

SSH endpoint

The properties in the configuration section are self-explanatory. We start off by giving the connection a meaningful name, SwarmSSH. Hostname is the FQDN of the Swarm master node. This is the same one we used in part 4 after the cluster was provisioned. In my case the name is swarmmaster.southeastasia.cloudapp.azure.com

The port number is very important. By default the SSH port is 22, but ACS Engine exposes port 2200 for SSH connectivity. Make sure to update the port number to 2200 instead of 22.

The user name is what we specified as the Linux admin user while provisioning the Swarm cluster. In my case it was swarmadmin.

The password or passphrase relates to the SSH keypair that was generated using the ssh-keygen command. The last parameter is the private key.

New SSH connection

We need to paste the contents of the private key, similar to the way we specified the public key during cluster creation. We can either use our favorite editor to copy & paste the contents of the id_rsa file in the .ssh folder, or upload the file. I chose to upload it. If you use the copy & paste method, make sure to paste the complete file contents including the -----BEGIN RSA PRIVATE KEY----- header.
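On a Mac or Linux terminal, the private key contents can be printed for copying with a command like:

cat ~/.ssh/id_rsa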

id_rsa private key

With all the parameters set we are ready to submit commands to the master node of the Swarm cluster. As part of our release steps, we will need to copy files from the VSTS published artifacts to the master node and log in to the DockerHub account using the command line.

Once successfully connected, we will run the deployment script which will instantiate the Docker images and start the container services by running the docker stack deploy command. For all these activities we need the SSH connectivity. In the next post we will see how to use these features. Until next time code with passion and strive for excellence.


Provision Docker Swarm cluster in Azure

Background

This is the continuation of Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. So far we have covered the following topics:

  • Part 1 – Overview of different tools and technologies we will be using during this series
  • Part 2 – Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Part 3 – Integrate DockerHub container registry with VSTS CI build to publish Docker images

This post is about provisioning the resources in the Azure cloud to deploy our container images. We will be using a Docker Swarm cluster to run containers for the two images we published to DockerHub in the previous post.

Pre-requisites

1 - RSA key for SSH connectivity

We will see during one of the provisioning steps that we are required to provide the RSA public key. This will be used when we connect to the master node of the Swarm cluster from our terminal to issue commands. In a later part of the series we will automate the communication between the VSTS agent node and the Swarm cluster. If you do not have an RSA key pair created, you can follow the steps mentioned in the link.

On my Mac I can generate the RSA key pair by running the command

ssh-keygen -t rsa

By default the private key and public key will be stored under the ~/.ssh folder. We will need to provide the public key as one of the parameters while creating the Swarm cluster. In a later post we will also need to provide the private key to the secure file copy task in VSTS.

2 - Azure Subscription

As already mentioned in the first part of this series, a valid Azure subscription is required to spin up multiple virtual machines along with various other resources like public IP, virtual network etc. which will form part of the Swarm cluster.

I assume that you have all the pre-requisites satisfied at this point of time. Before we go any further let us try to understand the need for Docker Swarm.

Need for Docker Swarm

Usually when we start developing software using containers, it is okay to test the images locally using the Docker daemon installed on a laptop or PC. When we want to scale the application, we hit the limits of the resources available on a single machine. This is where Docker Swarm comes into the picture.

It provides cluster management and orchestration features. We can combine multiple physical machines to form a cluster. Docker Swarm provides various services like load balancing and managing services and tasks. It provides an interface where we can interact with the different nodes in the cluster, named workers, via a single point called the master. You can refer to the documentation about Docker Swarm key concepts for more details.

Later in the series we will see that the activities related to deploying new containers and services are initiated by sending commands to the master. The master then decides how to schedule the containers onto worker nodes, taking various constraints into account. This is commonly referred to as desired state configuration in DevOps terminology.

One example of a constraint is the number of replicas for a particular service. We can specify at deployment time that there need to be two replicas of a container image. The Swarm master will then decide, based on the available resources in the cluster, where to place the two replicas.
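In a version 3 compose file used with docker stack deploy, such a constraint might be expressed with a deploy section like this (a sketch, not the exact file used later in the series):

version: '3'
services:
  coremvc:
    image: nileshgule/coremvc
    deploy:
      replicas: 2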

Provision Swarm cluster using ACS engine

We can provision the Docker Swarm cluster by logging into the Azure portal and searching for Azure Container Service. Azure Container Service provides abstraction and automation around provisioning the multiple resources required for setting up the Swarm cluster. We need to provision multiple machines, usually consisting of 1 master and multiple workers, also known as agents. Then we need to establish the networking between these nodes. We also have to expose the correct ports. All this plumbing work is taken care of for us by the Azure Container Service (ACS). It also initializes the Swarm on the nodes and joins the master and agents into the Swarm cluster.

However, there is one limitation. The Docker Swarm orchestrator in ACS runs in standalone mode and not in the integrated swarm mode which was released with Docker 1.12. To overcome this limitation we are going to use the open source ACS Engine, which is a community-contributed quickstart template. This template installs the Docker Swarm orchestrator in fully integrated mode in Azure. Refer to the description of the different parameters used by this template for more details.

Navigate to the quickstart template link and click Deploy to Azure, which will trigger the deployment to Azure. If you are not logged in to the Azure portal, you will be requested to log in before continuing further. After a successful login, you should see a page similar to the screenshot.

Provision Swarm

I chose to create a new resource group named swarmresourcegroup. Here are the other values I specified and the reasons for selecting them.

Parameter | Value | Description
Location | Southeast Asia | Azure region closest to your location
Agent count | 2 | 2 nodes will be provisioned as agent nodes
Agent Endpoint DNS Name Prefix | swarmagent | Prefix to be used for agents; we will use this prefix to browse the web URLs
Agent VM Size | Standard_D2_V2 | I went with the default value. You can select an appropriate size based on your budget
Linux Admin Username | swarmadmin | This will be required when we want to connect from the terminal using an SSH connection
Master Endpoint DNS Name Prefix | swarmmaster | This will be useful when we want to deploy the application. We will issue commands to the master node
Master VM Size | Standard_D2_V2 | Same as the agent VM size above. We can specify different sizes for the master and agent VMs based on our needs
SSH RSA public key | contents of id_rsa.pub file | Copy the contents of the file ~/.ssh/id_rsa.pub

I did not modify the IP addresses. Verify all the values and click the Purchase button. Depending on the number of resources requested, it could take anywhere between 10-20 minutes to provision all the resources and combine them into a swarm cluster. We can verify that the requested resources were created successfully by navigating to the Resource Group blade and then selecting swarmresourcegroup.

swarm resources

We have 15 resources created for us. This includes a set of storage accounts. There are two load balancers, one for the master and one for the agent nodes. Similarly there is a pair of public IPs created for the master and agent nodes. You can click on each of the resources to find out more about them.

Next we need to find the fully qualified names of the master and agent nodes so we can connect to them using SSH or a browser. Click on the hyperlink which says 1 succeeded under the deployment label. This will take us to the deployment template. Select the Microsoft.Template link.

deployment template

Verify Swarm cluster

We can see the outputs section. This gives us the fully qualified names of the master and agent nodes. Let's try to connect to the master node and verify that we are running Docker in Swarm mode. The first thing we need to do is establish an SSH tunnel which will forward the Docker commands from our terminal to the daemon on the master node. This is done by executing the command

ssh -fNL 2375:localhost:2375 -p 2200 swarmadmin@swarmmaster.southeastasia.cloudapp.azure.com

We are mapping port 2375, which is used by the Docker daemon, to the remote daemon. You can also see the Linux admin user swarmadmin and the master DNS name swarmmaster, which we specified at provisioning time, coming into effect here. You will be asked to add the host to the known hosts. Say yes and proceed further.

ssh bridge

With the SSH tunnel established, we can start issuing Docker and Docker Swarm commands from the terminal. We set the environment variable DOCKER_HOST to port 2375 using export. Finally, run the docker info command to get the status of the cluster.
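The two commands look something like this, assuming the tunnel from the previous step is active:

export DOCKER_HOST=:2375
docker info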

docker swarm info

We can see that we are running in Swarm mode with the active flag. We can also see that there are 3 nodes in the cluster including 1 manager.

Conclusion

There are different ways in which we can provision a Docker Swarm cluster. In this post we used the approach of deploying an ARM template to Azure using ACS Engine. With just a few configuration settings we can get the Swarm cluster up and running. The ARM template really simplifies the task of setting up the cluster. Without the template we would need to manually create all the 15 resources that we saw above. We would also need to set up the Swarm-related things, like initializing the swarm cluster, allocating nodes as master and agents, and establishing connections between them by creating secure keys. All these boilerplate things are taken care of for us by the deployment template. Within 15-20 minutes we can get started with a fully functional Docker Swarm cluster.

This is an area where cloud technologies are really making a huge difference. In many big organizations you can expect to wait a few days or even months to provision resources that we could spin up within minutes. With these self-service capabilities we can get our own environments and manage them effectively.

In the upcoming posts we will see how we can deploy the Docker images that we published to DockerHub in the previous post using a VSTS release pipeline. Until next time code with passion and strive for excellence.


Publish Multi-Container app images using VSTS

Background

This is the third part of the series of blog posts on Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. In the first part we covered a brief overview of all the tools we will be using during this series. In the previous post we established the connection between VSTS and GitHub to trigger the build every time a change is pushed to the source control repository. This post will continue from there to build Docker images and publish them to the container registry. Let's get started.

Integrate DockerHub with VSTS build

We will be using the multi-stage build feature of Docker to build our images for the Web and API parts of the application. We will publish these images to the DockerHub registry. Ensure that you have a valid DockerHub account. We will need to perform the following steps to continuously build and push Docker images:

  1. Create Docker Compose file which creates the latest version of the images
  2. Edit VSTS build definition to add task to trigger docker-compose build command
  3. Add task to trigger docker push command for coremvc and corewebapi images to DockerHub using docker compose task

Create Docker-compose file

During the earlier post, I demonstrated how to build Docker images using the multi-stage build feature. In that example I had to build two Docker images separately, one for the ASP.NET Core MVC web application and the other for the ASP.NET Core WebAPI project. This approach works fine if we have just a couple of images. As the application grows, you will most likely have many more images, so we need a convenient way of creating them. Docker provides us that convenience by means of docker-compose. It gives us the ability to use declarative syntax to describe the different parameters required to build different images, and then build all the images using one single command.
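The original snippet was an image; a minimal sketch of what docker-compose-build.yml might look like, with the context paths and exact layout being assumptions:

version: '3'
services:
  corewebapi:
    image: nileshgule/corewebapi
    build:
      context: ./CoreWebAPI
      dockerfile: Dockerfile
  coremvc:
    image: nileshgule/coremvc
    build:
      context: ./CoreMVC
      dockerfile: Dockerfile
    depends_on:
      - corewebapi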


As we can see from the above snippet, all the parameters we were specifying at the terminal to the Docker CLI are specified in a declarative manner. This compose file references the existing Dockerfiles from the MVC and API projects. One advantage of using a compose file is that we can specify dependencies between the different services of an application. You can see this in action in the depends_on section, where we specify that the frontend MVC site depends on the web API. Docker will automatically take care of building the images in the correct order to satisfy these dependencies. I am using version 3 of docker compose. You can read more about the docker compose features in the Docker documentation.

Edit VSTS build definition

VSTS provides out of the box integration with Docker tools. Let's start by editing the build definition we created earlier. Click the plus sign on the Phase 1 section to bring up the available tasks.

add docker compose task

There are various types of tasks available, which can be viewed under categories like Build, Utility, Test, Package, Deploy and Tools. For simplicity we will search for docker in the search box as shown in the screenshot. We get 4 tasks filtered. Select Docker Compose and click the Add button.

We need to specify all the details in order to work with the Docker tools. Select the newly added task and give it a meaningful display name. By default the task is configured to run against Azure Container Registry. As we will be using DockerHub, change the selection to Container Registry. As this is the first time we are connecting to the DockerHub registry, we need to configure it. If you already have an existing connection you can select it from the dropdown instead of creating a new one. Click on New to configure DockerHub.

customize docker compose task

The new Docker Registry Connection dialog box is self-explanatory. Choose the type of registry as DockerHub, provide the username and password, and you are done. Verify that the settings are correct.

Dockerhub connection

By default the task is configured to look for a file named docker-compose.yml. In my case I am using a file named docker-compose-build.yml to distinguish between the build and deploy versions. We need to update the filename in the Docker compose file text box, as shown below, by clicking on the ellipses next to the text box.

docker compose file selection

The file is located under the DotNet2017 folder. We are almost done with the setup. The last step remaining is to configure the build command. This is done by setting the Action to Run a Docker Compose command and setting the Command to build.
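The task is effectively running the equivalent of the following CLI command on the build agent (a sketch, assuming the repository root as the working directory):

docker-compose -f DotNet2017/docker-compose-build.yml build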

docker compose build config

Let us test that the settings are correct and all the pieces are glued together as expected. Click on the Save & Queue button at the top right to queue a new build. A hyperlink will be provided when the build is queued. Click that link to monitor the progress of the build.

compose task output

The build succeeded and we have the two images created as output. coremvc:latest is highlighted in the screenshot. Similarly, corewebapi:latest is also tagged and ready to be pushed to DockerHub.

Publish images to DockerHub

Go back to editing the build definition and add one more Docker Compose task. This time we will reuse the DockerHub connection. The compose filename remains the same because we want to publish the images created using that file. Modify the action to Push Service Images.
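Again, this is roughly equivalent to running the following on the agent, a sketch along the same lines as the build step:

docker-compose -f DotNet2017/docker-compose-build.yml push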

Docker Compose push config

That is all we need to push the newly created Docker images to DockerHub. Save the changes and queue a new build. We should be able to see our images being pushed to the DockerHub registry in the final build log output.

VSTS build log

The build log confirms that the two images have been successfully pushed to the registry. I still like to get a double confirmation by checking DockerHub. Log in to the DockerHub account and verify that the recent images are available. The screenshot below confirms that the coremvc image was updated less than a minute ago. The same is the case with the corewebapi image.

DockerHub

Conclusion

We can see that it is very easy to integrate Docker, DockerHub and VSTS. With the help of Docker Compose files we are able to build multiple images using a single command. Microsoft then makes it super easy by providing out of the box functionality in VSTS to build and publish images to container registries. Without these tasks we would have to do a lot of hand coding to create and push images to DockerHub using CLI commands. VSTS has abstracted this functionality, and by providing GUI tools the process is seamless.

This completes our continuous integration (CI) phase, where a code change produces the latest version of the Docker images and pushes them to DockerHub. The image below is a summary of what we have achieved so far for the CI part. The blue arrows indicate the build section and the purple ones the publishing part.

CI phase

The complete source code for this post is available on GitHub. In the upcoming posts we will see how to provision a Docker Swarm cluster running in Azure using Azure Container Service, and continuously deploy the Docker images to the Swarm. Until next time code with passion and strive for excellence.


VSTS CI Build for Dockerized .net Core 2.1 multi-container app

Background

This is the 2nd part of the series on Continuous Deployment of Multi-Container app in Azure using VSTS and Docker. In the first part I laid out the broad idea of what I intend to do as part of this series. Let's get started with the first step of creating the Continuous Integration build definition. Setting up the CI build involves multiple steps, right from choosing the build server, getting the source code from the source code repository, running unit tests, and ensuring code quality by running static code analysis, to producing the build artifacts. It would be difficult to cover all those topics in a single post. I will focus on the initial step of connecting the build server to the source code repository and triggering the build on each check-in.

Create New VSTS Build

As mentioned in the pre-requisites in the first part of this series, I assume you have a valid VSTS account. Login to your account and you will be presented with a screen similar to what is shown below.

VSTS start page

Click on the new project option to create a new VSTS project. You will be requested to provide minimal settings to start the new project. Project name and Description are self-explanatory. When it comes to version control you can choose from Git or Team Foundation Version Control.

Git version control

I have my GitHub repository which I will be linking shortly, so I will go and pick Git. The last step is to choose the process template. VSTS provides Agile, CMMI or Scrum as the available options.

Agile Work Item process template

I will go with the default Agile template. Based on your team's needs you might want to choose another template. Based on the work item template selected, VSTS will provide additional features for application lifecycle management. For example, when I chose Agile, I was given default Sprint and Iteration planning templates which can be helpful while working in teams.

With this basic information VSTS will create a project and a whole set of related items for us. After successful creation of the project you should be presented with a welcome screen for the newly created project.

Project source options

You can start with a blank repository or push an existing repository to VSTS. You also have options to import source code from various supported providers like Bitbucket, GitHub, Subversion etc. In our case I do not wish to duplicate the source code between GitHub and VSTS. I am going to continue using GitHub as my source code repository and use VSTS purely for build and release management.

Setup Build

Click on the Set up Build hyperlink. This will navigate us to the screen where we can define a new build. Based on the technology and type of project, VSTS provides prebuilt templates. I could have chosen the ASP.Net Core project template, but I preferred to use the Empty template.

blank project

Similar to the project definition, the build definition also needs some basic information to get started.

Agent type selection

Provide a unique name for the build. We can choose from 4 different types of queues:

  • Hosted
  • Hosted Linux Preview
  • Hosted macOS Preview
  • Hosted VS2017

We will choose the Hosted Linux Preview option. At this point we need to decide what different steps our build is going to perform. First of all we need the source code to be available on the build server. Click the Get Sources section to establish the connection to one of the available sources.

GitHub connection

I selected GitHub and provided a meaningful name (GitHub DotNet-2017 Connection). I opted to authorize with OAuth and was presented with a popup to provide the credentials to connect to the GitHub account. Once successfully authorized, I have access to all the GitHub repositories.

Github project names

I selected dotnet-2017 as the repository. With these configurations we are ready to go. Let's test that we are in fact able to pull the source code hosted in the public GitHub repo to the build server. Click on the Save & Queue option to trigger the first build. You will get a hyperlink to the build number; click that to go to the build details. If everything was configured correctly, you should see a log of what happens during the process.

successful code download

The build is successful and we can see the details of each phase if we wish to drill into them. This proves that our connection between VSTS and GitHub is working fine. The last part remaining is to trigger the build automatically any time there is a change to the source code in GitHub. That is pretty easy. Go back to the build definition and click on the Edit option.

CI trigger

Click on Triggers to enable the CI trigger. It is turned off by default, which is why changes to source control are not automatically built. Enable this setting and we will have an automatic build triggered for every change. I also like to enable the option to batch changes if a build is already in progress. This ensures that any changes checked in while another build is in progress are batched together instead of running individual builds for each change. Save the changes. To test this feature you will need to check in some changes to GitHub. That is an exercise for you to test your knowledge so far.

Conclusion

This was the stepping stone to get our journey started with VSTS and automated builds. We managed to integrate GitHub and VSTS. With the base set nicely, we will be cruising along in the future posts of the series. Microsoft has managed to replicate most of the features from the on-prem Visual Studio Team Foundation Server in the online offering. With minimal effort we can start building CI and CD pipelines. Feel free to explore other features related to the CI build. Until next time code with passion and strive for excellence.


Continuous Deployment of Multi-Container app in Azure using VSTS and Docker : Part I

In an earlier post, I demonstrated how to upgrade a .Net Core Docker image to an Alpine-based Linux distribution. Before upgrading the Docker image, I had set up the CI CD pipeline which was deploying the multi-container app to the Docker Swarm cluster hosted in Azure and provisioned using Azure Container Service (ACS). I updated the CI CD pipeline to reflect the changes done during the upgrade and was able to run the .Net Core 2.1 based Docker images. This is the first post in the journey of taking the containers running on my local laptop to the Swarm cluster using an automated continuous build and deployment process.

I intend to cover the different steps in a series of posts which will include

  • Overview of different tools and technologies we will be using during this series
  • Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Integrate DockerHub container registry with VSTS CI build to publish Docker images
  • Provision Docker Swarm cluster in Azure using ACS Engine
  • Create SSH Endpoint to Docker Swarm running in Azure
  • Create Release Pipeline using VSTS for Multi-Container app
  • Deploy multi-container app to Swarm cluster in Azure using VSTS

Pre-requisites

In order to perform all the above mentioned steps there are some pre-requisites.

1 - VSTS account

Visual Studio Team Services provides a complete solution for application lifecycle management for various kinds of applications. It is free for up to 5 users. It has all the features required to start your DevOps journey. In an enterprise scenario, we can use it for requirements management, building dashboards, managing Sprints etc. for the end-to-end delivery process. For the purpose of this blog post series, we will be using it purely as a build and release management tool. We will configure the automated build and release pipeline using VSTS. If you don't have a VSTS account you can create one for free.

2 - Source code in GitHub repo

VSTS has its own Git repository, which can be used if we do not have existing source code in some other version control system. I already have the source code hosted on GitHub, and VSTS supports integration with existing GitHub repositories. We will be using this approach to connect VSTS to the DotNet-2017 repository. Every time there is a check-in, a continuous integration build will be triggered and produce the two Docker images.

3 - DockerHub account

The images created during the CI build will be pushed to the DockerHub repository. If you do not have a DockerHub account you can create one. I am using DockerHub since I already have these images published there. If you prefer, you can choose another container registry like Azure Container Registry. The only prerequisite is that we should be able to push the images to this registry during the build phase. VSTS provides integration with DockerHub out of the box.

If you decide to use another registry and there is no integration with VSTS, we should at least be able to push Docker images to it using the CLI. That would enable us to write a bash or Windows script to trigger the commands using a scripting task.

4 - Azure Subscription to create Docker Swarm cluster

Finally, we need an Azure subscription which will allow us to provision the Docker Swarm cluster. As I mentioned in the post on Visualizing Docker Swarm using Portainer And Swarm Visualizer, there is a limitation with the Azure Container Service: it runs the Swarm cluster in legacy mode. To overcome this limitation, we will be using the ARM template for ACS Engine – Swarm Mode.

At the end of this series we will have a fully automated build and release pipeline for a .Net Core multi-container app running inside Docker containers in Azure. The whole process described below takes away the mundane and repetitive task of running various commands at the terminal and automates them as much as possible. We still need to write those commands, but they will be kept under source control in script files. The automated build process will use the script files to execute the commands. This is important from a DevOps point of view, where we want a repeatable process which can be used by anybody in the team working on the project. By using this technique we follow the infrastructure as code principle, which ensures that there are no manual steps required during the build and deployment process.

Release pipeline

I am using Visual Studio Code as my code editor, but you can use anything that suits your needs. I have been in love with VS Code since I started coding with it; if you have not tried it, I can say you are missing something. The other steps in the above picture are self-explanatory with regards to the CI and CD phases. I hope you will enjoy this journey with me.

Given the popularity of Kubernetes and the recent announcement by Microsoft that Azure Container Service (AKS) will support managed Kubernetes clusters, I would like to migrate to Kubernetes at some point in the future. Another option I am exploring is to create an additional step in the deployment phase which will deploy the application to a Kubernetes cluster in parallel with Docker Swarm. This would require some additional resources in Azure.

Let me know via comments if you have any suggestions or feedback for the upcoming posts. Until next time code with passion and strive for excellence.


Migrate DotNet Core Docker image to Alpine based distribution

Background

Recently I had been using Docker and related technologies like Docker Swarm to play around with DotNet Core 2.0. I also used Azure Container Service (ACS) to scale a multi-container app in Azure. There was a recent announcement that early next year a Docker base image for .NetCore 2.1 will be available based on the Alpine distribution. Currently these images are in preview and are available as nightly builds. I thought of giving it a try and upgrading my base image from the stable microsoft/aspnetcore:2.0 image. This post is about the changes I had to make to upgrade to the experimental version. As usual the complete source code is available in the GitHub repo.

Update Dockerfile to multi-stage build format

Before moving to the Alpine-based Docker image, I was using the aspnetcore 2.0 runtime-optimized image. I used to publish the artifacts myself and then copy them over to the custom image. With a recent release of Docker, there is support for multi-stage builds. This enables us to use a single Dockerfile to define the various build, test and release stages of our application, which is packaged using containers. Let's get started with these changes for the CoreMVC project.

Build source code using sdk image

We start with the dotnet-nightly:2.1-sdk image. This image contains the SDK tools required for building and packaging a DotNet Core 2.1 application. One difference you might have noticed is the AS keyword. We are naming this stage build-env to indicate that it is part of the build stage.

The WORKDIR instruction remains the same. Next we copy the project file and the NuGet.config file. We need to override the default NuGet configuration; otherwise we will be pointing to the stable version of the NuGet feed instead of the 2.1 preview version. With these 2 files copied, we run the dotnet restore command to restore all the NuGet packages required for running the application.

We also copy any static resources to the container. With all the files and dependencies available inside the container, we trigger the release with the help of the dotnet publish command. Upon successful completion of the publish command, the releaseOutput directory is created inside the container.

Package artifact using runtime image

The part related to containerizing the artifacts is exactly the same as earlier, with just one exception. Earlier we used to copy the contents of the release folder from the host machine to the container; now we source the release output from the build-env image. The base image used here is the dotnet-nightly:2.1-runtime-alpine image.

As we can see, all the steps required for building and packaging the application into a container are specified in the same file. With these changes we are good to build and test our code using the .Net Core 2.1 framework.
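The Dockerfile itself appears as a screenshot in the original post; a minimal sketch for the CoreMVC project, assuming the microsoft/dotnet-nightly repository, standard paths and the entry point assembly name, might look like this:

# build stage: restore packages and publish the release output
FROM microsoft/dotnet-nightly:2.1-sdk AS build-env
WORKDIR /app

# copy the project file and NuGet.config so dotnet restore picks up the 2.1 preview feed
COPY CoreMVC.csproj NuGet.config ./
RUN dotnet restore

# copy the remaining sources and static resources, then publish
COPY . ./
RUN dotnet publish -c Release -o releaseOutput

# runtime stage: package the published artifacts using the Alpine-based runtime image
FROM microsoft/dotnet-nightly:2.1-runtime-alpine
WORKDIR /app
COPY --from=build-env /app/releaseOutput .
EXPOSE 80
ENTRYPOINT ["dotnet", "CoreMVC.dll"]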

The Docker command to build the image is the same as before: docker build -t nileshgule/coremvc .

Error 1 : Mismatched .Net Core version reference

publish runtime error

The image does not build. We get an error saying that a compatible framework version is not found. The .NetCore version 2.0.0 is not found, and the error clearly states that 2.1.0-preview1-25919-02 is installed. To fix the error we update the TargetFramework property in the CoreMVC.csproj file.

Along with the TargetFramework we also add the RuntimeFrameworkVersion to specify the exact version as shown below

<TargetFramework>netcoreapp2.1</TargetFramework>
<RuntimeFrameworkVersion>2.1.0-preview1-25919-02</RuntimeFrameworkVersion>

After making this change, rerun the docker build command. The image is built successfully.

docker build success

We can try to run it with the command docker run -it -p 80:80 nileshgule/coremvc

Error 2 : Default exposure to port 5000

docker run port error

Here we can see that although we are publishing port 80 on both the container and the host, for some reason the container tries to listen on port 5000. This seems to be the default port with the Alpine image. I do not wish to expose port 5000 but port 80, as specified in the Dockerfile. In order to achieve this we need to update the Program.cs file.


We add the UseUrls("http://*:80") statement to the BuildWebHost method. This specifies that the MVC website will listen on port 80. Once again run the docker build command followed by the docker run command. This time the container successfully listens on port 80. At this point we hit our next roadblock.
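A sketch of the updated BuildWebHost method, assuming the default ASP.NET Core 2.x template shape:

// in Program.cs; requires using Microsoft.AspNetCore and Microsoft.AspNetCore.Hosting
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseUrls("http://*:80") // listen on port 80 instead of the default 5000
        .UseStartup<Startup>()
        .Build();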

Error 3 : Application Insight related setting

Application Insights error

We get an error suggesting that the ApplicationInsights package is missing. This package is required for enabling cloud analytics. We do not need it at this point, so let's disable it. Go back to the CoreMVC.csproj file and add the following entry

<PropertyGroup>
  <PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManifest>
</PropertyGroup>

Once again rebuild the image and run it. This time everything is fine and we can browse the site successfully by accessing the URL http://localhost:80

Update WebAPI project

In order to update the WebAPI project we need to follow the same steps:

  1. Create a multi-stage Docker build in the Dockerfile.
  2. Update the framework version in the CoreWebAPI.csproj file
  3. Add the PublishWithAspNetCoreTargetManifest property with a false value
  4. Update the Program.cs file to listen on port 8080, as we will be exposing the WebAPI on that port

Build the image using the command docker build -t nileshgule/corewebapi .

Run the image using docker run -it -p 8080:8080 nileshgule/corewebapi

We can see the output of the running MVC application accessing the WebAPI.

web app with api

You can see that I am using the docker.for.mac.localhost special DNS name to access the webapi container. I explained this in my earlier post.

Conclusion

Except for minor problems, it was quite easy to upgrade from the .Net Core 2.0 base image to the 2.1 Alpine-based image. The final runtime image is almost half the size of the previous image built with the microsoft/aspnetcore:2.0 image. This is what I like the most about Docker and its ecosystem. It allows us to quickly test different technologies without messing up our laptop. Tomorrow, if I wish to revert back to the stable version of .Net Core, I can do so by just changing the Dockerfiles to the earlier version. The same is the case if I want to try some experimental and preview features of the Core 2.1 framework; I don't need to install anything on my base machine.

One feature which I like about the stable version of the aspnetcore:2.0 base image is that we do not need to explicitly set the URLs using the UseUrls method. Once we expose the ports in the Dockerfile they are mapped automatically to the underlying ports. Since the Alpine image is in preview, I expect the full version to address this issue when it is released officially.

If you have not yet started using Docker I would really recommend that you start doing so. For sure you will fall in love with this wonderful technology. If you have already started using it, I would be very happy to hear about your experience.

Until next time Happy Programming.
