Provision Docker Swarm cluster in Azure

Background

This is a continuation of the series on Continuous Deployment of a Multi-Container app in Azure using VSTS and Docker. So far we have covered the following topics:

  • Part 1 – Overview of different tools and technologies we will be using during this series
  • Part 2 – Setup VSTS Continuous Integration (CI) build for DotNet Core 2.1 multi-container app
  • Part 3 – Integrate DockerHub container registry with VSTS CI build to publish Docker images

This post is about provisioning the resources in the Azure cloud to deploy our container images. We will be using a Docker Swarm cluster to run containers for the two images we published to DockerHub in the previous post.

Pre-requisites

1 - RSA key for SSH connectivity

One of the provisioning steps requires us to provide an RSA public key. It will be used when we connect to the master node of the Swarm cluster from our terminal to issue commands. Later in the series we will also automate the communication between the VSTS agent node and the Swarm cluster. If you do not have an RSA key pair yet, you can follow the steps mentioned in the link.

On my Mac I can generate the RSA key pair by running the command

ssh-keygen -t rsa

By default the private and public keys are stored under the ~/.ssh folder. We will need to provide the public key as one of the parameters while creating the Swarm cluster. In a later post we will also need to provide the private key to the secure file copy task in VSTS.
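To have the public key ready for the provisioning step, you can copy its contents to the clipboard. A minimal sketch for macOS, assuming the default key location of ~/.ssh/id_rsa.pub:

# Copy the public key so it can be pasted into the template parameter
cat ~/.ssh/id_rsa.pub | pbcopy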

2 - Azure Subscription

As already mentioned in the first part of this series, a valid Azure subscription is required to spin up multiple virtual machines along with various other resources like public IPs, a virtual network etc., which will form part of the Swarm cluster.

I assume that you have all the pre-requisites satisfied at this point. Before we go any further, let us try to understand the need for Docker Swarm.

Need for Docker Swarm

Usually when we start developing software using containers, it is fine to test the images locally using the Docker daemon installed on a laptop or PC. When we want to scale the application, however, we hit the limits of the resources available on a single machine. This is where Docker Swarm comes into the picture.

Docker Swarm provides cluster management and orchestration features. We can combine multiple physical or virtual machines into a cluster. It offers capabilities like load balancing and the management of services and tasks, and it gives us a single point of interaction, the master (manager) node, through which we control the worker nodes in the cluster. You can refer to the documentation about Docker Swarm key concepts for more details.
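To get a feel for what is involved, here is a minimal sketch of how a swarm is formed by hand with the Docker CLI. The manager is initialized first and each worker joins using the token it prints; the IP address and token below are placeholders.

# On the machine that will act as manager
docker swarm init --advertise-addr 10.0.0.4
# The init command prints a join command with a token; run it on each worker, e.g.
docker swarm join --token <worker-join-token> 10.0.0.4:2377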

Later in the series we will see that activities related to deploying new containers and services are initiated by sending commands to the master. The master then schedules the containers onto worker nodes, taking various constraints into account. This is commonly referred to as desired state configuration in DevOps terminology.

One example of a constraint is the number of replicas for a particular service. We can specify at deployment time that there need to be two replicas of a container image, and the Swarm master will decide where to place them based on the resources available in the cluster, as the sketch below shows.
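As a hedged illustration (the service name and image here are placeholders, not part of this series), creating and scaling a replicated service looks like this:

# Create a service with two replicas; the manager places them on available nodes
docker service create --name web --replicas 2 -p 80:80 nginx
# Later we can change the desired state and Swarm reconciles the cluster to match it
docker service scale web=3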

Provision Swarm cluster using ACS engine

We can provision the Docker Swarm cluster by logging into the Azure portal and searching for Azure Container Service. Azure Container Service provides abstraction and automation around provisioning the multiple resources required to set up the Swarm cluster. We need multiple machines, usually consisting of 1 master and multiple workers, also known as agents. We then need to establish the networking between these nodes and expose the correct ports. All this plumbing work is taken care of for us by Azure Container Service (ACS). It also initializes the Swarm on the nodes and joins the master and agents into the Swarm cluster.

However, there is one limitation. The Docker Swarm orchestrator in ACS runs in standalone mode and not in the integrated swarm mode that was released with Docker 1.12. To overcome this limitation we are going to use the open source ACS Engine based community-contributed quickstart template. This template installs the Docker Swarm orchestrator in fully integrated swarm mode in Azure. Refer to the description of the different parameters used by this template for more details.
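If you prefer the command line over the portal, the same ARM template can be deployed with the Azure CLI. This is only a sketch; the template URI is a placeholder for the azuredeploy.json of the quickstart template, and the parameters file is assumed to hold the values listed in the table further down.

# Create the resource group and deploy the quickstart template into it
az group create --name swarmresourcegroup --location southeastasia
az group deployment create \
  --resource-group swarmresourcegroup \
  --template-uri <URL of the quickstart template azuredeploy.json> \
  --parameters @swarm.parameters.json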

Navigate to the quickstart template link and click the Deploy to Azure button, which will trigger the deployment to Azure. If you are not logged in to the Azure portal, you will be asked to log in before continuing. After a successful login, you should see a page similar to the screenshot below.

Provision Swarm

I chose to create a new resource group named swarmresourcegroup. Here are the other values I specified and the reasons for selecting them.

| Parameter | Value | Description |
| --- | --- | --- |
| Location | Southeast Asia | Azure region closest to your location |
| Agent count | 2 | 2 nodes will be provisioned as agent nodes |
| Agent Endpoint DNS Name Prefix | swarmagent | Prefix used for the agents; we will use this prefix to browse the web URLs |
| Agent VM Size | Standard_D2_V2 | I went with the default value. You can select an appropriate size based on your budget |
| Linux Admin Username | swarmadmin | Required when we connect to the nodes from the terminal over SSH |
| Master Endpoint DNS Name Prefix | swarmmaster | Used when we deploy the application; we will issue commands to the master node |
| Master VM Size | Standard_D2_V2 | Same as the agent VM size above. Different sizes can be specified for the master and agent VMs based on need |
| SSH RSA public key | contents of id_rsa.pub | Copy the contents of the file ~/.ssh/id_rsa.pub |

I did not modify the IP addresses. Verify all the values and click the Purchase button. Depending on the number of resources requested, it can take anywhere between 10 and 20 minutes to provision all the resources and combine them into a Swarm cluster. We can verify that the requested resources were created successfully by navigating to the Resource groups blade and selecting swarmresourcegroup.

swarm resources

We have 15 resources created for us. These include a set of storage accounts, two load balancers (one for the master and one for the agent nodes), and a pair of public IPs, again one each for the master and agent nodes. You can click on each resource to find out more about it.
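The same check can be done from the terminal with the Azure CLI, assuming it is installed and you are logged in:

# List everything that was provisioned into the resource group
az resource list --resource-group swarmresourcegroup --output table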

Next we need to find the fully qualified domain names (FQDNs) of the master and agent nodes so that we can connect to them using SSH or a browser. Click on the hyperlink which says 1 Succeeded under the Deployments label. This will take us to the deployment details. Select the Microsoft.Template link.

deployment template
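Alternatively, the deployment outputs can be read with the Azure CLI; this is just a sketch, assuming the deployment is named Microsoft.Template as shown in the portal:

# Show the outputs section of the deployment, which contains the master and agent FQDNs
az group deployment show --resource-group swarmresourcegroup --name Microsoft.Template --query properties.outputs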

Verify Swarm cluster

We can see the outputs section, which gives us the fully qualified names of the master and agent nodes. Let's connect to the master node and verify that Docker is running in Swarm mode. The first thing we need to do is establish an SSH tunnel so that Docker commands issued from our terminal are executed remotely on the master node. This is done by executing the command

ssh -fNL 2375:localhost:2375 -p 2200 swarmadmin@<master node FQDN>

We are forwarding the local port 2375, which is used by the Docker daemon, to port 2375 on the remote daemon. The Linux admin user swarmadmin and the master DNS name prefix swarmmaster that we specified at provisioning time come into effect here; the full master FQDN is the one shown in the outputs section above. You will be asked to add the host to the known hosts. Say yes and proceed further.

ssh bridge

With the SSH tunnel established, we can start issuing Docker and Docker Swarm commands from the terminal. We set the environment variable DOCKER_HOST to the tunnelled port 2375 using export, and then run the docker info command to get the status of the cluster.
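The two commands look like this, assuming the tunnel from the previous step is forwarding localhost:2375:

# Point the local Docker client at the tunnelled daemon on the master node
export DOCKER_HOST=tcp://localhost:2375
# Query the cluster status
docker info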

docker swarm info

We can see that Swarm mode is active, and that there are 3 nodes in the cluster, including 1 manager.
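For a node-by-node view, we can also run docker node ls against the same tunnelled daemon; it lists each member of the swarm along with its availability and manager status.

# List every node in the swarm with its status and role
docker node ls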

Conclusion

There are different ways in which we can provision a Docker Swarm cluster. In this post we used the approach of deploying an ARM template to Azure using the ACS Engine. With just a few configuration settings we can get the Swarm cluster up and running. The ARM template really simplifies the task of setting up the cluster. Without the template we would need to manually create all the 15 resources that we saw above. We would also need to set up the Swarm-related pieces like initializing the swarm, designating nodes as master and agents, and establishing connections between them by creating secure keys. All this boilerplate is taken care of for us by the deployment template. Within 15 to 20 minutes we can get started with a fully functional Docker Swarm cluster.

This is an area where cloud technologies are making a huge difference. In many big organizations it can take days or even months to provision the resources that we just spun up within minutes. With self-service capabilities we can create our own environments and manage them effectively.

In the upcoming posts we will see how to deploy the Docker images that we published to DockerHub in the previous post using a VSTS release pipeline. Until next time, code with passion and strive for excellence.
