
Understand Kubernetes Object–Service

This is the fourth part of the series on building highly scalable multi-container apps using AKS. Previous posts in this series:

In this post we will focus on creating another basic resource in Kubernetes, named Service.

We will be performing the following actions during this post:

  • Understand the basics of Service object in Kubernetes
  • Build Kubernetes service manifest file
  • Create a service using Kubernetes CLI kubectl
  • Get details about Service using kubectl
  • Get details about Service using Kubernetes dashboard
  • Delete service using kubectl

Understand the basics of Service object in Kubernetes

If you recall from the last post, we successfully deployed the web application container using the Deployment controller. If we want to access the web user interface, how do we do that? This is where Kubernetes Service comes into the picture.

A Service provides a stable endpoint for various components to talk to one another. It is an abstraction which defines the policy by which to access the pods. Assume that there are 2 replicas of a pod running on two different nodes, and a new version of the container is to be deployed. The Kubernetes master may decide to schedule the workload for the new version on completely different nodes. It would be extremely difficult for dependent services to keep updating the endpoint addresses every time there is a new deployment. This is where the service discovery mechanism and load balancing within Kubernetes come into the picture. A Service provides a stable endpoint for all workloads which need to be exposed internally within the cluster or even outside the cluster.
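For example, from inside the cluster a Service can be reached through the DNS name that Kubernetes assigns to it, regardless of which nodes the pods land on. The service and namespace names below are placeholders for illustration:

# Kubernetes DNS exposes every service as <service-name>.<namespace>.svc.cluster.local
# Run from inside a pod in the cluster:
curl http://my-service.my-namespace.svc.cluster.local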

Build Kubernetes service manifest file

We can use the kubectl command to create a service with all the parameters from the command line. Personally, I prefer to use a manifest file. Let's look at an example of the Service manifest file.

[Image: service-definition]
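The manifest itself is shown as an image, so here is a minimal sketch of what web-service.yml could look like based on the description that follows. The selector label and targetPort are assumptions; they must match whatever labels and container port the Deployment from the previous post actually uses.

apiVersion: v1
kind: Service
metadata:
  name: webfront
  namespace: abc2018sg
spec:
  type: LoadBalancer
  ports:
    - port: 80          # port exposed by the service
      targetPort: 80    # container port the traffic is forwarded to (assumed)
  selector:
    app: webfront       # assumed label; must match the pod labels from the Deployment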

Like the earlier manifest files we used for the Namespace and the Deployment, we start with the apiVersion and kind attributes. The metadata attribute defines the name of the service as webfront. We also specify the namespace as abc2018sg.

The specification of the service is defined using spec. The type defines how the service will be exposed to other components within the Kubernetes cluster or to the outside world. The available options are ClusterIP, NodePort, LoadBalancer and ExternalName. In our example we are using LoadBalancer. This will expose the service using an Azure load balancer. The LoadBalancer service type automatically creates the NodePort and ClusterIP to which the external load balancer will route the requests.

The default service type is ClusterIP. It exposes the service on the cluster's internal IP. The disadvantage of this is that the service will not be accessible from outside the Kubernetes cluster.

If we use NodePort, the same port will be exposed on all the nodes of the cluster. We can access the service from any node in the cluster by using the specified port number.
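As an illustration, the spec section of a NodePort service might look like the sketch below; the nodePort value is an assumed example and must fall within the cluster's node port range (30000-32767 by default).

spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # assumed example value
  selector:
    app: webfront       # assumed label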

ExternalName maps the service to the contents of the externalName field. This is useful when we wish to connect to a service outside of the Kubernetes cluster using a stable name. The example that comes to mind is that of connecting to a SQL Azure database instance.
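A sketch of an ExternalName service pointing to a hypothetical SQL Azure instance could look like this; the host name is made up for illustration.

apiVersion: v1
kind: Service
metadata:
  name: sql-database
  namespace: abc2018sg
spec:
  type: ExternalName
  externalName: myserver.database.windows.net   # hypothetical SQL Azure host name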

Next, we define the ports which need to be exposed by the service. Here we are exposing port 80.

The last part is the most important one. It links the service to the underlying pods using the selector. Be very careful to match the key-value pairs; spelling errors are the most common mistake when matching the labels if you are typing everything by hand.
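One way to check that the selector really matches the pods is to compare the pod labels with the endpoints the service has picked up. The commands below assume the namespace and service name used in this post.

kubectl get pods --namespace abc2018sg --show-labels
kubectl get endpoints webfront --namespace abc2018sg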

Create a service using Kubernetes CLI kubectl

Save the contents of the manifest shown above in a file named web-service.yml. We use kubectl to create the Service object as follows:

kubectl create --filename web-service.yml
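If you expect to edit the manifest and re-run it, kubectl apply is a common alternative since it can be executed repeatedly against the same file:

kubectl apply --filename web-service.yml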

Get details about Service using kubectl

We can query the Kubernetes API server to get the list of services. We can use any one of the following commands:

kubectl get services --namespace abc2018sg
kubectl get services -n abc2018sg

Once we have the service name, which is webfront, we can query the service details:

kubectl get service webfront --namespace abc2018sg
kubectl get service webfront -n abc2018sg

The output of the above commands is shown below

[Image: kubectl output]
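The screenshot is not reproduced here, but based on the values discussed below the output would look roughly like this; the CLUSTER-IP and AGE columns are placeholders.

NAME       TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
webfront   LoadBalancer   10.0.x.x     104.4.129.205   80:31960/TCP   5m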

We can see that, for the LoadBalancer service type, both the Cluster-IP and the External-IP are created. The service exposes port 80 along with an automatically assigned NodePort, 31960. We can now access the service using the external IP and port combination http://104.4.129.205:80

Provisioning of the public IP address can take a few minutes, depending on the cloud provider.
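Instead of polling manually, we can watch the service until the EXTERNAL-IP column changes from <pending> to an actual address:

kubectl get service webfront --namespace abc2018sg --watch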

[Image: web UI]

Note that since I am using an Azure subscription, I keep the cluster running only for a short period of time. When the cluster is recreated, the public IP address will be different.

Get details about Service using Kubernetes dashboard

The process is similar to the one we used in the previous post to access the details of the pod and the deployment. We filter by the namespace and then open the Services section.
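If you are following along on AKS and have not opened the dashboard yet, one way to launch it is through the Azure CLI; the resource group and cluster names below are placeholders.

az aks browse --resource-group myResourceGroup --name myAKSCluster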

[Image: service initializing]

Delete service using kubectl

Deleting the service is similar to deleting other objects.

kubectl delete service webfront --namespace abc2018sg
kubectl delete service webfront -n abc2018sg

You can use the dashboard or the kubectl command to query the service details and ensure that it has been deleted successfully. I will skip the part related to deleting the service from the UI using the Kubernetes dashboard. You can try it out yourself; it is the same as what we did in the last post for deleting the deployment.
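For example, listing the services in the namespace again should no longer show webfront:

kubectl get services --namespace abc2018sg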

Conclusion

In this post we saw how to access pods using a Service. A Service gives a stable endpoint that other dependent components can rely on, even if the nodes on which the pods are running are impacted. Services are a topic that needs more than one post to cover all the major aspects related to networking, load balancing, scaling and so on. We will cover them in more detail as and when the need arises. In future posts we will look at other service types like NodePort and ClusterIP.

As always, the complete source code for this post and the related posts is available on GitHub.

Until next time code with passion and strive for excellence.
