Improve code quality using Visual Studio Code Metrics


In my previous post, I blogged about improving code quality using software code metrics. In this post I'll try to demonstrate how to use the code metrics feature in Visual Studio Team System to improve code quality.

Visual Studio Team System Code Metrics

You can open the solution or project for which the code metrics are to be calculated. The analysis can be done at the complete solution level or at the individual project level.


As can be seen from the above screenshot, you can select the Analyze menu from the menu bar. Towards the bottom of the pulled-down menu you can see the Calculate Code Metrics for the selected project and Calculate Code Metrics for Solution options.

I have selected the project option and the output is shown below.


This shows the various metrics at the project level as well as at the individual namespace levels within the project. Except for Depth of Inheritance, all other metrics are aggregates of the values at each namespace level. This is just the beginning. What I like the most is the drill-down feature, wherein you can drill down to the method level by navigating through the hierarchy in the order Project –> Namespace –> Class –> Method.


In the above screenshot I have drilled down to the method level. If you look carefully at the Maintainability Index column, you can find a green box next to the value of the Maintainability Index. VSTS gives you a visual indicator based on the value of the Maintainability Index at each level. The indicators map to the following ranges:

  • 0-9 : Red, which means that the class is very hard to maintain
  • 10-19 : Orange / Amber, which means that the class is somewhat maintainable
  • 20-100 : Green, which indicates that the class is maintainable and easier to change

If you observe carefully, the Depth of Inheritance metric is not applicable at the method or function level. As mentioned in my earlier blog post, Depth of Inheritance tells you how many levels of inheritance are involved in implementing a particular class. Hence this metric is not applicable at the function level.

Lines of Code is a pretty straightforward metric which tells how many executable lines of code there are at the method, class or namespace level.

Two of the most interesting metrics are Class Coupling and Cyclomatic Complexity. Class Coupling tells how many unique classes are used to implement a method or function. At the class level it is the aggregate of unique classes used within the class implementation. At the namespace level it is the aggregate of unique classes used in all the classes, and the same applies at the project and solution levels. This metric can be used to split responsibilities among classes: if some classes are overloaded, trying to do too many things, this metric can reveal it.
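As a rough sketch of how this counting works, consider a method that directly references three distinct classes. The classes and names below are hypothetical, invented purely for illustration:

```csharp
using System;

// Hypothetical helper classes, used only to illustrate the counting.
class OrderRepository { public decimal TotalFor(string id) { return 42m; } }
class SmtpMailer { public void Send(string to, string body) { } }
class AuditLog { public void Write(string entry) { } }

class OrderService
{
    // This method references three unique classes (OrderRepository,
    // SmtpMailer, AuditLog), so on its own it contributes a Class
    // Coupling of 3: a hint that it mixes persistence, notification
    // and logging responsibilities and might be worth splitting.
    public decimal CloseOrder(string orderId)
    {
        decimal total = new OrderRepository().TotalFor(orderId);
        new SmtpMailer().Send("sales@example.com", "Order " + orderId + " closed");
        new AuditLog().Write("Closed order " + orderId);
        return total;
    }
}
```

Extracting the mail and audit calls behind a single abstraction would lower the coupling of this method at the cost of moving it elsewhere; the metric is what makes that trade-off visible.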

Cyclomatic Complexity is one of the most commonly used metrics to measure code quality. It indicates the number of decision points within a function or class. To be very frank, this deserves a dedicated blog post in itself and I might do that sometime in the future. For the time being I'll demonstrate it using a small code snippet.

I have a function which verifies whether a string is a valid EAN13-type barcode based on the length of the string passed in as a parameter.

private bool ValidateBarcodeLength(string barCode)
{
    if (barCode.Length == 13)
    {
        return true;
    }

    return false;
}


I'll run the code metrics and monitor the Cyclomatic Complexity for this function.


If you look at the highlighted line above, it shows the Cyclomatic Complexity for this function as 2. I'll do a small refactoring and let's see if it impacts the Cyclomatic Complexity.

private bool ValidateBarcodeLength(string barCode)
{
    return barCode.Length == 13;
}


I run the Code Metrics once again.


Immediately the Cyclomatic Complexity drops by 1. This is a very trivial example, but good enough to demonstrate the feature I wanted to show.
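To take the idea one step further, a common refactoring that cuts Cyclomatic Complexity is replacing a chain of conditionals with a table lookup. The barcode types and helper below are hypothetical, just to sketch the pattern:

```csharp
using System;
using System.Collections.Generic;

static class BarcodeLengths
{
    // An if/else-if chain: every extra barcode type adds another
    // decision point, so Cyclomatic Complexity grows with the list.
    public static int ExpectedLengthBranchy(string barcodeType)
    {
        if (barcodeType == "EAN13") return 13;
        if (barcodeType == "EAN8") return 8;
        if (barcodeType == "UPCA") return 12;
        return -1;
    }

    static readonly Dictionary<string, int> Lengths = new Dictionary<string, int>
    {
        { "EAN13", 13 }, { "EAN8", 8 }, { "UPCA", 12 }
    };

    // Table-driven version: a single decision point (the lookup),
    // no matter how many barcode types are added to the table later.
    public static int ExpectedLength(string barcodeType)
    {
        int length;
        return Lengths.TryGetValue(barcodeType, out length) ? length : -1;
    }
}
```

Both versions behave the same; running the metrics over them shows the complexity difference directly.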


One of the best things about code metrics in Visual Studio Team System is that they integrate nicely into the IDE. You need not open separate windows or tools.

Using these metrics can help us improve code quality to a good extent. It is only recently that I have started using them. I might soon do a full-fledged post on Cyclomatic Complexity.

Till then Happy Programming :)


Improve code quality using code metrics


In our project we have been trying to automate as many things as possible. One thing which we are yet to do is incorporate code metrics into our automated build process. As part of an Agile process it is suggested that we automate as much as possible. Code metrics are an optional feature, but I personally believe they help a lot in designing quality code and also in maintaining it. In this post I would like to give an overview of the various software code metrics we can use to improve the quality of code. In future posts I'll try to highlight each of them with a dedicated post.

Code metrics

Code metrics are a set of software measures that enable developers to develop better quality code. Some of the most commonly used code metrics are:

Maintainability Index

Maintainability Index is calculated from other factors, like Cyclomatic Complexity and Lines of Code, to indicate how easy or difficult it is to maintain a particular class. The value ranges between 0 and 100 and should be as close to 100 as possible. A lower Maintainability Index means that the class is more difficult to maintain.

Cyclomatic Complexity (CC)

Cyclomatic Complexity is the count of decision points in a class or a method. At the class level it is an aggregate of the cyclomatic complexities of the properties and methods of that class. At the function level it is based on the number of decision points in that function. Examples of decision points are if-else constructs, looping structures like do-while, while, for and foreach loops, and the case labels of switch statements. Every decision point requires more unit-test code to cover it, which results in higher complexity and lower maintainability. Code with large control structures is hence more complex and less maintainable.
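A small, hypothetical example makes the counting concrete: each commented construct below adds a decision point on top of the method's base complexity of 1 (tools differ slightly in exactly how they count switch cases):

```csharp
static class TextStats
{
    public static int CountVowels(string text)
    {
        int count = 0;
        foreach (char c in text)                  // +1: loop
        {
            if (char.IsWhiteSpace(c))             // +1: if
                continue;

            switch (char.ToLowerInvariant(c))     // + one per case label
            {
                case 'a':
                case 'e':
                case 'i':
                case 'o':
                case 'u':
                    count++;
                    break;
            }
        }
        return count;
    }
}
```

Each of those branches also needs its own unit test to be covered, which is why high Cyclomatic Complexity and expensive testing go hand in hand.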

Depth of Inheritance

This indicates the number of levels up to which classes are inherited or extended. It does not include interfaces. If there is multi-level inheritance, it becomes difficult to identify where some of the properties and methods are defined. At the class level this is the depth of the inheritance tree. It can also be calculated at the namespace or project level, where only the highest depth of inheritance is reported.
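A tiny sketch with made-up classes: each level of the chain adds one to the depth, and at the namespace level only the deepest chain (3 here) would be reported. The Depth helper below mimics how the metric walks the base-class chain:

```csharp
using System;

class Vehicle { }            // depth 1 (derives only from Object)
class Car : Vehicle { }      // depth 2
class SportsCar : Car { }    // depth 3: the deepest chain, and hence
                             // the value reported for the namespace

static class Demo
{
    // Walks the base-class chain up to Object, the same way the
    // Depth of Inheritance metric counts levels.
    public static int Depth(Type t)
    {
        int depth = 0;
        while (t != null && t != typeof(object))
        {
            depth++;
            t = t.BaseType;
        }
        return depth;
    }
}
```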

Class Coupling

Class coupling is an indicator of the interdependencies between classes. It is the count of unique classes referred to by a particular class. It does not take into consideration primitive types and framework classes. Class coupling takes into account the unique classes used in various parts of the program: parameters, local variables, base classes, interface implementations and so on. This metric should be as low as possible. A higher value indicates that a class has more dependencies, which increases maintenance costs and makes it difficult to reuse.

Lines of code (LOC)

This is pretty straightforward. It indicates the number of executable lines of code and does not count non-executable lines such as comments and whitespace. If a class or method consists of too many lines of code, it might be an indication that it is trying to do too much. In such cases we should try splitting it up.


In my personal opinion very few people make use of these metrics to improve their code quality. Many of the senior developers I come across are not aware of them at all, or simply ignore them even when their favourite IDE supports them.

There are various tools available to measure the above-mentioned software metrics. Visual Studio Team System (VSTS) comes with a built-in code metrics feature. There are also commercial tools like NDepend and NCover; as far as I know, NCover provides an option to report only Cyclomatic Complexity. The best part of using these tools is that they can provide feedback in the form of graphical indicators.

For example, VSTS can show the Maintainability Index with a green, orange or red box based on the value.

In future posts I'll try to demonstrate each of the above metrics using Visual Studio Team System. Till then Happy Programming :)


Making NCover & NCover Explorer work on 64 bit machine


In our project we have integrated a code coverage check into the automated build process. If the code is not covered to a minimum level by unit tests, we fail the build on the continuous integration server. Recently we migrated our development machines to 64-bit Windows 7. Similar to the continuous integration build, we have a local build file which we run to ensure that everything works as expected before checking changes in to the source repository. After moving to the 64-bit machines our local build started failing. Since our continuous integration server hosting CruiseControl.NET was still 32-bit, it kept working fine.

Possible Reason for NCover & NCover Explorer not working

We are using the TestDriven.Net Visual Studio add-in, which ships with its own copies of the NCover and NCoverExplorer executables. From our MSBuild file we invoke these two executables to calculate the code coverage and to integrate the results as HTML output in the CCNet dashboard.

Although we install the TestDriven.Net add-in in the default location under Program Files for Visual Studio integration, for the purpose of the automated build we copy all the dependent executables and third-party DLLs, like NUnit and Rhino Mocks, into a directory called Dependencies.

If we use the add-in and run the coverage from the Visual Studio IDE, it works fine on the 64-bit machine. But if we try to run the NCover executable from the command line, it throws the error “Profiler process terminated. Profiler connection not established.”.

This is because both NCover and NCoverExplorer are 32-bit executables. To run them on a 64-bit operating system we need to explicitly force them to run under the x86 emulator (WOW64). We also use the NUnit console executable to run a set of unit tests, and the same applies to it. However, NUnit versions later than 2.4.2 ship with a separate executable named NUnit-Console-x86.exe, which can be used directly on a 64-bit operating system.


In order to force the NCover.Console executable to run in 32-bit mode, we need to use the CorFlags.exe utility from the .NET SDK. Usually this executable is located at “C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin”.

Following is the syntax to use this utility

CorFlags.exe NCover.Console.exe /32BIT+

Please note that you’ll need to give the complete path to both the CorFlags.exe as well as NCover.Console.exe.
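After flipping the flag, you can sanity-check it by running CorFlags against the assembly with no switches; it prints the assembly's header flags, and the 32BIT line should now read 1. The paths below are illustrative, assuming the default SDK location mentioned above and the Dependencies directory from our build:

```shell
REM Force the 32-bit flag on the assembly (full paths required):
"C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\CorFlags.exe" C:\Build\Dependencies\NCover.Console.exe /32BIT+

REM Inspect the header afterwards; 32BIT should now show 1:
"C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\CorFlags.exe" C:\Build\Dependencies\NCover.Console.exe
```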

This step ensures that NCover.Console.exe runs as a 32-bit process under WOW64. Even after doing this, sometimes the build fails saying that NCoverExplorer exited with code –2. This is because we have set the property to fail the build if minimum coverage is not met. But ideally, in a local build you would want to see the coverage report so you know which parts of the code need to be covered.

If you want to generate the HTML report even if the minimum coverage is not achieved, set the ContinueOnError property of the NCover task in the MSBuild script, as in the fragment below.




    ContinueOnError="true"

Note also that the CommandLineExe property points to the x86 version of the NUnit console executable.
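Since the fragment above shows only the one attribute, here is a fuller sketch of what such an NCover task invocation can look like. The paths and the test assembly name are placeholders, and apart from CommandLineExe and ContinueOnError the attributes shown are illustrative rather than prescriptive:

```xml
<!-- Illustrative sketch only; adjust paths and names to your build. -->
<NCover ToolPath="..\Dependencies\"
        CommandLineExe="..\Dependencies\nunit-console-x86.exe"
        CommandLineArgs="MyProject.Tests.dll"
        CoverageFile="coverage.xml"
        ContinueOnError="true" />
```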


I wish that, like NUnit, NCover shipped an x86-specific executable that works on 64-bit machines out of the box. It would also be great if the tool could infer directly from the command line whether to invoke the 32-bit or the 64-bit version. Otherwise developers need to change their build scripts to detect the type of operating system and call the respective executable.

Hope this helps. Until next time happy programming :)


Management responsibility in Agile projects


I have been working on Agile projects for the past couple of years. Due to the distributed teams we have in our organization, there are various challenges in making Agile work and scale. Some of the challenges are merely technical, while others are process-related and organizational.

Role of organization in promoting Agile

Before starting with anything else I would like to refer to the Agile manifesto and the principles on which Agile is based. These are documented in the link

The point I would like to highlight is “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.” Many a time organizations forget this point. This especially happens when there is a lot of hierarchy involved in decision making.

Agile works on the basis of collaboration. But in a hierarchical organization things don't move fast enough, and they start affecting the progress of the Agile team. If the team's impediments are not resolved in a timely manner, work is not going to get done within the stipulated time and budget.

One example that comes to my mind is when different teams are responsible for different needs of the organization. We have teams which maintain the infrastructure needs of the developer teams. In order to test our software on different operating systems, we need machines built with those operating systems installed. But many a time the infrastructure teams are busy with some other project and do not have enough bandwidth to support our requirement.

Many a time simple things, like giving a developer administrator rights on the machine where he is supposed to develop, are sacrificed under the pretext of security. Getting admin rights becomes a long process involving approvals from various people in the organization. This clearly shows that the organization doesn't trust its people to do the job.

Another problem is that when teams give requirements for the developer environment, infrastructure teams ignore them and build systems from whatever template they have. On many occasions we have requested systems with, let's say, 4 GB RAM and 30 GB hard disk space on the C drive, but the infrastructure team comes back with a machine built from a template with 2 GB RAM and 20-25 GB of space on the C drive.

This is clearly against the Agile principle of providing the team what it needs. If we go back to the infrastructure team saying we need 4 GB of RAM to run Visual Studio, they start pointing to the hardware requirements for Visual Studio, which say a minimum of 1 GB is enough. They look only at the minimum requirement; in practice, anyone who has worked with Visual Studio knows that even 4 GB of RAM is barely enough for a comfortable experience.


Organizations should realize that adopting Agile involves change at all levels. It's not enough just to have teams formed across the globe and start having daily scrum and scrum-of-scrums calls. There needs to be involvement from all stakeholders, and every attempt should be made to remove impediments at the earliest. This would definitely help the teams stay motivated and deliver quality software.


ASP.NET state server is cluster unaware


In ASP.NET, state management is one of the most common requirements for many web applications. It enables developers to share data across user sessions (session state) and across requests on the same web server using caching and application state. The ASP.NET framework offers various options for scaling state management. In this post I am going to highlight the issues I faced recently with session state management.

Session State Management

ASP.NET offers different options to scale session state by means of the InProc, Out-of-Proc (StateServer) and SQLServer modes. All of these work very well with web sites. We needed to use the state information in a WCF web service. WCF does offer a state maintenance feature, but that state is always stored in the server's memory, as in InProc mode. Hence this solution cannot scale to a web farm or web garden deployment, which are used for failover scenarios.

We can overcome this limitation by enabling the ASP.NET compatibility mode for the WCF service, by decorating the service with the appropriate attribute. This MSDN article talks more about it. This ensures that when the WCF service is hosted in IIS, it leverages the features available to the ASP.NET pipeline.
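Concretely, the mechanism is the AspNetCompatibilityRequirements attribute from System.ServiceModel.Activation. A minimal sketch, with a hypothetical service contract invented for illustration, might look like this:

```csharp
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

// RequirementsMode.Required opts the service into the ASP.NET
// pipeline, which is what makes HttpContext.Current (and hence
// session state) available inside operations when hosted in IIS.
[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        HttpContext.Current.Session["LastCaller"] = name;
        return "Hello " + name;
    }
}
```

The web.config additionally needs <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> under system.serviceModel for the attribute to take effect.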

In our project we use a Windows 2008 cluster and have multiple web servers which are load balanced using a content switch. We decided to store the session data on the cluster in Out-of-Proc mode. But unfortunately the ASP.NET state service is not cluster-aware. This means that if the primary machine in the cluster fails and it switches to the secondary machine, the state information does not get copied over to the secondary machine automatically. For this reason it can't scale out very well.


I wish the Microsoft guys had given a thought to the ASP.NET state service and made it work in a clustered environment. In the absence of that, we can try using session state in SQLServer mode. This works well in a clustered environment, as the data is stored in physical tables in a SQL Server database. The SQL Server itself can be set up in a cluster, which makes this the most scalable option due to its failover capability. But application performance might take a hit, as session state is slowest in SQLServer mode.
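Switching to SQLServer mode is essentially a web.config change; the server name and timeout below are placeholders for your environment:

```xml
<!-- web.config sketch: server name and timeout are placeholders. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQLCLUSTER01;Integrated Security=SSPI;"
                cookieless="false"
                timeout="20" />
</system.web>
```

The backing session database is provisioned with the aspnet_regsql.exe tool, along the lines of aspnet_regsql.exe -S SQLCLUSTER01 -E -ssadd -sstype p, where -sstype p keeps the state in a persisted database, which suits a failover cluster.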

Alternatively, we can try other scalable options like a distributed cache. There are multiple choices available if somebody wishes to try any of them:

  • Microsoft Velocity
  • Scale Out

I haven't tried these so far due to time constraints. But for the next phase of the project I'll be evaluating them.

Hope this helps. Until next time happy programming :)


Remove secure sites using Appcmd in Windows 7 & 2008


Recently we encountered some problems while installing and uninstalling a secure web site on a Windows 2008 machine. We are using Windows Installer XML (WiX) to generate MSI packages. My team is responsible for building a secure service which uses Windows Identity Foundation to authenticate and authorize users. As part of the installation we had to enable an https binding and map port 81 to the web site.

Problems while installing and uninstalling Web site which uses SSL

Installing a web site which uses http is quite simple using WiX. While installing an SSL-enabled web site, we need to register the certificate with the IIS certificate collection. This can be done using the WebSite element and associating a CertificateRef element with it, as shown below.

<Component Id="CraeteWebSite" Guid="{5624171D-82E2-4e6e-B616-C54227E8F422}">

    <iis:WebSite Directory="INSTALLDIR" Id="GHS_UserManagement" Description="GHS_UserManagement">

        <iis:WebAddress Id="AllUnassigned" Port="81" Secure="yes" />

        <iis:CertificateRef Id="MyCertificateUM" />

    </iis:WebSite>

</Component>

We are using the Appcmd command, available in Windows 7 and Windows 2008, to manage IIS-related activities. Appcmd is a command-line tool for managing IIS 7.

This is a nice article  on getting started with appcmd -

For more details you can also refer to Microsoft documentation of this exe at


When uninstalling, we can remove the web site using the Add/Remove Programs dialog. But an entry remains in the IIS manager with a name something like SITE_2. The number varies based on the site ID. By default a site ID is generated automatically when a new site is added to IIS, but we can also give each site a unique site ID of our own.

You can read more about site identifier at Chris Crowe’s blog

Default Web Site is always assigned site ID 1.

These web sites whose names start with SITE_<<number>> are called orphaned sites. They do not cause any harm and can be deleted safely.
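If you prefer to clean them up from the command line rather than the IIS manager, Appcmd can list and delete them directly; the site name below matches the SITE_2 example above:

```shell
REM List all sites IIS knows about, including orphaned entries:
%windir%\system32\inetsrv\appcmd list sites

REM Delete an orphaned entry by its name:
%windir%\system32\inetsrv\appcmd delete site "SITE_2"
```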

For testing purposes we tried changing the site bindings and running the site in normal http mode. In that case everything worked as expected when running the uninstaller.

We figured out that, because we were adding bindings to the web site to enable SSL, the uninstaller was not able to remove all entries from the IIS metabase while uninstalling the site. This resulted in an orphan site being created after uninstallation. The virtual directory and the physical contents would get deleted from the file system, but IIS would still show us SITE_2.


To resolve the issue of the orphan site, we modified the WiX script to run the uninstall as a two-step process. In the first step we remove the https binding for the web site. This is done as part of a custom action which removes all the bindings for the site, as shown below.

<CustomAction Id="RemoveBindings" Property="cmd" ExeCommand="/c %windir%\system32\inetsrv\AppCmd SET SITE &quot;GHS_UserManagement&quot; /-bindings" Execute="oncePerProcess" Return="check" />

Please note the /-bindings parameter, which does the trick of removing all the bindings for the web site.

Once that succeeds, in the second step we delete the web site using the Appcmd command.

<CustomAction Id="DeleteWebSite" Property="cmd" ExeCommand="/c %windir%\system32\inetsrv\AppCmd DELETE SITE &quot;GHS_UserManagement&quot;" Execute="oncePerProcess" Return="check" />

This seems to delete everything from IIS without any problem.

Even after all this, if the physical directory remains, we can delete it as well by running the rmdir command as follows.

<CustomAction Id="RemoveUserManagementPhysicalPath" Property="cmd" ExeCommand="/c rmdir /s /q %SystemDrive%\Inetpub\wwwroot\GHS_UserManagementService" Execute="oncePerProcess" Return="check" />


I am not sure if this is the perfect solution; there might be a better approach to resolve this issue. But currently it works for our requirements. If anyone knows a better way to handle this scenario, I would like to hear from them.

Until next time happy programming.