Acceptance Test Code Coverage

My earlier posts were related to Acceptance Testing Anti Patterns and different methods to measure Code Quality. I recently had a discussion with one of the consultants working with our team regarding Code Quality and Code Coverage. He suggested that we could measure code coverage for Functional or Acceptance tests in the same manner we do it for unit tests. His idea was to use PartCover, the open source tool I mentioned in the previous post: run the Fitnesse tests and measure coverage for the code executed as part of running those tests. Ever since that discussion I had wanted to try it myself. While reading the MSDN articles on Code Coverage for unit tests, I stumbled upon the command line utilities that ship with Microsoft Visual Studio 2010, which can also be used to measure Code Coverage. So this post is about using those command line tools to measure Code Coverage, not by running any unit tests, but by running the Functional tests written in Fitnesse.

How to measure Code Coverage by running Acceptance Tests

There are five main steps involved in measuring the code coverage:

  1. Instrument the assembly (or set of assemblies) whose coverage we want to measure, using VSInstr.exe
  2. Start the performance monitoring tool – VSPerfCmd
  3. Execute the code by running Fitnesse tests
  4. Stop the performance monitoring tool
  5. Analyse Code Coverage results

This looks like a lengthy process, but believe me it's pretty simple. Let's look at each of these steps. First and foremost we need to instrument the assembly. When we use the Visual Studio IDE, we select the assemblies we are interested in for coverage, and behind the scenes the IDE instruments them for us when we run the unit tests. Since we are not going to use unit tests in this demo, we need to instrument the assembly manually. Instrumentation is the process of adding additional information to the assembly for the purpose of gathering coverage data.

Please note that I’ll reuse the UserManagement example from the previous post. I am interested in measuring the Code Coverage for the service layer code, so we’ll need to run the VSInstr command at the Visual Studio Command Prompt and instrument the UserManagementService dll. Make sure you open the right version of the Visual Studio Command Prompt for your operating system. In my case I had to choose the 64-bit version as I have a 64-bit OS installed on my system.

[Screenshot: Visual Studio Command Prompt]

The syntax for VSInstr is very simple: in the command line parameters we specify that we are interested in coverage, along with the name of the exe or dll. Make sure the project is built before running this step, and navigate to the output directory where the dll is located. In our case, since Fitnesse will run the code in the monitored process, we need to instrument the dll in the Debug folder of the Fitnesse fixtures project. We issue the following command at the command prompt:

vsinstr -coverage UserManagementService.dll

If you wish to understand what goes on behind the scenes you can open the Debug folder in Windows Explorer. These are the contents of the Debug folder before I run the Instrumentation command.

[Screenshot: Debug folder contents before instrumentation]

Let's run the command and compare the changes. If the command was successful you should see the following output:

[Screenshot: VSInstr output confirming successful instrumentation]

Note the highlighted text, which says that the original file has been backed up and the instrumentation was successful. We can verify this by examining the contents of the Debug folder once again.

[Screenshot: Debug folder contents after instrumentation]

We can see that the original UserManagementService.dll has been renamed, and a new .instr.pdb file has been created alongside the original .pdb. If you compare the file sizes, there is a difference between the original and the instrumented dll. We are now ready to measure the code coverage.

We move on to the second step of starting the performance monitoring tool. This is named VSPerfCmd and has the following syntax:

vsperfcmd -start:coverage -output:FitnesseCoverage.coverage

Here we are starting the performance monitoring using the -start command line switch with coverage as its parameter. We also specify the output filename using the -output switch; the filename extension is .coverage. In some cases a different command prompt window opens automatically after running this command, but don't be surprised if a new window doesn't open up. As long as no error is reported on the command prompt we can be assured that everything is going smoothly.
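If you want extra reassurance that the monitor really is running, VSPerfCmd also has a status switch which, as far as I know, lists the current profiling state while a session is active:

vsperfcmd -status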

As a result of running the VSPerfCmd command, the monitoring tool is now waiting for the code in the instrumented dll to be executed by some means. We can use any method to exercise the code: running the application or exe, running unit tests, or, as we wish to do here, running the Fitnesse tests. So start the Fitnesse server and execute the Fitnesse tests that exercise the code in our instrumented dll, along the lines of the sketch below. This brings us to the end of our third step.
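As a rough sketch, assuming a standard FitNesse standalone jar and a suite page named UserManagementSuite (the suite name is an illustrative assumption, not part of the original example), starting the server could look like this:

java -jar fitnesse-standalone.jar -p 8080

You would then browse to http://localhost:8080/UserManagementSuite?suite to run the suite, or run it in a single shot from the command line:

java -jar fitnesse-standalone.jar -c "UserManagementSuite?suite&format=text"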

There is no automatic way of signalling to the monitoring tool that we have finished executing our code. We return to the command prompt and stop the performance monitoring tool by running the following command:

vsperfcmd -shutdown

Once this command has executed successfully, you'll see the coverage file written to disk. The last step is to analyse the coverage results. We can drag and drop the coverage file onto a running instance of Visual Studio, or double-click it to open it in a new instance. Following are the results of the Code Coverage:

[Screenshot: Code Coverage results in Visual Studio]

The results are presented exactly the same way as they are for unit test code coverage, but the important thing to remember here is that these are the coverage results for Fitnesse tests. We can drill down to the method level and see the coverage using the same color convention as unit tests. Some may wonder what the big deal is; it's just showing the percentage of code that got executed. Let me explain why I think this is helpful.

Advantages of measuring and monitoring Functional Test Coverage

Let me once again share my experience with the readers to give some more context on this topic. On one of the projects I worked on, we were developing a complex algorithm. We followed TDD, and the majority of the code was unit tested, but there were no automated functional tests; we always had to rely on manual testing to verify the functionality. The scope of the project was very large compared to normal projects, and we would encounter numerous regression issues as well as functional issues during testing because of mismatches between the business requirements and the developers' or testers' interpretation of them. I wished we had a good set of functional tests, which would have made our job a lot easier during those times.

Assuming that such an algorithm is central to the business needs, we can use functional tests to verify the business functionality using the approach I showed here. We might not be able to cover 100% of the code using functional tests, but even 70-80% coverage would mean we have built a strong suite of functional tests around one of the most important pieces of business functionality. This can definitely help reduce regression issues.

We can also look at it from another angle. Assume you have a very critical business requirement which needs to be thoroughly tested, and the product owner wants to make sure that at least 95% of the business functionality is exercised using a functional testing tool like Fitnesse. Ordinarily there is no way for the Product Owner or a Business Analyst to know what percentage of the business functionality is covered by the functional tests written by the business users. Using this technique we can help the business users identify the missing functional cases, so that they can add more functional tests to cover the business needs.

Another place where I find this useful is in getting rid of dead code. Imagine a project which has been going on for a few years. Over that period a lot of functionality will have been added, deleted or modified, and there might be fragments of code which are no longer required. If you are religious about unit testing, I assume this code would have been unit tested, and as such would show up in the unit test coverage report as covered. But in reality there could be dead code which is not used anywhere except in unit tests. This usually happens with large projects in the maintenance phase, or ones where multiple teams are working on the same codebase. Sometimes people are sceptical about deleting code which is no longer required, but these days all source control systems allow us to recover deleted code.

We can identify the dead code by measuring code coverage using the above mentioned steps while exercising all parts of the application. We can either run all the functional tests, assuming they are up to date, or use all the functionality of the application in a live-like environment; that should also give us the coverage statistics. Once we have identified the dead code, we can first comment it out, rerun the same process and ensure nothing is broken. If everything works fine you can go ahead and delete the code. You will have saved yourself and your team a few bytes of space and also made the code more maintainable. Maybe your boss will even be tempted to give you some goodies if this results in performance benefits at the application level.

Note: if you modify any code in the dll or exe after it was instrumented, you need to instrument it again so that the code modifications are taken into account. If you forget to do this, the coverage will be reported against the earlier instrumented version of the dll. As a best practice I suggest you do a clean rebuild of the solution after making modifications, delete all the previously instrumented assemblies and instrument them again, along the lines of the sketch below.
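As a rough sketch of that rebuild-and-reinstrument cycle (the solution name UserManagement.sln is an assumption based on this example):

rem Rebuild so the Debug folder contains fresh, uninstrumented binaries
msbuild UserManagement.sln /t:Rebuild /p:Configuration=Debug
rem Instrument the freshly built dll again
vsinstr -coverage UserManagementService.dll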

Also don’t forget to provide the -coverage parameter on the command line. If you don’t supply this parameter, you’ll get an error when the tool tries to write the coverage data to the .coverage file.

Conclusion

I hope the examples provided here are good enough for you to realise the importance of measuring Code Coverage for Functional tests. Not only can it help you reduce regression issues, it can also help the business users identify missing functional tests. Either way, the benefits will help you and your team build a good quality product. It is not a very tedious task to follow these 5 simple steps. If you wish, you can go one step further and use this as another metric in your build process to measure the quality of the build; the whole process can even be integrated into the Continuous Integration (CI) build, as sketched below. The possibilities are many, we just need to think a little bit outside the box.
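To give an idea of what such a CI step might look like, here is a minimal batch sketch chaining the five steps together. The solution name, fixture output path and suite name are all assumptions based on the example in this post, not a definitive implementation:

rem 1. Rebuild and instrument (solution name and path are assumptions)
msbuild UserManagement.sln /t:Rebuild /p:Configuration=Debug
cd FitnesseFixtures\bin\Debug
vsinstr -coverage UserManagementService.dll
rem 2. Start the coverage monitor
vsperfcmd -start:coverage -output:FitnesseCoverage.coverage
rem 3. Run the Fitnesse suite (suite name is an assumption)
java -jar fitnesse-standalone.jar -c "UserManagementSuite?suite&format=text"
rem 4. Stop the monitor; this writes FitnesseCoverage.coverage to disk
vsperfcmd -shutdown

The resulting FitnesseCoverage.coverage file can then be opened in Visual Studio or archived as a build artifact.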

There aren’t any changes to the source code of the project I used in my earlier post on the Functional Test Anti Pattern, so I have not attached any solution files with this post.

Until next time, Happy Programming!

 

