Acceptance Test Code Coverage

My earlier posts covered an Acceptance Testing Anti Pattern and different methods to measure Code Quality. I recently had a discussion with one of the consultants working with our team about Code Quality and Code Coverage. He suggested that we could measure code coverage for functional or acceptance tests in the same manner we do it for unit tests. His idea was to use an open source tool called PartCover that I mentioned in the previous post: we could run Fitnesse tests and use PartCover to measure coverage of the code executed by those tests. Ever since that discussion I wanted to try it myself. While reading the MSDN articles on Code Coverage for unit tests, I stumbled upon the command line utilities that ship with Microsoft Visual Studio 2010, which can also be used to measure Code Coverage. So this post is about using those command line tools to measure Code Coverage, not by running unit tests, but by running functional tests written in Fitnesse.

How to measure Code Coverage by running Acceptance Tests

There are five main steps involved in measuring the code coverage this way.

  1. Instrument the assembly or a set of assemblies for which we are interested in measuring the coverage using VSInstr.exe
  2. Start the performance monitoring tool – VSPerfCmd
  3. Execute the code by running Fitnesse tests
  4. Stop the performance monitoring tool
  5. Analyse Code Coverage results

This looks like a lengthy process, but believe me it's pretty simple. Let's look at each of these steps. First and foremost we need to instrument the assembly. When we use the Visual Studio IDE, we select the assemblies we are interested in running coverage for, and behind the scenes the IDE instruments them for us when we run the unit tests. Since we are not going to use unit tests in this demo, we need to instrument the assembly manually. Instrumentation is the process of adding additional information to the assembly for the purpose of gathering coverage data.

Please note that I'll reuse the UserManagement example from the previous post. I am interested in measuring the Code Coverage for the service layer code, so we'll run the VSInstr command on the Visual Studio Command Prompt and instrument the UserManagementService dll. Make sure you open the right version of the Visual Studio Command Prompt for your operating system; in my case I had to choose the 64-bit version as I have a 64-bit OS installed.

[Screenshot: Visual Studio Command Prompt]

The syntax for VSInstr is very simple: in the command line parameters we specify that we are interested in coverage, along with the name of the exe or dll. Make sure the project is built before running this step. Navigate to the output directory where the dll is located. In our case, since Fitnesse will exercise the monitored code, we need to instrument the dll in the Debug folder of the Fitnesse fixtures project. We issue the following command at the command prompt:

vsinstr -coverage UserManagementService.dll

If you wish to understand what goes on behind the scenes you can open the Debug folder in Windows Explorer. These are the contents of the Debug folder before I run the Instrumentation command.

[Screenshot: Debug folder contents before instrumentation]

Let's run the command and compare the changes. If the command was successful you should see the following output:

[Screenshot: VSInstr output on the command prompt]

Note the highlighted text, which says that the original file has been backed up and the instrumentation was successful. We can verify this by checking the contents of the Debug folder once again.

[Screenshot: Debug folder contents after instrumentation]

We can see that the original UserManagementServices.dll has been renamed, and a new .instr.pdb file has been created alongside the .pdb file. If you compare the file sizes there is a difference between the original and the instrumented dll. We are now ready to measure the code coverage.

We move on to the second step of starting the performance monitoring tool. This is named VSPerfCmd and has the following syntax:

vsperfcmd -start:coverage -output:FitnesseCoverage.coverage

Here we start the performance monitoring using the -start command line switch with coverage as its parameter. We also specify the output filename using the -output switch; the filename extension is .coverage. In some cases a different command prompt window opens automatically after running this command, but don't be surprised if a new window doesn't open up. As long as no error is reported on the command prompt we can be assured that everything is proceeding smoothly.

As a result of running the VSPerfCmd command, the monitoring tool waits for the code in the instrumented dll to be executed by some means. We can exercise the code any way we like: by running the application or exe, by running unit tests, or, as we wish to do here, by running the Fitnesse tests. So start the Fitnesse server and execute the Fitnesse tests that exercise the code in our instrumented dll. This brings us to the end of the third step.

There is no automatic way of signalling to the performance monitoring tool that we have finished executing our code. We return to the command prompt and stop it by running the following command:

vsperfcmd -shutdown

Once this command has executed successfully, you'll see the coverage file written to disk. The last step is to analyse the coverage results. We can drag and drop the coverage file onto an instance of Visual Studio, or double click it to open a new instance. Following are the results of the Code Coverage:

[Screenshot: Code Coverage results in Visual Studio]

The results are represented exactly the same way they are shown for unit test code coverage, but the important thing to remember is that these are the coverage results for Fitnesse tests. We can drill down to the method level and see the coverage using the same colour convention as unit tests. Some may wonder what the big deal is, since it's just showing the percentage of code that got executed. Let me explain why I think this is helpful. For reference, the full command sequence is summarised below.
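Putting the steps together, the whole run looks roughly like this from the Visual Studio Command Prompt (a minimal sketch; the assembly name, output filename and the way you run the Fitnesse tests will differ for your project):

    rem 1. Instrument the assembly we want coverage for
    vsinstr -coverage UserManagementService.dll

    rem 2. Start the coverage monitor
    vsperfcmd -start:coverage -output:FitnesseCoverage.coverage

    rem 3. Exercise the code, e.g. run the Fitnesse tests against the instrumented dll

    rem 4. Stop the monitor and flush the .coverage file
    vsperfcmd -shutdown

    rem 5. Open FitnesseCoverage.coverage in Visual Studio to analyse the results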

Advantages of measuring and monitoring Functional Test Coverage

Let me once again share my experience to give some more context on this topic. On one of the projects I worked on, we were developing a complex algorithm. We followed TDD on this project and the majority of the code was unit tested, but there were no automated functional tests; we always had to rely on manual testing to verify the functionality. The scope of the project was very large compared to normal projects. We would encounter numerous regression issues as well as functional issues during testing because of misunderstandings between the business requirements and the developers' or testers' interpretation. I wished we had a good set of functional tests, which would have made our job a lot easier during those times.

Assuming that this algorithm is central to the business needs, we can use functional tests to verify the business functionality using the approach I showed here. We might not be able to cover 100% of the code with functional tests, but even 70-80% coverage would mean we have built a strong suite of functional tests around one of the most important pieces of business functionality. This can definitely help reduce regression issues.

We can also look at it from another angle. Assume you have a very critical business requirement which needs to be thoroughly tested. The product owner wants to make sure that almost 95% of the business functionality is tested using a functional testing tool like Fitnesse. There is no way for the Product Owner or a Business Analyst to know what percentage of the business functionality is covered by the functional tests written by the business users. Using this technique we can help the business users identify the missing functional cases, so that they can add more functional tests to cover the business needs.

Another place where I find this useful is getting rid of dead code. Imagine a project which has been going on for a few years. Over that period you'll have a lot of functionality added, deleted or modified, and there might be fragments of code which are no longer required. If you are religious about unit testing I assume this code would have been unit tested, and as such would show up in the unit test coverage report as covered. But in reality there could be dead code which is not used anywhere except in unit tests. This usually happens with large projects in the maintenance phase, or ones where multiple teams are working on the same codebase. Sometimes people are sceptical about deleting code which is no longer required, even though all modern source control systems allow us to recover deleted code.

We can identify the dead code by measuring code coverage using the above-mentioned steps while exercising all parts of the application. We can either run all the functional tests, assuming they are up to date, or use all the functionality of the application in a live-like environment; that should also give us the coverage statistics. Then we identify the dead code and first comment it out, rerun the same process and ensure nothing is broken. If everything works fine you can go ahead and delete the code. You'll have saved yourself and your team a few bytes of space and made the code more maintainable. Maybe your boss will even be tempted to give you some goodies if this results in performance benefits at the application level.

Note: if you modify any code in the dll or exe after it was instrumented, you need to instrument it again so that the code modifications are taken into account. If you forget to do this, the coverage will be reported against the earlier instrumented version of the dll. As a best practice I suggest you do a clean rebuild of the solution after making the modification, delete all the previously instrumented assemblies and instrument them again.

Also don't forget to provide the -coverage parameter on the command line. If you don't supply this parameter, you'll get an error while trying to write the coverage data to the .coverage file.

Conclusion

I hope the examples provided here are good enough for you to realise the importance of using Code Coverage for functional tests. Not only can it help you reduce regression issues, it can also help the business users identify missing functional tests. Either way, the benefits will help you and your team build a good quality product. It is not a very tedious task to follow these 5 simple steps. If you wish, you can go one step further and use this as another metric in your build process to measure the quality of the build; the whole process can even be integrated into the Continuous Integration (CI) build. The possibilities are many, we just need to think a little outside the box.

There aren't any changes to the source code of the project I used in my earlier post on the Functional Test Anti Pattern, so I have not attached any solution files with this post.

Until next time, Happy Programming!

 

Further Reading

Here are some books I recommend related to the topics discussed in this blog post.


Code Quality – Different ways to measure it

In the software industry the word quality holds a different meaning for different people. It means one thing to a developer, another to a tester, something else to management, and something else again to the client. The perspective of each person is different, the tools each uses to measure quality are different, and the methods used to measure the metrics are also different. In today's post I am going to illustrate quality metrics for software developers.

Reasons for quality check

There is a lot of material on the net about the need for quality checks in software. The obvious need is to reduce the number of bugs and improve the reliability and efficiency of the system. Most of the time, when we say quality check, the first thing that comes to mind is the test team doing some manual or regression tests. That is the functional quality of the product or the system. But what I am interested in is the quality of the code being written by the developers. How do we measure the quality of the code? Here are some of the ways used for measuring code quality.

Method 1 : Code reviews / Peer Reviews

One of the oldest forms of quality check is the Peer Review. Once a piece of functionality is developed by one developer, it is reviewed by someone else in the team. Typically this is done by a senior developer who is aware of the functional requirement and technically sound enough to find deficiencies in the code; in some cases it is done by the technical lead or project lead. The idea is to get a second opinion and try to improve the code.

If your team follows Agile or Extreme Programming (XP) methodology for development, peer reviews can be reduced a lot by using a technique called Pair Programming. Most of the code is already getting reviewed by the person you are pairing with during the development phase itself.

There are benefits to this method as well as some disadvantages. One of the advantages is that it can help reduce bugs, because issues that would have led to bugs can be caught during the review. It can also help different people approach the same problem from different perspectives and choose the best solution. It also helps in evaluating a developer's performance in the annual appraisal.

Let's look at some of the disadvantages. The method is rendered ineffective if the person doing the review is biased and not open to new ideas or the latest trends in technology. Let me take an example of the biased approach. Assume an individual is used to a particular style of coding. While reviewing another developer's code he might expect other developers to follow the same coding style. As an example, consider the case of using a ternary operator for a simple if-else statement versus writing the if-else in its expanded form. Some people have their own preferences for code structure: they want all the variables, constants and methods organized into regions, while others don't prefer this style of coding.

I remember one of my ex-colleagues telling me about his experience with his technical lead. My ex-colleague had implemented a piece of functionality in C# using LINQ and lambdas. During the code review his tech lead asked him to rewrite the code without LINQ and lambdas, because the tech lead did not know anything about them. I am sure most of us have gone through a similar situation at some point in our careers. I know of plenty of instances where code was rejected because a variable was not named to the liking of the reviewer.

The point I am trying to make here is that manual code reviews are left to the discretion of the individual. What feels right for one person might not for someone else. They are time consuming and in reality can delay the delivery of the software if there is a lot of rework needed to address the reviewer's comments. If manual code reviews are used purely to improve the quality of the code, they are fine. But I know that in many organizations they are used as a measure to rate developers during annual reviews, which to me sounds absurd.

Long ago I had a personal experience when I was working as a Technical Lead on one of the projects. The project lead asked me to review each line of code and create an Excel sheet with the name of the developer responsible for each piece of code that needed refactoring. I thought it was an utter waste of time. First of all, reviewing each line of code written by your teammates suggests you don't trust their ability. And even if I had had all the time in the world to do this nonsense, I knew where it would have ended: it would have come up during the annual reviews, filtered by the name of the developer, with some pivot table applied to it to prove how useless the developer was. Another thing that stopped me was that the developers on the team would have been prompted to fix only the issues found in their own code and ignore other pieces of code, which is totally against the principle of Collective Code Ownership.

My preferred way of dealing with such issues is to come up with a well-defined coding standard agreed by all the team members. And even after that, if someone violates the coding standards, I would prefer to talk to the individual and try to understand the reasons, instead of creating an Excel sheet which doesn't hold any relevance after a short period of time. Having a coding standard might solve the majority of the issues, but there will always be edge cases which can go either way. How can we overcome such issues?

Method 2 : Use Standard tools for code analysis

One way I have used previously to address these issues is to follow industry standards and best practices. That way you are at least following in the footsteps of people who influence the majority. If your team is using the Microsoft platform and tools for development, you can take advantage of some of the tools provided by Microsoft. Some are built into Visual Studio itself, while others are available as add-ins or plugins. There are tools which can be used for checking the consistency of the code structure. Two such tools which instantly come to mind are FxCop and StyleCop: FxCop performs managed code analysis on compiled assemblies, while StyleCop statically analyses the C# source for style and consistency.

There are some rules which are common to both and some which are unique to each of them. One example where FxCop is helpful is enforcing Framework design guidelines such as "abstract types should not have public constructors". Similarly, StyleCop can be used to ensure that there are no multiple blank lines between method definitions in a file. Both of these tools can be integrated with the Continuous Integration (CI) build. A small illustration of the FxCop rule is shown below.
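For instance, the following sketch shows the kind of design issue FxCop's rules pick up (the class itself is hypothetical; the public-versus-protected constructor is the point):

    // An abstract type can never be instantiated directly, so a public
    // constructor is misleading; FxCop's design rules flag it and suggest
    // making it protected so only derived classes can call it.
    public abstract class ReportGenerator
    {
        private readonly string _title;

        // Would be flagged: public constructor on an abstract type.
        // public ReportGenerator(string title) { _title = title; }

        // Compliant version: construction restricted to derived classes.
        protected ReportGenerator(string title)
        {
            _title = title;
        }

        public abstract string Generate();
    }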

These tools are good for bringing consistency to the code structure. We can ensure that nobody comes up with their own naming style when the rest of the team is following a particular set of rules. Is that good enough? Let's assume you have integrated both FxCop and StyleCop into your build process and no errors are reported after the build completes. Does that mean the code is quality code? Definitely not. Just having code that passes the FxCop and StyleCop rules does not make it good code. There have to be additional measures to ensure the quality of the code.

Method 3 : Use Code Coverage

I assume most teams are using Test Driven Development (TDD) or at least plan to do so in future; if not, you can skip this section. When we are developing software using TDD, we can make use of the Code Coverage feature to measure the percentage of code covered by unit tests. Again, this feature can be integrated into the build process. A minimum of 80% code coverage is generally considered the standard, and nowadays I find teams striving to achieve 100% code coverage. This link here demonstrates how to get started with Code Coverage in Visual Studio.

The Code Coverage feature is available only in the Ultimate and Premium editions of Visual Studio. If you are using another edition, you can use free tools which are counterparts of Code Coverage. I have used TestDriven.Net, which integrated even with the VS2003 IDE; if I am not wrong, Code Coverage was introduced with the Visual Studio 2005 release. If you are still one of those people using VS 2003, I would recommend TestDriven.Net. We had integrated it with our build in a couple of projects I worked on at my previous organization.

There is also an open source alternative called PartCover which I haven't used personally but have heard about from a few people. I can't comment much on it as I haven't had any experience using it, but if you are interested you can give it a try.

There are some critical issues related to code coverage: using it as a measure of quality can be quite misleading. Let me share my own experience here. In one of our projects we were using Code Coverage to decide the build quality. If the coverage was below a certain percentage the CI server would fail the build. This was to ensure that all newly built software was properly unit tested. But as always, there are people who know how to game such situations. The project lead of that particular project did not understand the importance of unit tests and the value we got from them. He assigned a junior programmer to write unit tests for all the classes which were not covered. The junior programmer worked alone on those tests due to tight deadlines. He made sure most of the classes were covered so the build stayed green, but the tests he had written were completely useless, to say the least. Most of them did not have any asserts to verify the expected functionality. In essence, all he did was make the unit tests execute the production code without really verifying anything.

This is an abuse of TDD. If your team does not believe in the principles of TDD there is no point writing unit tests just to pass the build. It's a waste of time and effort and, most annoyingly, a maintenance headache. Other developers on the team had to spend a huge amount of time fixing those tests when the issue was identified, and by then the person who wrote them had already left the organization. The sketch below shows the difference between such a test and a meaningful one.
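To make the point concrete, here is a sketch (MSTest-style attributes for illustration, reusing the UserManagementService example from the Fitnesse post) of a coverage-only test versus one that actually verifies behaviour:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using UserManagementServices;

    [TestClass]
    public class UserManagementServiceTests
    {
        // Inflates the coverage numbers but verifies nothing:
        // it merely executes the production code.
        [TestMethod]
        public void GetUserdetails_JustExecutesTheCode()
        {
            var service = new UserManagementService();
            service.GetUserdetails(1);
        }

        // Exercises the same code but asserts on the outcome, so a
        // regression in the mapping logic will actually fail the build.
        [TestMethod]
        public void GetUserdetails_ReturnsConcatenatedUserName()
        {
            var service = new UserManagementService();
            UserDto user = service.GetUserdetails(1);

            Assert.AreEqual(1, user.Id);
            Assert.AreEqual("James Bond", user.UserName);
        }
    }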

My suggestion is that if you are using TDD, make sure you write tests which are meaningful and help you validate the code. What's the point of having hundreds of tests if none of them verify anything in the end? Now assume you have the best code coverage statistics for all your projects; does that mean you have good quality code? I am not entirely convinced. You can have the best coverage, but the developer might have written a single class file with 1000 lines of executable code, or methods with 500 lines of executable code. This scenario was quite common with developers who started doing object oriented development after spending years writing procedural code. Believe me, they are not the only ones writing such code; I have seen such methods and classes written by people who have worked with only object oriented languages from day one of their career. It's a bit sad that people don't really make use of object oriented features to come up with correctly structured code.

You might ask what's wrong with such code. The answer is not difficult: it's a maintenance nightmare. You might know what is happening inside the class or method while developing it, but imagine what will happen when you have to revisit the code six months, a year, or even two years later. Spare a thought for the developer who will have to refactor that piece of code after you have moved on to a different project or a different company. So how do we take care of this situation?

Method 4 : Use industry standards such as Cyclomatic Complexity

Long ago I wrote a couple of posts on the different metrics available in Visual Studio and how to use Code Metrics. You might want to read them before continuing further.

One of the commonly used metrics to gauge quality is Cyclomatic Complexity. It is a measure of the decision points a function has to evaluate, where decision points come from logical statements such as if statements, switch statements, for loops, while loops and so on. At a class level it is the sum of the Cyclomatic Complexity of all the methods, and similarly at the module or assembly level it is the aggregate over all child elements. Cyclomatic Complexity can tell us approximately how many unit tests will be required to fully cover all the code paths within a function. Different teams have different acceptance levels for Cyclomatic Complexity; in general the recommended value is 40. If this value is higher, we should try to refactor the method to make it more readable and maintainable. It is far easier to understand a method of 10 to 15 lines than to go through a page full of code and figure out what is going on. A small example of how the count adds up is shown below.
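As a rough illustration (tools differ slightly in exactly what they count, so treat the numbers as approximate), here is how decision points accumulate in a small hypothetical method:

    public static class OrderClassifier
    {
        // Base complexity of any method is 1; each branch noted below adds one.
        public static string Classify(decimal total, bool isPreferredCustomer)
        {
            if (total <= 0)                                // +1
            {
                throw new System.ArgumentOutOfRangeException("total");
            }

            if (isPreferredCustomer && total > 1000)       // +1 for the if, +1 for &&
            {
                return "VIP";
            }

            for (int i = 0; i < 3; i++)                    // +1
            {
                // retry some lookup...
            }

            return total > 500 ? "Large" : "Standard";     // +1 for the ternary
        }
        // Roughly 6 in total, which also hints at how many unit tests are
        // needed to cover every path through the method.
    }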

There are no hard and fast rules for Lines of Code (LoC) per method or class, but a general guideline is to make methods small and self-explanatory. If we follow that, most of our methods should have between 8 and 15 lines of code, and a class should generally be around 100 to 150 lines. You can argue that there can be classes with more than 200 lines none of whose methods exceed 15 lines; this does happen in real life. You'll find that the Cyclomatic Complexity for such classes is higher compared to smaller classes.

Can we rely entirely on Cyclomatic Complexity as our preferred measure of code quality? Not entirely, because we'll come across situations where we cannot split methods and classes beyond a certain extent and their complexity stays on the higher side. So how can we arrive at a reliable measure? In the Visual Studio Code Metrics report we can also find the Maintainability Index, which indicates how easy or difficult it would be to maintain a particular method, class or assembly. Along with the Maintainability Index we can use another metric called Depth of Inheritance, a measure of how many levels of inheritance are involved. All these metrics can be helpful in judging the quality of the code. I remember once refactoring a very lengthy method with a Cyclomatic Complexity of more than 300 into a smaller one with fewer than 50 complexity points. These metrics are available from within the Visual Studio development environment and are a good starting point for building quality code.

Once the whole team becomes comfortable with these tools and techniques, we can go a step further and automate this using a dedicated tool like NDepend, which can give us numerous other quality metrics including Cyclomatic Complexity. NDepend supports a language called Code Query Language (CQL) which can be used to query code attributes and define custom rules. For example, we can define a rule in NDepend which reports a warning or an error if a method has more than 50 lines of code and a Cyclomatic Complexity of more than 50; a sketch of such a rule is shown below. As of today NDepend can measure around 82 different code quality metrics. All this comes at a cost: NDepend is a commercial product and you need to pay for the licences. Having used the trial version, I think it's worth investing in NDepend for the long-term benefits.
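As an illustration only (the exact query syntax depends on the NDepend version, so treat this as an assumption rather than a verbatim rule), the LINQ-style form of such a rule might look roughly like this:

    // warn when a method is both long and complex
    warnif count > 0
    from m in Application.Methods
    where m.NbLinesOfCode > 50 && m.CyclomaticComplexity > 50
    select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }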

Conclusion

As we saw, code quality can be measured in different ways, and each measure validates a different aspect of the code. We can never rely on one single tool or technique to measure code quality. In my opinion we should use a combination of two or more metrics so that the shortcomings of one technique are covered by another. The biggest advantage of using a standard metric like Cyclomatic Complexity is that it is not left to the discretion and interpretation of one or two individuals: we'll get the same result if we measure it on the same piece of code 100 or even 1000 times.

We cannot rely 100 percent on the tools, as we saw in the example I mentioned above about unit test code coverage. We need to get the balance right between manual and automated tools and techniques, and the right mix between the tools we use. These tools and techniques are there to help you write better code; if there are 10 tools available in the market we shouldn't be using every one of them, and if a particular tool slows you and your team down instead of helping, it's better to look for alternatives. I am sure there are more tools and techniques out there to improve the quality of code, and I would like to learn more about them.

Until next time, Happy Programming!

Further Reading

Here are some books I recommend related to the topics discussed in this blog post.


Acceptance Test Driven Development and Anti Pattern

Background

Recently I have been working on projects that use Acceptance Test Driven Development (ATDD) along with Test Driven Development (TDD). Fitnesse is the tool used for writing the acceptance tests. For those who are unaware of acceptance tests, in layman's terms these are tests which validate the business functionality; usually they are used to verify the output of service calls. These tests are different from the normal unit tests written by developers: acceptance tests are mainly written by business people. In most cases the business folks are represented by Business Analysts (BAs), and it is the BAs who write these tests.

In the software industry we have patterns and anti patterns. With all technologies and programming languages we find good patterns, and others which are not used in the proper sense; such patterns are commonly called Anti Patterns. I have come across a few anti patterns while working with ATDD. Let me concentrate on one of them here.

A case of Acceptance Test Anti Pattern

Imagine we have a user in the system and need to display certain details related to the user on screen. Let's say we need to display the user ID which uniquely identifies the user, his first name and his last name. The Business Analyst working on this user story will come up with an acceptance test which verifies the user details returned by the service call. Everything seems very simple. But business users have a habit of making easy things difficult, or perhaps we, the developer community, have made it a habit of complicating simple user requirements.

Imagine that in the above scenario the business users want to see the user name in concatenated form as first name + last name. In the Fitnesse test the BA would set the expectation in the same manner. Here is a screenshot of the test the business analyst would write in Fitnesse, followed by a sketch of what that table looks like.

[Screenshot: Fitnesse test expecting the concatenated user name]
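In wiki form the expectation would look roughly like this (a sketch based on the query fixture shown later in this post; the column names are assumptions):

!|query:UserFixture|
|Id|UserName|
|1|James Bond|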

We can develop the required functionality in multiple ways.

Approach 1 : The simplest way to fulfil this expectation is to return the result from the service in the concatenated form, and the Fitnesse tests will all be green.

Approach 2 : Use an extension method on the client side.

Approach 3 : Use separation of concerns principle and make use of ViewModel from MVVM pattern.

I would say more than 90% of developers would follow the first approach. Technically there is nothing wrong with it, but from an architectural point of view, as well as from the point of view of best practices, it has some serious flaws. Let's dive into the code to see what and how, starting with the service implementation.

namespace UserManagementServices
{
    public class UserManagementService : IUserManagementService
    {
        public UserDto GetUserdetails(int userId)
        {
            User domainUser = new User
                {
                    Id = userId,
                    FirstName = "James",
                    LastName = "Bond"
                };

            return new UserDto
                {
                    Id = domainUser.Id,
                    UserName = string.Format("{0} {1}", domainUser.FirstName, domainUser.LastName)
                };
        }
    }
}

The service exposes a method named GetUserdetails which takes the userId as the input parameter. We then instantiate a domain object, which would ideally be returned from the persistence layer using some sort of ORM like Entity Framework or NHibernate, or at least a hand-coded Data Access Layer (DAL). Those concerns are outside the scope of this demo, hence I am instantiating the domain object directly. We then transform this domain object into a data transfer object (DTO) to return to the caller of the service. Note how the domain object has separate properties for FirstName and LastName, but the DTO has only one property called UserName.

Let's look at the code which acts as the glue between the Fitnesse test and the service call. We call these Fixtures. Here is the implementation of UserFixture.

namespace AntiPattern1.FitnesseFixtures
{
    public class UserFixture
    {
        public List<Object> Query()
        {
            UserManagementServices.UserManagementService service = new UserManagementServices.UserManagementService();

            UserDto userdetails = service.GetUserdetails(1);

            return new List<Object>
                {
                    new List<object>
                        {
                            new List<string> { "Id", userdetails.Id.ToString() },
                            new List<object> { "UserName", userdetails.UserName }
                        }
                };
        }
    }
}

The code is self-explanatory. We create an instance of the service and call GetUserdetails with a userId of 1. The result is then transformed into the format Fitnesse requires to parse the results. Let's run the Fitnesse test and see the result.

[Screenshot: Fitnesse result with the concatenated user name]

The service works as expected, so let's make use of it in a client. Let's build a Silverlight UI to display these results.

            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition />
                    <ColumnDefinition />
                </Grid.ColumnDefinitions>

                <TextBlock Text="Id"
                          Grid.Row="0"
                          Grid.Column="0" />

                <TextBlock Text="{Binding Id}"
                          Grid.Row="0"
                          Grid.Column="1" />

                <TextBlock Text="User Name"
                          Grid.Row="1"
                          Grid.Column="0" />

                <TextBlock Text="{Binding UserName}"
                          Grid.Row="1"
                          Grid.Column="1" />

            </Grid>

We have the bare minimum of UI controls: two labels and two properties retrieved from the service. Let's look at the output.


[Screenshot: Silverlight output with the concatenated user name]


I haven't paid much attention to the layout of the fields here; as can be seen from the above screenshot, the user name is displayed as expected in the concatenated form "James Bond". Everything is working as expected.


But I personally have a problem with this approach. I believe we are mixing two concerns here. In every article related to Domain Driven Design (DDD), we'll find that the object model we define for our application should be as close as possible to the domain model. In that sense we have ensured that the User domain object in our application defines two properties for individually storing the first and last name. But the other aspect relates to the presentation layer: the user wants to see concatenated data, and we are doing the concatenation in the service layer. This is where separation of concerns should address our needs.


If we use the MVVM pattern, the concatenation should be taken care of by the view model, and the service should return two separate fields for the first and last name. Many people take the shortcut of using an extension method, or a partial class which exposes an additional property that does the same (see the sketch below). To me a partial class looks like overkill; it's confusing in the first place and also a maintenance headache. I strongly recommend managing this through the view model.
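For illustration, the client-side shortcut might look something like the following extension method (a hypothetical sketch; the class and method names are assumptions):

    public static class UserDtoExtensions
    {
        // Client-side concatenation of the two fields returned by the service.
        public static string FullName(this UserDto user)
        {
            return string.Format("{0} {1}", user.FirstName, user.LastName);
        }
    }

Since XAML bindings resolve against properties rather than extension methods, this shortcut usually ends up paired with a partial class or a wrapper anyway, which is part of why the view model approach feels cleaner.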


To better understand my viewpoint, let's consider a use case where the user management service is accessed by another client, this time built with WPF. In this application the business users need the same data but the representation is different. Let's look at the UI for this WPF application.


[Screenshot: WPF client for user management]


Oops, the users want to see the first name and last name separately. What do we do now? Add another method to the service which returns the first name and last name separately? In future we might have yet another client who wants an altogether different representation of the same data. The user details could be displayed in a desktop application, inside a desktop browser, on a smart phone or on a tablet. The possibilities are many, and with each permutation and combination we get different form factors and screen resolutions. Not all of them will be best suited to displaying the user's full name; there will be situations where we need to display the names separately.


Worst case, even if there are no other consumers of the service, the same business users might decide to represent the details separately. If we concatenate the names in the service layer, we are forced to change the service implementation whenever the client's needs change. I feel this is against the principles of service orientation and should be strictly avoided by people practising Service Oriented Architecture (SOA).


Every time we think of refactoring the service code, the first and foremost question that should come to mind is what the impact of the refactoring will be if the same service is used by more than one client. That should give us a clear indication of whether the change belongs in the service layer or on the client side. The rule of thumb is that if the change needs to be consistent across all clients, it should be done in the service layer; if only one or two clients are impacted, it's better to do it on the client side.


My preferred way of solving this issue is to refactor the service, remove the concatenation logic from the service layer and handle it on the client side. Here is how I would refactor the service implementation.



        public UserDto GetUserdetails(int userId)
        {
            User domainUser = new User
                {
                    Id = userId,
                    FirstName = "James",
                    LastName = "Bond"
                };

            return new UserDto
                {
                    Id = domainUser.Id,
                    FirstName = domainUser.FirstName,
                    LastName = domainUser.LastName
                };
        }


The UserDto now has two separate properties. Let's use this in the WPF client.

    <StackPanel>
        <Label Content="Id" />
        <TextBlock Text="{Binding Id}" />
        <Label Content="First Name" />
        <TextBlock Text="{Binding FirstName}" />
        <Label Content="Last Name" />
        <TextBlock Text="{Binding LastName}" />
    </StackPanel>

The service refactoring has fixed the issue for the WPF client, but what about the Silverlight client which needs to display the name in the original concatenated form? Currently this is what we are doing to bind the results of the service call to the UI:



        private void Home_Loaded(object sender, System.Windows.RoutedEventArgs e)
        {
            UserManagementExposedServiceClient client = new UserManagementExposedServiceClient();

            client.GetUserdetailsCompleted += (o, args) => client_GetUserdetailsCompleted(o, args);

            client.GetUserdetailsAsync(10);
        }

        private void client_GetUserdetailsCompleted(object sender, GetUserdetailsCompletedEventArgs args)
        {
            UserDto userDto = args.Result;
            this.DataContext = userDto;
        }


We cannot continue doing this because there is no longer a UserName property on the DTO which can be data-bound to the TextBlock. So we introduce a view model which mediates between the view and the model. The view model is set as the data context of the view as follows:



        private void client_GetUserdetailsCompleted(object sender, GetUserdetailsCompletedEventArgs args)
        {
            UserDto userDto = args.Result;

            UserViewModel viewModel = new UserViewModel(userDto);

            DataContext = viewModel;
        }


As we can see from the above change, the view model takes the UserDto as a constructor argument. The view is not even aware of the change that has happened, and we can use the view model to format the data in whatever way we like. Here is the view model implementation.



    public class UserViewModel
    {
        private readonly UserDto _userDto;

        public UserViewModel(UserDto userDto)
        {
            _userDto = userDto;
        }

        public int Id
        {
            get
            {
                return _userDto.Id;
            }
        }

        public string UserName
        {
            get
            {
                return string.Format("{0} {1}", _userDto.FirstName, _userDto.LastName);
            }
        }
    }


There is nothing special in this code; it is all self-explanatory. So we have overcome the problem in the Silverlight application using the ViewModel approach. What about the Fitnesse test we started with? If we run it in its current state it will fail, because the DTO no longer has the UserName property. How do we make the Fitnesse test pass?


We could do the concatenation in the fixture code and the test would be green. But that is not recommended, as it duplicates the code and embeds logic in the fixture. The fixture should be used as glue between the test and the service; there should not be any logic inside it. If we look again at the purpose of Fitnesse, it is to validate the output of the service. In its current state the service returns FirstName and LastName but the Fitnesse test expects the concatenated text. The Fitnesse test should really be validating that the service returns the first and last names as expected. I updated the Fitnesse test to verify the output as follows:


[Screenshot: Fitnesse test without concatenation]


Conclusion


With the latest advances in client-side computing, views are becoming more and more dumb. One common approach is to follow an architectural pattern like Model View Presenter (MVP) or Model View ViewModel (MVVM), which relies heavily on data binding capabilities. These patterns clearly help us separate the UI concerns from the business logic. As a result we can safely say that if we validate the output of the service, we can be reasonably sure that the binding will take care of the presentation concerns and the functional data will be represented correctly.


I fully understand and support the need for making software more and more testable. The more ways we can test the software, the better understanding we have of the system we are building, and different perspectives help us discover different issues related to its quality. The more testing, and especially automated testing, we build around the software, the more it helps reduce regression issues and enhance the quality of the product or system being developed. All this is possible only if we use the tools at our disposal in the right manner.


As one of my colleagues said, having a unit test doesn't mean you don't write bad code; I would say having an acceptance test doesn't mean we have tested the business functionality in the right manner. In my opinion there has to be the right balance. A tool like Fitnesse should be used for testing service interfaces; it is not a tool for GUI-based testing. If you are using Fitnesse for that, I am sure you'll have to violate many best practices and industry standards, and even if you manage to follow them you might have to write a lot of boilerplate code to make the Fitnesse tests pass. One way or the other it can lead to maintenance problems in the long run. My suggestion is to keep things simple and efficient. GUI-based testing can be done with a browser automation tool like Selenium. We need to use the right tool for the right purpose to get the best results possible.


As always I have uploaded the complete working solution, which can be downloaded for further reference. Note that I have not included the Fitnesse binaries or the Fitnesse test code in the zip file. The test is only three lines, and I preferred to just copy those lines here, because you'll need to set up the infrastructure for running Fitnesse tests, which is out of the scope of this post. So here are the lines from the Fitnesse test:


!|import|
|AntiPattern1.FitnesseFixtures|


!|query:UserFixture|
|Id|FirstName|LastName|
|1|James|Bond|


Until next time, Happy Programming!


Further Reading


Here are some books I recommend related to the topics discussed in this blog post.








NGTweet Part 16 – Implicit Data Templates

In the 9th part of this NGTweet Silverlight learning series I demonstrated the use of Data Triggers along with the Microsoft.Expressions.Interaction dll. In that post I applied different settings to the properties of the images based on whether a tweet was a normal tweet or a retweet. Although the code was doing what was expected, I personally felt it was too complicated for such a small result. That was in Silverlight 4. In Silverlight 5 we have support for Implicit Data Templates. This is not to be mistaken for Implicit Styles, covered in part 13.

How to use Implicit Data Templates in Silverlight 5

We already have a use case for Implicit Data Templates. We have two types of tweets in the NGTweet application: the basic one is the normal tweet and the other is the retweet. We display the retweet with the distinction that the original profile image is blurred and the retweeting user's profile image is overlaid on top of it. The screen name of the user is also displayed differently in the case of a retweet, and the profile image sizes are different as well.

The way Implicit Data Templates work is very similar to Implicit Styles. Implicit styles are applicable to controls like TextBox, TextBlock and so on. Likewise, we can define a Data Template for a class, and when that class is used in the application it will automatically get the Data Template associated with it. To understand this better, let's look at the code we used in Part 9.

                <i:Interaction.Triggers>
                    <ei:DataTrigger Binding="{Binding Path=IsRetweet}"
                                   Value="True">
                        <ei:ChangePropertyAction TargetObject="{Binding ElementName=ProfileImage}"
                                                TargetName="Height"
                                                Value="30"
                                                PropertyName="Height" />
                        <ei:ChangePropertyAction TargetObject="{Binding ElementName=ProfileImage}"
                                                TargetName="Width"
                                                Value="30"
                                                PropertyName="Width" />
                        <ei:ChangePropertyAction TargetObject="{Binding ElementName=ProfileImage}"
                                                TargetName="Margin"
                                                Value="15, 40, 10, 10"
                                                PropertyName="Margin" />
                        <ei:ChangePropertyAction TargetObject="{Binding ElementName=ProfileImage}"
                                                TargetName="VerticalAlignment"
                                                Value="Top"
                                                PropertyName="VerticalAlignment" />
                        <ei:ChangePropertyAction TargetObject="{Binding ElementName=ProfileImage}"
                                                TargetName="HorizontalAlignment"
                                                Value="Left"
                                                PropertyName="HorizontalAlignment" />

                        <ei:ChangePropertyAction TargetObject="{Binding ElementName=RoundedCornerRectangle}"
                                                TargetName="Rect"
                                                Value="0,0,30,30"
                                                PropertyName="Rect" />
                    </ei:DataTrigger>

As we can see from the above snippet, we are setting different values for properties like Height and Width of the profile image based on the value of the IsRetweet property from the view model. I have copied only the settings used when IsRetweet is true; we have similar settings with different values for when it is false. One problem I have with the above code is that it's not natural XAML syntax. We'll refactor this piece of code into a manageable chunk using an implicit data template.


In the above piece of code, the distinction is based on whether the tweet is normal or a retweet. Currently our TweeterStatusViewModel is the one deciding whether to show the ScreenName as just the profile name or as a formatted name, and similar logic exists to populate the OriginalProfileImage property. In the case of a normal tweet, the OriginalProfileImage property is not applicable.


In its current form the TweeterStatusViewModel violates the Single Responsibility Principle (SRP): it has multiple reasons to change. Let's refactor this a bit. We'll do some cleanup and rename TweeterStatusViewModel to TweetViewModel to represent a basic tweet, and add another class, RetweetViewModel, which extends TweetViewModel. Properties like Tweet, CreatedDate and Id are common to both a tweet and a retweet. ScreenName is applicable to both but is displayed differently. Let's look at the definition of ScreenName in the base class TweetViewModel.



        public virtual string ScreenName
        {
            get
            {
                return TweeterStatus.User.ScreenName;
            }
        }


We define it as a virtual property so that the derived class can provide its own implementation. Let's look at the implementation of the derived class RetweetViewModel.



    public class RetweetViewModel : TweetViewModel
    {
        public RetweetViewModel(NGTweeterStatus tweeterStatus)
            : base(tweeterStatus)
        {
        }

        public string OriginalProfileImageSource
        {
            get
            {
                return TweeterStatus.RetweetedStatus.User.ProfileImageUrl;
            }
        }

        public override string ScreenName
        {
            get
            {
                return string.Format(
                    "{0}, (RT by {1})",
                    TweeterStatus.RetweetedStatus.User.ScreenName,
                    TweeterStatus.User.ScreenName);
            }
        }
    }


As we can see from the above code, this class contains only two properties. The OriginalProfileImageSource returns the respective image URL. Note that we did not have to do a null check to see if the RetweetedStatus is null. The code looks much cleaner this way. Similarly for the ScreenName, we override it to return the formatted string. Once again we don’t do the null check like we used to do in Part 9.


With this refactoring in place we also need to modify the logic for populating the correct view model. This needs to happen when we add the view model to the list. Based on whether RetweetedStatus is null or not, we instantiate the correct view model type as shown below:



        public void AddNewTweets(ObservableCollection<NGTweeterStatus> tweeterStatuses)
        {
            tweeterStatuses.Reverse();
            DispatcherHelper.CheckBeginInvokeOnUI(
                () => tweeterStatuses.OrderBy(ts => ts.CreatedDate).ToList().ForEach(
                    t =>
                    {
                        if (t.RetweetedStatus == null)
                        {
                            _tweeterStatusViewModels.Insert(0, new TweetViewModel(t));
                        }
                        else
                        {
                            _tweeterStatusViewModels.Insert(0, new RetweetViewModel(t));
                        }
                    }));
        }


Now it's time to modify the data template. In our previous examples we had defined a data template and applied it to the ItemTemplate of the list box. To use an implicit data template, we follow the same process we used for implicit styles: we create a data template but do not assign a Key to it. Instead we set the DataType property to specify which data type will use the template. Let's define the Implicit Data Template for TweetViewModel.

    <DataTemplate DataType="model:TweetViewModel">

        <Border Style="{StaticResource ThickBorderStyle}">

            <Grid>

                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" />
                    <ColumnDefinition Width="60*" />
                    <ColumnDefinition Width="20*" />
                </Grid.ColumnDefinitions>

                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>

                <Image Source="{Binding ProfileImageSource, Mode=OneTime}"
                      x:Name="ProfileImage"
                      Margin="5"
                      Height="50"
                      Width="50"
                      HorizontalAlignment="Center"
                      VerticalAlignment="Top"
                      Grid.RowSpan="2"
                      Grid.Row="0"
                      Grid.Column="0">
                    <Image.Clip>
                        <RectangleGeometry Rect="0,0,50,50"
                                          RadiusX="5"
                                          RadiusY="5" />
                    </Image.Clip>

                </Image>

                <TextBlock Text="{Binding ScreenName, Mode=OneTime}"
                          TextTrimming="WordEllipsis"
                          FontWeight="Bold"
                          Grid.Row="0"
                          Grid.Column="1">
                           <ToolTipService.ToolTip>
                                <ToolTip Content="{Binding ScreenName, Mode=OneTime}" />
                           </ToolTipService.ToolTip>

                </TextBlock>

                <TextBlock Text="{Binding CreatedDate, Mode=OneWay, Converter={StaticResource dateTimeToRelativeTimeConvertor}}"
                          TextAlignment="Right"
                          FontWeight="Thin"
                          FontStyle="Italic"
                          FontSize="10"
                          Grid.Row="0"
                          Grid.Column="2" />

                <TextBlock Text="{Binding Tweet, Mode=OneTime}"
                          TextWrapping="Wrap"
                          VerticalAlignment="Top"
                          Grid.Row="1"
                          Grid.Column="1"
                          Grid.ColumnSpan="2"
                          Margin="2, 0, 2, 2" />
                <StackPanel Grid.Row="2"
                           Grid.Column="1"
                           Grid.ColumnSpan="3">
                    <Button Command="{Binding RetweetCommand, Mode=OneTime}"
                           Height="20"
                           Width="20">
                        <Button.Content>
                            <Image Source="../Assets/Images/retweet.png"
                                  Height="16"
                                  Width="16" />
                        </Button.Content>
                    </Button>
                </StackPanel>
            </Grid>

        </Border>
    </DataTemplate>

Note the first line of the definition above. Because our view models reside in a different assembly, we add a XAML namespace alias called model. The rest of the template is the standard definition. It's much cleaner and easier to understand as well as maintain.


Similar to this we have an Implicit Data Template for RetweetViewModel.

    <DataTemplate  DataType="model:RetweetViewModel">

        <Border Style="{StaticResource ThickBorderStyle}">

            <Grid>

                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" />
                    <ColumnDefinition Width="60*" />
                    <ColumnDefinition Width="20*" />
                </Grid.ColumnDefinitions>

                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>

                <Image Source="{Binding OriginalProfileImageSource, Mode=OneTime}"
                      Visibility="{Binding IsRetweet, Converter={StaticResource converter}}"
                      Height="50"
                      Width="50"
                      Margin="5"
                      Grid.RowSpan="2"
                      Grid.Row="0"
                      Grid.Column="0"
                      HorizontalAlignment="Center"
                      VerticalAlignment="Top">
                    <Image.Clip>
                        <RectangleGeometry RadiusX="5"
                                          RadiusY="5"
                                          Rect="0,0,50,50" />
                    </Image.Clip>
                    <Image.Effect>
                        <BlurEffect />
                    </Image.Effect>
                </Image>

                <Image Source="{Binding ProfileImageSource, Mode=OneTime}"
                      x:Name="ProfileImage"
                      Height="30"
                      Width="30"
                      Margin="15, 40, 10, 10"
                      HorizontalAlignment="Left"
                      VerticalAlignment="Top"
                      Grid.RowSpan="2"
                      Grid.Row="0"
                      Grid.Column="0">
                    <Image.Clip>
                        <RectangleGeometry Rect="0,0,30,30"
                                          RadiusX="5"
                                          RadiusY="5" />
                    </Image.Clip>

                </Image>

                <TextBlock Text="{Binding ScreenName, Mode=OneTime}"
                          TextTrimming="WordEllipsis"
                          FontWeight="Bold"
                          Grid.Row="0"
                          Grid.Column="1">
                            <ToolTipService.ToolTip>
                                <ToolTip Content="{Binding ScreenName, Mode=OneTime}" />
                            </ToolTipService.ToolTip>

                </TextBlock>

                <TextBlock Text="{Binding CreatedDate, Mode=OneWay, Converter={StaticResource dateTimeToRelativeTimeConvertor}}"
                          TextAlignment="Right"
                          FontWeight="Thin"
                          FontStyle="Italic"
                          FontSize="10"
                          Grid.Row="0"
                          Grid.Column="2" />

                <TextBlock Text="{Binding Tweet, Mode=OneTime}"
                          TextWrapping="Wrap"
                          VerticalAlignment="Top"
                          Grid.Row="1"
                          Grid.Column="1"
                          Grid.ColumnSpan="2"
                          Margin="2, 0, 2, 2" />
                <StackPanel Grid.Row="2"
                           Grid.Column="1"
                           Grid.ColumnSpan="3">
                    <Button Command="{Binding RetweetCommand, Mode=OneTime}"
                           Height="20"
                           Width="20">
                        <Button.Content>
                            <Image Source="../Assets/Images/retweet.png"
                                  Height="16"
                                  Width="16" />
                        </Button.Content>
                    </Button>
                </StackPanel>
            </Grid>

        </Border>
    </DataTemplate>

This has the additional markup for displaying the original profile image; the rest of the markup is almost the same, with minor changes to the properties of the profile image. Now that we have the data templates defined for both the tweet and retweet view models, let's look at how to use them in the view, starting with the ListBox which displays these items.

                <ListBox Width="400"
                        Height="500"
                        HorizontalAlignment="Center"
                        HorizontalContentAlignment="Stretch"
                        ScrollViewer.HorizontalScrollBarVisibility="Disabled"
                        ItemContainerStyle="{StaticResource ListBoxItemStyle}"
                        ItemTemplate="{StaticResource TimeLineListDataTemplate}"
                        ItemsSource="{Binding TweeterStatusViewModels}" />

This has the ItemTemplate specified for the items. Since we want to use the Implicit Data Templates defined for TweetViewModel and RetweetViewModel, we need to get rid of the explicit item template as below:

                <ListBox Width="400"
                        Height="500"
                        HorizontalAlignment="Center"
                        HorizontalContentAlignment="Stretch"
                        ScrollViewer.HorizontalScrollBarVisibility="Disabled"
                        ItemContainerStyle="{StaticResource ListBoxItemStyle}"
                        ItemsSource="{Binding TweeterStatusViewModels}" />

And the final result is the same as we had before:


[Screenshot: NGTweet with implicit data templates]


Conclusion


The template features of Silverlight, WPF and Windows Phone 7 are much the same, and with the introduction of Implicit Data Templates in Silverlight 5 the differences between the WPF and Silverlight versions have been reduced to a great extent. Implicit data templates are very helpful when we have inheritance relationships and need to display each type differently. A very common example found in many other blog posts is that of an Employee and a Manager: Employee is the base class and Manager is a variant of it which needs to be displayed differently. Along the same lines, I hope this post was helpful in making the use of implicit data templates clear.


As always I have uploaded the complete working solution to Dropbox.


Until next time, Happy Programming!


Further Reading


Here are some books I recommend related to the topics discussed in this blog post.






