My experience building my first Windows Phone app

 

Recently I published my first Windows Phone 7 application in the Marketplace. The app is named Singapore Public Holidays. I also adapted it to my country of origin, India, and created an Indian version called Indian Public Holidays. This post is about my experience in building my first Windows Phone 7 app.

Why this app?

Before I decided to develop the Singapore Public Holidays app, a couple of other options came to my mind. I thought of extending the NGTweet app which I had built as part of the Silverlight Learning Series. Another idea was to use some of the REST-based public APIs and build an app around them, something like a Thought Of The Day service which could pull quotes from any of the public quote services. Yet another idea was to build an app for a techie site, such as a StackOverflow client for Windows Phone. All these ideas were either already implemented or slightly complex. So I decided to build something very minimal in features that would not depend too much on external data, yet would still be useful. That's when the idea of Singapore Public Holidays came to my mind.

The holiday list for the year 2012 is published online by the Ministry of Manpower of Singapore. I decided to build a small app which would list these holidays. Instead of just showing the static dates, I thought displaying the number of days remaining from the current date would be interesting for users. As a user it would also be helpful to be able to set a reminder for a holiday, so reminder functionality was added in the updated version.
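The days-remaining display is simple date arithmetic. Here is a minimal sketch of the idea (the holiday date and name below are illustrative; the real app reads the published MOM list):

```csharp
using System;

class HolidayCountdown
{
    // Returns the number of whole days from today until the holiday,
    // or a negative value if the holiday has already passed.
    public static int DaysRemaining(DateTime holiday, DateTime today)
    {
        return (holiday.Date - today.Date).Days;
    }

    static void Main()
    {
        // Illustrative entry only, not the official 2012 list.
        DateTime nationalDay = new DateTime(2012, 8, 9);
        DateTime today = new DateTime(2012, 8, 1);
        Console.WriteLine("{0} day(s) to go", DaysRemaining(nationalDay, today));
    }
}
```

Taking `.Date` on both sides keeps the count in whole days regardless of the time of day the user opens the app.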

Just after I finished the first version of the app, Jesse Liberty suggested in his post published in the MSDN Magazine January 2012 issue that "your first app should be interesting enough to be meaningful, yet simple enough to get underway". I think my effort was in line with Jesse's suggestion. Instead of spending weeks on developing an app, I spent a few hours a day to get started. My intention was to understand the whole process of creating Windows Phone apps: what is required to publish an app to the Marketplace and how long the complete process takes. Let's see how I managed to do this.

Different steps involved in developing and publishing the app

First and foremost, we need a development environment to get started with building the intended application. Visual Studio 2010 was my preferred choice. Install the tools required for developing Windows Phone applications with Visual Studio. The MSDN article Create Your First Silverlight Application for Windows Phone is a good starting point.

Apart from the standard Silverlight for Windows Phone features, we might need additional controls. I wanted to use controls like ToggleButton which are not natively supported by the Silverlight for Windows Phone SDK, so I downloaded the Windows Phone Toolkit which provides them. It follows the same idea as the toolkit available for desktop Silverlight.

Once we have all the tools, we develop the application; this will take time depending on its complexity. Once the application is developed, we need to publish it to a central place where others can download it and install it on their Windows Phone devices. The Marketplace is that central place which allows us to publish applications to end users.

Before we can submit an app, we need to register as a Windows Phone developer at App Hub. The annual subscription costs USD 99 and lets you submit up to 100 apps. Once the account is created you are ready to go.

The submission process is quite straightforward. You build the Silverlight application in Release mode and submit the .xap file along with the artwork. The artwork is a set of icons in different sizes used to represent the app in the Marketplace. These are also the icons displayed on the Windows Phone device in the application list, as well as on the home screen when the application is pinned to the start screen.

Once the application is submitted, you'll have to wait for the certification process to complete. This can take a long time depending on how big or small the application is. Testing is done on Windows Phone devices from multiple vendors like Nokia, HTC, LG and Samsung to make sure the app conforms to all the requirements defined in the Application Certification Requirements.

If everything goes fine, the application will be certified and you'll be notified by email that the certification process is complete. When submitting the app to App Hub, you can decide how the app should be published once it passes certification. I chose automatic publishing, but you can do it manually as well. The application submission walkthrough describes the whole process in more detail.

 

Things to remember

When we do something for the first time there is always a tendency to forget something. Here are some of the things I missed in my 3 submissions so far. Some of them might seem very silly, but they can delay publishing by a week, as a resubmission or an update goes through the same certification process.

Icons and Screenshots

The artwork required for the Marketplace is an important part of the submission process. Make sure the size of the images is exactly as defined in the certification requirements document. The dimensions of the icons are clearly listed in the MSDN doc Application Submission Requirements. I had the agony of having the app rejected because I forgot to replace the default application icons.

The screenshots also need to be taken with care. I forgot to turn off the frame rate counters in the emulator, and that was also highlighted in the rejection reasons. But this was not applied very consistently, because one of my apps got approved even though its screenshots had the frame rate counters displayed. I guess this depends on the vigilance of the tester testing the app.

Choose the targeted market

By default none of the supported markets is selected when we submit the app. We need to choose either all the supported markets or only the ones we are interested in. If you forget to choose a target marketplace, the app might get certified but nobody will be able to download it.

Be Patient

The App Hub interface for managing published apps and checking the status of submissions is a bit slow, and takes a long time to refresh. Even after certification is completed it might take more than 48 hours for the app to actually appear in the Marketplace for users to download. I received emails about the successful completion of certification for my apps but had to wait more than 2 days before I could download them.

For published apps we can get statistics about the number of downloads, the crashes users have encountered, and so on. Even this data is not available until a week or so after the app is published. On the whole this area seems very slow to react.

Conclusion

It is very easy to get started with Windows Phone application development if you have prior experience with Silverlight. The complete process takes about 5-7 days from submission until the app is published in the Marketplace. There are not many apps in the Windows Phone Marketplace compared to the iPhone or iPad, which might be an opportunity to develop interesting apps using the Silverlight technology we are already familiar with. If you would rather not invest in native apps because you want apps that run on multiple platforms with minimal changes, you can look at other options like offline HTML5 apps. That's a different topic altogether, but to get started with that approach you can look at PhoneGap, which can be used to build native apps with web technologies.

There is no code associated with this post, so don't look for any download from Dropbox.

Until next time, Happy Programming!

 

Further Reading

The following books might be helpful in developing Windows Phone applications.


Rhino Mocks : Partially mock Internal method

In my previous post on Partial Mocks, I noted that a method we intend to partially mock has to be a public virtual method. Immediately after publishing the post, I came across a post by Matt Roberts which demonstrated how to use the InternalsVisibleTo attribute to expose internal members to certain assemblies. In this post I'll demonstrate how to use the InternalsVisibleTo attribute to unit test internal methods.

What’s the problem with earlier approach?

If we look at the code from the previous post, we had to make the methods public to be able to mock them using Rhino Mocks. In my opinion this violates encapsulation, as we are forced to expose methods which are not really supposed to be public. We can overcome this by using the InternalsVisibleTo attribute, which lets us expose types and methods defined as internal to specific assemblies only.

Apply InternalsVisibleTo attribute

We add the following two lines to the AssemblyInfo.cs of the main project.

[assembly: InternalsVisibleTo("PartialMocksExample.UnitTest")]
[assembly: InternalsVisibleTo(RhinoMocks.NormalName)]

We are going to access the methods from the unit test assembly, so the first line is self-explanatory. The second line is a bit confusing: it is required to expose the internal methods to Rhino Mocks. Internally, Rhino Mocks creates a proxy which intercepts calls to the methods during mocking; this is also why we need to make our methods virtual, so that the proxy can intercept these calls. Once we have added these lines, we can reduce the access levels of the methods which need not be public. Here is the modified code for the CalculateTotalDueAmount method

        internal virtual void CalculateTotalDueAmount(PhoneBill generateBill)
        {
            double totalDueAmount = generateBill.BilledAmount - generateBill.DiscountedAmount;

            generateBill.TotalDueAmount = Math.Round(totalDueAmount, ROUNDED_DIGITS, MidpointRounding.AwayFromZero);
        }

I have modified the other methods from the previous post in the same way.
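Why the mocked methods must be virtual becomes clear if we hand-roll what the Rhino Mocks proxy effectively does: it generates a subclass, and a subclass can only intercept calls that are overridable. This is an illustrative sketch, not the actual proxy code Rhino Mocks generates (the methods here are public virtual to keep the sketch self-contained):

```csharp
using System;

class BillCalculator
{
    // internal virtual in the real code; public virtual here for brevity.
    public virtual double CalculateTotalDueAmount(double billed, double discount)
    {
        return Math.Round(billed - discount, 2, MidpointRounding.AwayFromZero);
    }
}

// Roughly what a mocking proxy generates: a subclass that overrides the
// virtual method, records the call, and returns a canned value.
class BillCalculatorProxy : BillCalculator
{
    public bool WasCalled;

    public override double CalculateTotalDueAmount(double billed, double discount)
    {
        WasCalled = true;
        return 0; // canned value instead of the real computation
    }
}

class Program
{
    static void Main()
    {
        BillCalculator calc = new BillCalculatorProxy();
        double result = calc.CalculateTotalDueAmount(170.50, 42.63);
        Console.WriteLine("Intercepted: {0}, Result: {1}",
            ((BillCalculatorProxy)calc).WasCalled, result);
    }
}
```

Had CalculateTotalDueAmount not been virtual, the call through the base-typed reference would have bound to the real implementation and the proxy could not have intercepted it.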

Conclusion

While following unit testing and the TDD approach, we are sometimes forced to bend the principles of encapsulation to satisfy external tools like Rhino Mocks. Here we saw how we can make use of the InternalsVisibleTo attribute to reduce the visibility of a method while still testing it. I agree that the methods we are mocking here would ideally be private. There is a trade-off between strict encapsulation and maintainable, more focused tests. In my experience, many TDD practitioners use internal methods to keep their unit tests maintainable.

As always the complete working solution is available for download at Dropbox.

Until next time, Happy Programming!

 

Further reading

The following book might be helpful in understanding the topics discussed in this post.


Rhino Mocks : When to use Partial Mocks

I have been using Rhino Mocks as my dynamic mocking framework for the past couple of years. It supports different types of mocks: strict mocks, dynamic mocks and partial mocks. When to use which type can be a bit confusing. Recently I had a wow moment when I found a real use for the partial mock. This post is about that enlightenment.

Most of the time I have relied on strict mocks and stubs. Based on the Rhino Mocks documentation, my understanding of the partial mock was that it is used to mock abstract classes and methods. But that's not its only use: it can also be used to test a method by mocking only the methods we are interested in. Let's dive into an example.

How to use Partial Mocks

Let's assume we are building a solution for calculating the phone bill for a hypothetical phone company. As of now the company has two types of customers: Normal and Corporate. While calculating the total due amount, the phone company gives no discount to normal customers, but corporate customers are eligible for a 25% discount on the billed amount. Here is an implementation:

    public class PhoneBillCalculator
    {
        private const int ROUNDED_DIGITS = 2;

        private const double CORPORATE_DISCOUNT_PERCENTAGE = 0.25;

        public PhoneBill GenerateBill(Customer customer)
        {
            PhoneBill generateBill = new PhoneBill
                {
                    CustomerType = customer.CustomerType,
                    BilledAmount = customer.BilledAmount
                };

            if (customer.CustomerType == "Normal")
            {
                generateBill.DiscountedAmount = 0;
            }
            else
            {
                generateBill.DiscountedAmount = Math.Round(
                    customer.BilledAmount * CORPORATE_DISCOUNT_PERCENTAGE, ROUNDED_DIGITS, MidpointRounding.AwayFromZero);
            }

            double totalDueAmount = generateBill.BilledAmount - generateBill.DiscountedAmount;

            generateBill.TotalDueAmount = Math.Round(totalDueAmount, ROUNDED_DIGITS, MidpointRounding.AwayFromZero);

            return generateBill;
        }
    }

Let's build a small set of tests for these conditions.

    [TestClass]
    public class PhoneBillCalculatorTest
    {
        [TestMethod]
        public void GenerateBill_WithCustomerTypeAsNormal_ApplyNoDiscountOnTotalBilledAmount()
        {
            Customer customer = CreateCustomer("Normal", 170.50);

            PhoneBillCalculator billCalculator = new PhoneBillCalculator();

            PhoneBill phoneBill = billCalculator.GenerateBill(customer);

            Assert.IsNotNull(phoneBill);
            Assert.AreEqual("Normal", phoneBill.CustomerType);
            Assert.AreEqual(170.50, phoneBill.BilledAmount);
            Assert.AreEqual(0, phoneBill.DiscountedAmount);
            Assert.AreEqual(170.50, phoneBill.TotalDueAmount);
        }

        [TestMethod]
        public void GenerateBill_WithCustomerTypeAsCorporate_ApplyCorporateDiscountOnTotalBilledAmount()
        {
            Customer customer = CreateCustomer("Corporate", 170.50);

            PhoneBillCalculator billCalculator = new PhoneBillCalculator();

            PhoneBill phoneBill = billCalculator.GenerateBill(customer);

            Assert.IsNotNull(phoneBill);
            Assert.AreEqual("Corporate", phoneBill.CustomerType);
            Assert.AreEqual(170.50, phoneBill.BilledAmount);
            Assert.AreEqual(42.63, phoneBill.DiscountedAmount);
            Assert.AreEqual(127.87, phoneBill.TotalDueAmount);
        }

        private static Customer CreateCustomer(string customerType, double billedAmount)
        {
            return new Customer
                {
                    CustomerType = customerType,
                    BilledAmount = billedAmount
                };
        }
    }

There are two tests. The first verifies that no discount is applied to the Normal customer's total due amount, and the second validates that a Corporate customer is given the due 25% discount on the original billed amount. Both tests are pretty straightforward. Because this post is about partial mocks, let's create a scenario which forces us to make use of Rhino Mocks.

Imagine there is an update to the original user requirement. The phone company has come up with more customer classifications. Based on various factors outside the scope of this post, let's say customers are now classified as Normal, Corporate, Gold, Silver and Platinum, and the discount varies for the newly added types Gold, Silver and Platinum. Let's refactor the code to suit this requirement.

In its current state, the GenerateBill method only works for Normal and Corporate customers. We can refactor the if-else block into a switch statement. The first part, which copies values from the customer to the bill entity, is common to all types of customers, so it would be nice to extract it into a separate method. The same goes for the final part which computes the total due amount. Instead of going through the refactoring step by step, I'll show the final code directly.

        public PhoneBill GenerateBill(Customer customer)
        {
            PhoneBill generateBill = GetGenerateBillWithDefaultValues(customer);

            CalculateDiscount(customer, generateBill);

            CalculateTotalDueAmount(generateBill);

            return generateBill;
        }
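For completeness, the discount calculation refactored into a switch could look like the sketch below. Only the Normal (0%) and Corporate (25%) rates come from the original requirement; the Gold, Silver and Platinum rates are made-up placeholder values, and the logic is shown on plain doubles to keep the sketch self-contained:

```csharp
using System;

static class DiscountRules
{
    // Normal and Corporate rates are from the requirement;
    // Silver, Gold and Platinum rates are assumed for illustration.
    public static double DiscountFor(string customerType, double billedAmount)
    {
        double rate;
        switch (customerType)
        {
            case "Normal":    rate = 0.00; break;
            case "Corporate": rate = 0.25; break;
            case "Silver":    rate = 0.10; break; // assumed
            case "Gold":      rate = 0.15; break; // assumed
            case "Platinum":  rate = 0.30; break; // assumed
            default: throw new ArgumentException("Unknown customer type: " + customerType);
        }
        return Math.Round(billedAmount * rate, 2, MidpointRounding.AwayFromZero);
    }

    static void Main()
    {
        Console.WriteLine(DiscountFor("Corporate", 170.50));
        Console.WriteLine(DiscountFor("Normal", 170.50));
    }
}
```

The default branch makes an unknown customer type fail loudly instead of silently getting no discount.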

After this refactoring, the GenerateBill method acts as a template method which calls the other methods. If we run the unit tests after this refactoring, they still pass, which ensures we haven't broken any existing functionality. There is one problem though: the tests are now testing the wrong thing. Observe carefully the data being set up in the test, like the customer type and the billed amount. These values are no longer used in the GenerateBill method itself; we are setting up data which is not in the scope of this method. If we follow TDD, we should test only what is in the scope of the method under test, and the existing unit tests clearly violate this principle.

Let's see how to fix this. The template method calls methods within the same class; these are not external dependencies which can be mocked using dynamic mocks. Rhino Mocks has a special mock for cases like this, called a Partial Mock, which allows us to selectively mock parts of the class. Go ahead and add a reference to the latest version of Rhino Mocks; I used the NuGet package manager to add the dependency to the project. Let's see how we can make use of it in the code.

        [TestMethod]
        public void GenerateBill_WithCustomer_GeneratesBill()
        {
            Customer customer = CreateCustomer("Normal", 170.50);

            PhoneBillCalculator billCalculator = CreatePhoneBillCalculator();

            PhoneBill expectedBill = new PhoneBill();

            billCalculator.Expect(x => x.GetGenerateBillWithDefaultValues(customer)).Return(expectedBill);

            billCalculator.Expect(x => x.CalculateDiscount(customer, expectedBill));

            billCalculator.Expect(x => x.CalculateTotalDueAmount(expectedBill));

            PhoneBill phoneBill = billCalculator.GenerateBill(customer);

            billCalculator.VerifyAllExpectations();

            Assert.IsNotNull(phoneBill);
        }

We have set expectations on an instance of PhoneBillCalculator itself, so the expected methods are intercepted instead of executed. Then we exercised the method under test, GenerateBill, verified that all the expectations set on the bill calculator were met, and finally asserted that the returned phone bill is not null. All this magic is possible because of the partial mock created in the helper method

        private PhoneBillCalculator CreatePhoneBillCalculator()
        {
            return MockRepository.GeneratePartialMock<PhoneBillCalculator>();
        }

Note that we are not creating an instance of the PhoneBillCalculator class directly; instead we generate a partial mock using the MockRepository. This gives us the flexibility of setting expectations on the methods of the PhoneBillCalculator class. The only prerequisite is that any method we set an expectation on must be a public virtual method. Please refer to my other post to see how we can partially mock internal methods using Rhino Mocks.

Let's modify the earlier two tests to use the helper method instead of creating a new instance inside the test method.

            PhoneBillCalculator billCalculator = CreatePhoneBillCalculator();

We can run all the unit tests now and see that all 3 tests pass. How is that possible? We modified the two tests to use the partial mock and did not set any expectations on the methods, yet the tests still passed. This is because if we don't set any expectations on a partial mock, Rhino Mocks falls back to the actual implementation and executes the real code.

With the refactored code, the scope of the earlier tests changes accordingly. I'll leave it to the reader as an exercise to refactor those tests and add new ones for the other methods that resulted from the refactoring. And by the way, we did not add any code for the Gold, Silver and Platinum discounts; that's also an exercise for you. You can add the required code to the switch statement.

Conclusion

Sometimes we need to mock some methods of the class under test. In general, while unit testing, we create a concrete instance of the class under test, and we cannot set expectations on the methods of a concrete instance. Under these circumstances we can make use of the Partial Mocks provided by Rhino Mocks to mock only a few methods of a class; methods which are not mocked are executed as normal. This helps us focus on the method under test without worrying about the other methods it might call internally, and it encourages code which is more readable and maintainable: small methods are always easier to maintain and test than lengthy ones.

Note: I have tried to follow Roy Osherove's advice to make the unit tests more readable and maintainable. Roy's post in the 2006 MSDN Magazine is worth a read.

As always I have uploaded the complete working solution to Dropbox.

Until next time, Happy Programming!

 

Further Reading

For other topics related to unit testing, you can refer to the following book.


Measuring Cyclomatic Complexity

This post is about the differences between the way Visual Studio 2010 calculates Cyclomatic Complexity and the way NDepend does it. Although both tools measure the same standard code quality metric, they report different values. I will share my experience in using both tools.

If you have followed my previous posts, you will have realised that I like to pay a bit more attention to code quality. From my previous experiences, I have seen that good quality code is not just code that is easy to read and self-explanatory, but also code which is structured correctly. I once worked on a project which had been in development and enhancement for more than 8 years; you can imagine how difficult such a codebase is to maintain. I would say our team was fortunate to have some of the best developers, who ensured that the code was of a high standard. My definition of quality code is code which is easy to maintain over a long period of time.

On a recent project our team was using Cyclomatic Complexity to ensure that the code did not become too complex to maintain, with defined thresholds for the complexity value. After running Visual Studio Code Analysis, we found certain methods breaching those thresholds. On discussing this within the team, we found that the developers of those methods had used NDepend to measure the Cyclomatic Complexity, and per the NDepend analysis the values were within the acceptable range. So why do two tools measuring the same metric (Cyclomatic Complexity) report different values on the same piece of code?

An Example

Let's look at an example. We'll build a small program which displays person details: we have a list of persons and we can filter them based on multiple criteria. The Person class is the simplest of all, with properties for FirstName, LastName and Age. We also have a helper, or utility, class which creates a static list of persons; we can name it PersonDataFactory.

    internal static class PersonDataFactory
    {
        public static IList<Person> CreatePersons()
        {
            return new List<Person>
                {
                    new Person { FirstName = "James", LastName = "Bond", Age = 50 },
                    new Person { FirstName = "Harry", LastName = "Potter", Age = 20 },
                    new Person { FirstName = "Bill", LastName = "Gates", Age = 70 },
                };
        }
    }

In our main program we do the magic of filtering and displaying the person details.

        public static void Main(string[] args)
        {
            IList<Person> persons = PersonDataFactory.CreatePersons();

            WriteDetails(persons.Where(x => x.Age > 35));
        }

        private static void WriteDetails(IEnumerable<Person> persons)
        {
            foreach (Person person in persons)
            {
                Console.WriteLine("Name : {0} {1}", person.FirstName, person.LastName);
                Console.WriteLine("Age : {0}", person.Age);
                Console.WriteLine();
            }
        }

In the above code, we use a lambda to filter the persons above the age of 35. In the WriteDetails function we just enumerate the persons and print their details to the console. At this point let's run the Visual Studio 2010 Code Metrics and see the results.

[Screenshot: Visual Studio 2010 code metrics results]

At the Program class level we get a Cyclomatic Complexity of 7, with the Main method at 3 and the WriteDetails method at 3. Let's run the NDepend analysis tool on the same code and compare the results.

[Screenshot: NDepend code metrics results]

At the class level, NDepend shows a CC of 4, with the Main method at 1 and WriteDetails at 2.

[Screenshots: NDepend metrics for the Main and WriteDetails methods]

As we can see there is a difference of 3 points at the class level, and differences at the method level as well, between Visual Studio and NDepend. Which of the two is correct? We can't really judge without understanding how each tool calculates the complexity.

How is Cyclomatic Complexity Calculated?

Let's look at how Visual Studio 2010 calculates it first. As per the MSDN article on code metrics values, it is defined as

  • Cyclomatic Complexity – Measures the structural complexity of the code. It is created by calculating the number of different code paths in the flow of the program. A program that has complex control flow will require more tests to achieve good code coverage and will be less maintainable.

Here is a Code Project article which describes in more detail what factors constitute the measure of Cyclomatic Complexity. If you read it through completely, you'll see that a method itself starts at 1 and every decision point like if, for, while etc. contributes to the complexity.
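Counting by hand on a tiny method makes the rule concrete. The method and data below are made up for illustration; each decision point is annotated with its contribution:

```csharp
using System;

class Program
{
    // CC = 1 (method) + 1 (for) + 1 (if) + 1 (&&) = 4
    static int CountAdultsNamed(string[] names, int[] ages, string prefix)
    {
        int count = 0;
        for (int i = 0; i < names.Length; i++)                 // +1 for the loop
        {
            if (ages[i] >= 18 && names[i].StartsWith(prefix))  // +1 for if, +1 for &&
            {
                count++;
            }
        }
        return count;
    }

    static void Main()
    {
        string[] names = { "Bill", "Bond", "Harry" };
        int[] ages = { 70, 50, 20 };
        Console.WriteLine(CountAdultsNamed(names, ages, "B"));
    }
}
```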

Now let's look at how NDepend calculates the same metric. As per NDepend's definition, Cyclomatic Complexity is calculated as

Concretely, in C# the CC of a method is 1 + {the number of following expressions found in the body of the method}:
if | while | for | foreach | case | default | continue | goto | && | || | catch | ternary operator ?: | ??
Following expressions are not counted for CC computation:
else | do | switch | try | using | throw | finally | return | object creation | method call | field access

NDepend is also a little more specific about the constructs which do not contribute to the complexity, but in general both tools use similar methods to calculate it. So let's add a few of these decision points to our method and see the impact.

We refactor the WriteDetails method to display a message if no persons are found after filtering. Here is the refactored code, which displays a different message if the count of elements is zero.

        private static void WriteDetails(IEnumerable<Person> persons)
        {
            if (persons.Any())
            {
                foreach (Person person in persons)
                {
                    Console.WriteLine("Name : {0} {1}", person.FirstName, person.LastName);
                    Console.WriteLine("Age : {0}", person.Age);
                    Console.WriteLine();
                }
            }
            else
            {
                Console.WriteLine("No matching records found.");
            }
        }

Let's run the analysis again and compare the values. This is the Visual Studio 2010 output

[Screenshot: Visual Studio 2010 code metrics after adding the if/else]

The Cyclomatic Complexity of the WriteDetails method is re-evaluated to 4 points.

And this is the NDepend output

[Screenshot: NDepend metrics after adding the if/else]

NDepend re-evaluates the complexity to 3 points.

So adding the if condition added 1 point in both the Visual Studio and NDepend analyses; the tools are consistent when it comes to the complexity of an if condition. That still leaves us with the initial question of why they differed before we added it: NDepend must be ignoring some construct which Visual Studio takes into account. So let's refactor the WriteDetails method a little more and see the impact.

I extracted the 3 lines of code which write the details into a separate method and compared the results again.

        private static void WriteDetails(IEnumerable<Person> persons)
        {
            if (persons.Any())
            {
                foreach (Person person in persons)
                {
                    WritePersonDetails(person);
                }
            }
            else
            {
                Console.WriteLine("No matching records found.");
            }
        }

There is no change in the values. I have not shown the screenshot here, but you can run the analysis again to compare the results; neither tool takes the method call into account. Let me refactor the Main method and add one more condition to the filter criteria: along with the age, I also want to filter the persons whose names begin with the letter "B". Here is the refactored code

        public static void Main(string[] args)
        {
            IList<Person> persons = PersonDataFactory.CreatePersons();

            //WriteDetails(persons.Where(x => x.Age > 35));

            WriteDetails(persons.Where(x => x.Age > 35 && x.FirstName.StartsWith("B")));
        }

Now running the analysis in Visual Studio shows a complexity of 4 for the Main method, while NDepend shows 1. This means Visual Studio takes into account the lambda expression and the complexity associated with it, whereas NDepend does not count it towards the complexity of the method.

[Screenshot: NDepend metrics showing the generated &lt;Main&gt;b__0(Person) method]

Observe carefully in the above screenshot that there is a separate highlighted method named <Main>b__0(Person). This is the anonymous method generated automatically by the compiler for the lambda expression. NDepend calculates its complexity as 2, but this is not added to the complexity of the parent method. This is why Visual Studio differs in its value for Cyclomatic Complexity.
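What the compiler does with the lambda can be sketched by writing the generated method by hand. The simplified Person class and data below are assumptions for the sketch; the point is that the extracted predicate carries its own complexity, 1 for the method plus 1 for the `&&`, which matches the 2 points NDepend assigns to <Main>b__0(Person):

```csharp
using System;
using System.Linq;

class Person
{
    public string FirstName;
    public int Age;
}

class Program
{
    // Hand-written equivalent of the compiler-generated <Main>b__0(Person).
    // Its own CC = 1 (method) + 1 (&&) = 2.
    static bool IsMatch(Person p)
    {
        return p.Age > 35 && p.FirstName.StartsWith("B");
    }

    static void Main()
    {
        var persons = new[]
        {
            new Person { FirstName = "Bond", Age = 50 },
            new Person { FirstName = "Harry", Age = 20 },
            new Person { FirstName = "Bill", Age = 70 }
        };
        // Same filter as the lambda version, but the predicate is now a
        // named method that a metrics tool reports separately.
        Console.WriteLine(persons.Where(IsMatch).Count());
    }
}
```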

Conclusion

In comparison to NDepend, Visual Studio gives a very limited set of code metrics; NDepend offers up to 82 different metrics as of this writing. But since Cyclomatic Complexity is so commonly used, it would have been better if the metric were consistent between the tools. Referring back to the classic definition of Cyclomatic Complexity, I believe a tool should take into account all the decision points of a method. From the experiments in this post, Visual Studio does and NDepend doesn't. When I look at the NDepend value for a method, it does not give me the complete picture: in a real-world scenario it would be really difficult to track all the anonymous methods and delegates used as lambdas separately. From the point of view of maintainability, Visual Studio does a decent job; it helps me see the impact of my refactoring on the complexity of a method which uses lambdas. I can't do that with the same ease using NDepend.

As always when it comes to standards and best practices, people have their own preferences. I leave it to developers to decide whether lambdas should be counted in their complexity points or not. On a personal note, I prefer to include them, because a lambda is itself a decision point. Imagine a method of 10-15 lines consisting mostly of complex lambdas: you can easily imagine its actual complexity. As an exercise, try running Visual Studio and NDepend on such code, and you'll see what I am trying to suggest.

Visual Studio Code Analysis has the limitation that it cannot be run as part of the continuous build. This is where NDepend scores over Visual Studio: it integrates nicely with an automated build process. If that is why you or your team uses NDepend, you will see a difference between the values reported by Visual Studio and NDepend, and you can decide to adopt one of them as the baseline.

Apart from build integration, NDepend can also be run as a separate tool: you can integrate it with the Visual Studio IDE or run it as a standalone application. I personally think NDepend has a lot to offer with its complete set of metrics, and it is always better to have multiple options to choose from.

As always the complete working solution is available for download at Dropbox.

Until next time, Happy Programming!
