Distributed Agile

Background

Over the last few years the software development process has been changing a lot. Organizations have started moving away from traditional methods, and many companies are adopting one or the other form of Agile, such as Extreme Programming (XP), Scrum, or lean development. With more and more people adopting Agile, tool vendors have also come up with products which suit this form of software development. One of the fundamental requirements for a team to be agile is proper communication between the different stakeholders. I have been working on a team which is developing software across different geographic locations. Here are some of the challenges we face in day-to-day life.

My Experiences

One of the fundamental principles of Agile is communication between team members. There is also a school of thought which suggests that Agile works best when team members are co-located. This improves communication, which results in faster turnaround time and improved velocity. In many cases, though, teams are seated in different locations. On such occasions the team in one location could be ahead of or behind their counterparts in a different time zone. The difference in time zones can delay communication: one team might be about to finish their day when the other team is just beginning theirs.

Another problem I have personally felt when teams are distributed is that it's not always possible to get all the stakeholders in one place at the right time. If key stakeholders are missing from certain meetings, key decisions get delayed. This increases delivery risk and can impact timelines.

There are different ways to reduce the delay in getting the right information. We can use Scrum of Scrums calls to get updates from all the Scrum teams. The medium used for these calls becomes important. I have used teleconference calls; at times it becomes difficult to catch all the communication, and many times these calls end up being just status updates. We can use other methods like telepresence. This is a viable option, but unfortunately getting access to telepresence rooms every day is a Herculean task. Another alternative is email. The problem I see with this form is that people tend to ignore it if they are busy with some other task.

We tend to use email a lot to clarify doubts. That adds risk, because the information exchanged over email is not always updated in the respective documents such as the business requirements document (BRD) or the functional requirements. Questions which product or feature owners could answer on the spot face to face take much longer, because things can be misinterpreted over email. At times there are communication gaps simply because of language barriers. It can also be irritating for a developer to reply over email about some trivial task which could be completed in minutes if everyone were sitting in the same location. Having the teams co-located also helps in building relationships among team members, which is very difficult to do otherwise. Distributed teams also increase the risk of duplicated effort for certain tasks.

These are some of the issues I have experienced personally during the last few months. To address them, I can think of some measures. To reduce dependencies on business folks and to get answers to business clarifications, we can have a representative of the product owner sitting with the offshore team. We can also have a business analyst (BA) standing in for the actual customer or product owner, but from my experience the BA merely acts as a proxy who relays questions from the development team to the business. Having a business representative working closely with the development team reduces a lot of effort and adds much more value to the whole development cycle. It also enables the team to get early feedback at the end of each sprint, which helps improve overall product quality.

In order to reduce dependencies between teams there needs to be a high-level release plan at the project level. This plan should highlight the dependencies between the various features of the product. It will help the product owner as well as the development teams to build features in such a way that dependencies are sorted out in time, before starting something which is totally dependent on another feature. This is where the active participation of the product owner is required in day-to-day activities: if a feature is delayed, the product owner can re-prioritize the product backlog so that timelines are not impacted.

Conclusion

My experience over the past few months suggests that Agile doesn't work really well in a distributed environment. We lose some of the advantages of Agile if teams are not co-located. Having seen all the pain, and having been at the center of many of the decisions we had to take during the past few weeks, I would suggest people be very careful before setting up Agile teams in distributed locations.

My experience might not have been great, but I would love to hear from people who have had success working on distributed Agile teams. I would like to know what methods or tools they used to succeed in their projects.

 


PrintDocument vs Printer Control Language

Background

I have been involved in a project which has a requirement for printing labels using label printers. That sounds like a very simple requirement, but as always there are things which make our life miserable. My team is responsible for building a label printing solution which can handle internationalized / localized text. Currently we are building the solution for the Korean market, with the possibility of taking it to other countries where we have a retail presence.

 

Challenges involved in label printing

Some of the challenges involved in developing this solution are:

  • Ability to print various types of barcodes based on industry standards like EAN-13, EAN-128 and UPC-A
  • Ability to print from a central server to any label printer connected in any store
  • Ability to print localized text
  • Ability to print both text and symbols on labels

 

Solution Approach

There are paid software packages and libraries which help with things like printing industry-standard barcodes, localized text and symbols. But these packages did not satisfy all our requirements, as we wanted the labels to be printed differently based on various conditions. The biggest problem I feel in label design is the limited real estate on the label: there is very little space in which we need to print many details, hence the positioning of text and symbols is very important.

We can also use the driver or API provided by the printer vendor to print labels. Epson supports such an API. But in my case I am using a Sato printer, and there is no driver / API which I could use directly from a C# program.

We can use the GDI+ features and the PrintDocument object to position the text. The problem with this approach is that PrintDocument relies on the printer driver to convert the text and images into the Printer Control / Command Language (PCL), which is specific to each printer. This is a combination of ASCII characters which are used to program the printer. The conversion of GDI instructions into PCL results in a performance hit. It also defeats the purpose of having a central server, as in our case: the central server would need all the printer drivers installed on it, which I don't think is a very scalable approach.
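To make the comparison concrete, here is a minimal sketch of the PrintDocument approach; the printer name, label text and layout below are purely illustrative:

using System.Drawing;
using System.Drawing.Printing;

public class GdiLabelPrinter
{
    public void PrintLabel(string printerName, string labelText)
    {
        PrintDocument document = new PrintDocument();
        // The driver for this printer must be installed on the machine running this code.
        document.PrinterSettings.PrinterName = printerName;

        document.PrintPage += (sender, e) =>
        {
            // Position the text on the label; the driver translates these GDI+ calls
            // into the printer's own command language behind the scenes.
            using (Font font = new Font("Arial", 10))
            {
                e.Graphics.DrawString(labelText, font, Brushes.Black, new PointF(10, 10));
            }
        };

        document.Print();
    }
}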

Another disadvantage of this approach is that there is no built-in support for barcode printing in GDI+, which means that we need to use third-party libraries like Neodynamic. Label printers have very limited font support and come with a small set of default fonts. If we want to use common Windows fonts such as Arial or Times New Roman, these are treated as images, which again slows down the printing process.

To overcome these limitations we can use the Printer Control / Command Language (PCL) directly. It is a bit tedious to understand this language at first, but the command set is small enough that we can get familiar with it in a day or two. This approach requires us to send raw data to the printer in the form of ASCII text which is interpreted as commands. Since we are sending raw data, there is no need for a printer driver to be installed. As long as we can reach the printer over a network port we can send these commands as byte streams.
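As a rough sketch of what sending raw data could look like (the address, port 9100 and the command text below are assumptions; the real command syntax depends on the printer's language, SBPL in Sato's case):

using System.Net.Sockets;
using System.Text;

public class RawLabelPrinter
{
    public void SendRawCommands(string printerAddress, int port, byte[] commandBytes)
    {
        // Open a plain TCP connection to the printer's raw printing port
        // (9100 is a common default, but check the device configuration).
        using (TcpClient client = new TcpClient(printerAddress, port))
        using (NetworkStream stream = client.GetStream())
        {
            stream.Write(commandBytes, 0, commandBytes.Length);
        }
    }
}

// Illustrative usage only - the command string below is not real SBPL:
// byte[] data = Encoding.ASCII.GetBytes("...label commands...");
// new RawLabelPrinter().SendRawCommands("10.0.0.25", 9100, data);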

This gives a great advantage in terms of printing speed, as no driver-side conversion is required like in the case of GDI+ graphics. The label printer supports printing industry-standard barcodes using its built-in fonts, without any dependency on external libraries. We can control the speed and quality of printing, and use features like partial edits, which allow editing only a few sections of data within a label.

One of the disadvantages of this approach is that for every type of printer we want to support, we'll need to build a sort of adapter which can send this raw data to the printer (see the sketch below). For example, Sato has a different language / instruction set compared to Epson or Zebra label printers, and adapting to each language can be a real pain. Given the performance benefits of PCL, I decided to go ahead and try that approach for the Sato CX400 printer.
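One way to keep the per-printer differences contained is something along these lines; the interface and class names here are hypothetical and not part of any vendor SDK:

// Each printer language gets its own adapter that knows how to turn
// a label definition into the raw command bytes for that device.
public interface ILabelCommandAdapter
{
    byte[] BuildLabelCommands(string labelText, string barcodeValue);
}

public class SatoLabelCommandAdapter : ILabelCommandAdapter
{
    public byte[] BuildLabelCommands(string labelText, string barcodeValue)
    {
        // Translate into SBPL commands here (omitted in this sketch).
        throw new System.NotImplementedException();
    }
}

public class ZebraLabelCommandAdapter : ILabelCommandAdapter
{
    public byte[] BuildLabelCommands(string labelText, string barcodeValue)
    {
        // Translate into ZPL commands here (omitted in this sketch).
        throw new System.NotImplementedException();
    }
}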

All was fine and I was able to print text in different sizes, orientations and fonts. Then I hit a roadblock while trying to print an image onto the label. I am not sure how many people use Sato printers for label printing; the information available online is very limited, and the documentation provided with the installation disk / online is also very basic. I found a method where BMP or PCX images can be loaded into extended memory and printed using a command, but the printer we have does not have extended memory.

Eventually I came across the option of embedding the binary image data within the payload which is sent to the printer, and this seemed to work. After some trial and error I was able to achieve what I had set out to do, which was very satisfying. I used an evaluation copy of CODESOFT from Teklynx for converting the images into byte streams. This software also helps you convert a PCL file into a label and vice versa. It was very helpful for designing the initial layout and then modifying it as needed, and it supports conversion between various formats and various printers. Thanks to the developers of this software: it saved me at least two days of effort in getting the hex / binary representation and alignment of the images. Note that this is a one-time job; I used it to convert the 10-15 images I wanted to use on my labels into strings which can be sent to SATO printers.

 

Conclusion

Understanding PCL is really challenging, but the performance benefits it offers over PrintDocument are manifold. If you have a choice, I would always suggest writing raw data directly to the printer. It will give a performance boost to the application, and you need not depend on third-party tools for simple tasks like drawing barcodes. Until next time, Happy Programming!!!

 


Powertools - November 2009

Background

In my earlier post, Powertools - October 2009, I wrote about some of the useful tools I use in day-to-day life. In this short post I would like to list a few utilities for virtual CD / DVD drives.

Virtual disks

A decade back I remember buying computer magazines like Digit which provided free CDs with utilities, shareware like WinZip, some antivirus tools, basic games and so on. With Internet penetration growing day by day and the days of dial-up gone into the history books, all that seems a distant memory. Broadband has become so affordable that people don't mind staying connected to the Internet for longer durations.
With the advent of the Internet and broadband services, much of the software nowadays is distributed digitally in the form of ISO images. We can mount these images onto a virtual CD or DVD drive and install software without having a physical CD or DVD ROM connected to the PC. I assume that in the next few years CD and DVD drives will be extinct, just like floppy disks.

Virtual CD

Virtual CD has been one of the simplest and most elegant utilities for many years. It can be used very easily to mount disks and has a very simple configuration. You can get started using Virtual CD within minutes. Below is a screen shot of the Virtual CD control panel.
[Screenshot: Virtual CD control panel]
With just 7 buttons this is one of the simplest tools I have ever used. Please note that this screen shot is of version 2.0.1.1; the latest versions of this tool might offer new features.

MagicISO / MagicDisc

Recently, while trying to mount a SQL Server 2008 ISO image, I faced some issues and tried MagicISO (also known as MagicDisc). This is a feature-rich product with a very good user interface. It's easy to use and supports various formats.

Conclusion

  • One of the biggest advantages of using virtual drives is that they allow access to multiple drives simultaneously.
  • Most of the vendors also claim that the speed is around 200 times faster as compared to physical drives.

There might be other freely available tools for virtual optical discs. I would love to know about those.
 


Unit Test Session State in MVC using TestHelpers and Moq

Background

In my earlier post about Unit Test Session State in ASP.NET MVC I demonstrated how to mock intrinsic objects like session state in ASP.NET MVC using Moq. The idea there was to abstract the access to session state into a separate class called SessionHelper and then use Moq to mock those method or property calls. In this post I will use a more elegant approach to achieve the same results using the MVCContrib TestHelpers.

Using MVCContrib TestHelpers to mock intrinsic objects

The previous approach might be helpful if we are using only one or two intrinsic objects within the controller actions. Most of the time, though, we'll be using more than one of the ASP.NET MVC intrinsic objects, such as:
  • HttpContext
  • HttpRequest
  • HttpResponse
  • HttpSession
  • Form
  • TempData
  • QueryString
Abstracting each of these classes and their methods and properties can be a very cumbersome process. But some smart people have already encountered such issues and have come up with a solution in the form of TestHelpers.
These TestHelpers use RhinoMocks internally to mock the calls to ASP.NET MVC intrinsic objects.
I have modified my example from the last post to suit the changes for TestHelpers. Another advantage of using these test helpers is that we get a set of extension methods which can be used to assert results in a fluent manner. I'll demonstrate that as well in the assert section of the tests.
Without wasting too much time let me dive into the code straight away. Here are the changes to the controller.
private readonly IUserRepository _userRepository;
//private SessionHelper _sessionHelper;

public LoginController() : this (new UserRepository())
{}

public LoginController (IUserRepository userRepository)
{
    _userRepository = userRepository;
    //_sessionHelper = sessionHelper;
}


As can be seen, I have removed the dependency on SessionHelper and now inject the LoginController with just IUserRepository. Based on these changes there is a change in the LogOn action as well.
public ActionResult LogOn(FormCollection formCollection)
{
    string userName = formCollection["UserName"];
    string password = formCollection["Password"];

    bool validUser = _userRepository.ValidateUser(userName, password);

    if(validUser)
    {
        Session.Add("UserName",userName);
        return RedirectToAction("Index", "Home");
    }
    else
    {
        this.ModelState.AddModelError("InvalidUser", "Invalid user name or password");
        return View();   
    }
}

Note that I now use the HttpSession object directly in the controller action. As discussed in the earlier post, this causes problems while unit testing this particular action. But the TestControllerBuilder class from TestHelpers comes to the rescue: it takes responsibility for handling the intrinsic objects during unit testing, so we can use those intrinsic objects in our production code as usual. Before calling the controller action from the unit test we need to set up the intrinsic objects. This is handled by the InitializeController method of the TestControllerBuilder class, as shown in the code snippet below.
private Mock<IUserRepository> _mockUserRepository;
private TestControllerBuilder _builder;

private LoginController _loginController;

[SetUp]
public void SetUp()
{
    _mockUserRepository = new Mock<IUserRepository>();

    _builder = new TestControllerBuilder();

    _loginController = new LoginController(_mockUserRepository.Object);

    _builder.InitializeController(_loginController);
}
Note the creation of the TestControllerBuilder and the call to InitializeController above. This is all we need to do to set up the intrinsic objects for the controller action. Finally, in my test I can assert that the Session variable was set properly.
[Test]
public void ValidUserDetailsAreStoredinSession()
{
    //Arrange
    string userName = "Nilesh";
    string password = "abc@123";

    FormCollection formCollection = new FormCollection();
    formCollection["UserName"] = userName;
    formCollection["Password"] = password;

    _mockUserRepository.Setup(userRepository => userRepository.ValidateUser(userName, password))
    .Returns(true);

     //Act
     ActionResult result =_loginController.LogOn(formCollection);

     //Assert
     //Assert.That(result.RouteValues["controller"], Is.EqualTo("Home"));
     //Assert.That(result.RouteValues["action"], Is.EqualTo("Index"));
     result.AssertActionRedirect()
           .ToController("Home")
           .ToAction("Index");

     Assert.That(_loginController.Session["UserName"], Is.EqualTo(userName));
}
I have used the AssertActionRedirect extension method to assert that the Home controller's Index action is called. Finally I assert that the UserName session value is the same as what it was set to. This makes the code more readable and easier to understand than the earlier asserts, which are commented out above.

Conclusion

Using TestHelpers makes it very easy to unit test code that touches the intrinsic objects of ASP.NET MVC. The additional assert helpers also make the code more readable and easier to understand, and we no longer need to abstract all the intrinsic objects. The learning curve is also very small. We can also make use of these TestHelpers to write framework-agnostic asserts, which means we need not depend on the syntax of the unit testing framework being used to unit test the code. I am sure I'll come across situations in my project to demonstrate the other objects like HttpRequest, Form, QueryString etc. over the next few days.
Until next post happy programming :)

Note:

TestHelpers internally use the Rhino Mocks DLL. Make sure you copy that DLL, which resides in the dependencies folder after the MVCContrib files are extracted to disk, into the same directory as MVCContrib.TestHelpers.dll.
You can download the complete source code from dropbox - SessionStateUnitTest_WithMVCTestHelpers.zip.


Powertools - October 2009

I have seen many people list the tools they use quite often in their day-to-day life. Here is a small list of tools that I use quite often, both for software development work and for personal use. I like to call them powertools.

 

Development tools

Visual Studio 2008 Team System

Not much choice there. I am working on Microsoft technologies, and in my opinion VS 2008 is one of the best IDEs for C# development.

VS commands & Powertoys for VS 2008 

These powertoys and power commands add functionality on top of the already extensive feature set of VS 2008. I personally like the Collapse All, Open Containing Folder, Open Command Prompt, and Remove and Sort Usings commands. In many of our applications we have 10-15 projects in a solution, and navigating through the various files in each project can be cumbersome; Collapse All comes in handy in this case. The other commands are self-explanatory.

Source Code Outliner PowerToy

This VS 2008 extension provides a tree view of the source code. I have it installed but use it very rarely.

Sticky Notes

This add-in provides sticky notes functionality for projects and project items within Visual Studio 2008. It can be very handy for jotting down comments or hints.

Team Foundation Server Power Tools

These power tools allow check-in/check-out from Windows Explorer. They include many command-line extensions as well and require PowerShell to be installed as a prerequisite.

TestDriven.NET

TestDriven.NET is a wonderful add-in which comes bundled with NCoverExplorer. This add-in is used to run unit tests from within the VS 2003/2005/2008 IDE. I tend to use the coverage feature quite often to measure code coverage. The thing I like best about this add-in is that it supports multiple unit testing frameworks. I use the personal edition.

 

General Utilities and Tools

Compression utilities

One of the most common tasks we do when sharing personal as well as official files is compressing them. I used WinZip for a long time, but of late I have switched over to 7-Zip as it's free and, performance-wise, I feel it's better than WinZip. Apart from WinZip and 7-Zip I have also used WinRAR.

 

To do lists / reminders

I have tried different tools and methods for managing to-do lists, right from hand-written notes to Microsoft Outlook tasks. Recently I started using TodoList Resource, which has a very simple Windows user interface and is sufficient for my current requirements. If you are looking for a web-based to-do list manager you can have a look at Evernote or TODOIST.

 

File Comparison

I have an official laptop, a personal laptop, and two portable hard drives. Keeping data in sync on all these devices is problematic. I use Beyond Compare to compare files and folders.

 

File Synchronization

Microsoft's SyncToy offers good support for synchronizing the contents of two folders. Apart from that you can also try other utilities like Alwaysync.

 

Email Signature

I like to gather quotes from various sources, and I was looking for a utility to share these quotes with others as a footnote in my email signature. Qliner quotes offers a plugin which can be configured with multiple mail clients like Microsoft Outlook. It can be set to randomly pick quotes from different files and at different intervals, such as daily, weekly or hourly.

 

Bulk File Rename

Many a time we need to rename multiple files at once. Bulk File Rename is a very good, freely available utility with many options. You can preview the changes before applying the final settings.

 

Dictionary software

WordWeb is dictionary software which comes as both a Windows application and an online dictionary. If you are installing the Windows version, be careful while entering the number of air miles you have travelled or the number of flights you have taken. The free version has some unusual restriction related to air miles which I don't remember exactly, but if you enter a value above the limit you'll not be allowed to use the free software. It's best to enter the value as zero.

 

Online file sharing

Most email clients restrict the size of attachments. If there is a need to share a large file we can use an online file sharing service like Dropbox. I use this service for hosting the solution files which I refer to in my blog entries. It offers public as well as restricted sharing access to the files.

 

Calendar Sync

I use Microsoft Outlook as the mail client for accessing my Gmail account. I also have a habit of adding reminders to Google Calendar online. Google Calendar Sync provides a way of synchronizing the Outlook calendar with Google Calendar.

 

Desktop search

Google Desktop Search is my go-to tool for searching files on the PC.

 

Wallpaper changer

Another utility I love on my PC is the wallpaper changer which is a Windows XP power toy. I like to change the wallpaper image on my desktop periodically. I can do that very easily using this power toy.

 

Blog Writer

I haven't tried any blog writer other than Windows Live Writer. I use it to edit both my Windows Live and my Blogger blog entries. I like its simple user interface and ease of use.

 

PDF Writer / Printer

CutePDF Writer is a free tool which can convert any printable format into a PDF file. It installs a PDF printer which applications can use to convert their documents to PDF files.

 

Conclusion

I am sure people are using different utilities to achieve many of the common tasks mentioned above. I would love to try those and choose the best one. I hope this was helpful.


Unit Test Session State in ASP.NET MVC with Moq

Background

ASP.NET MVC is built with the intention of enabling developers to test-drive a web application using TDD. We can unit test the model as well as the controller code, and in most cases unit testing a controller action is quite straightforward. Although ASP.NET MVC discourages the use of server controls and state management techniques like ViewState, there are scenarios where we might need some sort of state management across different pages or views. I encountered one such scenario where my team was using a custom login instead of the built-in membership provider to validate users, and I had to display the currently logged-in user's information on every view.

Steps for unit testing SessionState in ASP.NET MVC

Let's first see where the problem lies. Assume I have a Login controller with an action called LogOn. This LogOn method is responsible for validating the user and redirecting to the appropriate view if successful. We should be able to display the user name on all subsequent pages. Let's build this sample using TDD.

I am going to use the Repository pattern to validate users against a user repository. In this example I am not really worried about how the repository is populated with user objects; in a real scenario it could be a call to the data access layer or to a service which provides a list of users to the repository. Many of the examples on the net show the repository being implemented with LINQ to SQL code. I will seed the repository with an initial set of user entities. UserEntity is the domain object for storing the different user properties.

I have created a separate project called Repositories in my solution to store all the repository and domain entity objects. You can also have these classes directly under the App_Data folder. I need only the user name and password in this example, so for the time being I'll build my UserEntity domain object with only those two properties (Name and Password). The UserEntity class looks like this:

public class UserEntity
{
    public string Name { get; set; }
    public string Password { get; set; }
}

Following the Agile practice of writing only the code that is required, let me create a UserRepository class with a ValidateUser method. This method will take a user name and password as input parameters and return true or false depending on whether the user exists in the repository. I want to inject this repository into the controller using dependency injection. In order to follow the best practice of programming to an interface rather than a concrete class, let's abstract this functionality into an interface called IUserRepository.

public interface IUserRepository
{
    bool ValidateUser(string userName, string password);
}

 

Let's create a test to verify that the ValidateUser method behaves as expected. I want to test both the positive and the negative scenario, hence I have used NUnit's initialization method, marked with the SetUp attribute, to initialize the instance of UserRepository.

private IUserRepository _userRepository;

[SetUp]
public void Setup()
{
    _userRepository = new UserRepository();
}

If we try to build at this stage we'll get a build error, because we don't have the UserRepository class implemented yet. Let's go ahead and do that. I would like to initialize the repository with one user.

public class UserRepository : IUserRepository
{
    private IList<UserEntity> _users;

    public UserRepository()
    {
        _users = new List<UserEntity>()
                     {
                         new UserEntity {Name = "Nilesh", Password = "abc@123"}
                     };
    }
}

Since I am implementing the IUserRepository interface I need to implement the ValidateUser method. Add the following method definition to the above class and build the solution.

public bool ValidateUser(string userName, string password)
{
    var validUser = _users.SingleOrDefault(user => user.Name == userName && user.Password == password);

    return !(validUser == null);
}

Now that the method is implemented, let's test both the positive and the negative scenario.

[Test]
public void ValidateUserShouldReturnTrueForValidUser()
{
    //Arrange
    string userName = "Nilesh";
    string password = "abc@123";

    //Act
    bool result = _userRepository.ValidateUser(userName, password);

    //Assert
    Assert.That(result, Is.EqualTo(true), "Result is false");
}

[Test]
public void ValidateUserShouldReturnFalseForInvalidUser()
{
    //Arrange
    string userName = "Nilesh";
    string password = "abcd";

    //Act
    bool result = _userRepository.ValidateUser(userName, password);

    //Assert
    Assert.That(result, Is.EqualTo(false), "Result is true");
}

I could have covered the negative scenario further with an additional test where the password matches but the user name is invalid. I'll leave that as an exercise for the reader.

Let's turn our attention back to the LoginController, starting with a test which expects the constructor to accept IUserRepository as a dependency.

[TestFixture]
public class LoginControllerUnitTests
{
    private Mock<IUserRepository> _mockUserRepository;
    private LoginController _loginController;

    [SetUp]
    public void SetUp()
    {
        _mockUserRepository = new Mock<IUserRepository>();

        _loginController = new LoginController(_mockUserRepository.Object);
    }
}

If I build the application now, the compiler complains because there is no constructor which takes IUserRepository as a parameter. Let's add an overloaded constructor which takes IUserRepository and have the default constructor pass in a concrete instance, as shown below.

private readonly IUserRepository _userRepository;

public LoginController() : this (new UserRepository())
{}

public LoginController (IUserRepository userRepository)
{
    _userRepository = userRepository;
}

 

Now the solution builds fine. In the test code above I have used the Moq framework to dynamically mock the IUserRepository interface. With IUserRepository injected into the LoginController, let's write a test which validates the user against the repository by calling the ValidateUser method. I'll need to collect the user details from the form collection which is passed from the view to the LogOn action method. If the user is valid we'll redirect to the Home controller's Index view; if the credentials are invalid, we'll show an error message by updating the ModelState of the controller.

[Test]
public void ValidUserDetailsAreStoredinSession()
{
    //Arrange
    string userName = "Nilesh";
    string password = "abc@123";

    FormCollection formCollection = new FormCollection();
    formCollection["userName"] = userName;
    formCollection["password"] = password;

    _mockUserRepository.Setup(userRepository => userRepository.ValidateUser(userName, password))
        .Returns(true);

    //Act
    RedirectToRouteResult result = _loginController.LogOn(formCollection) as RedirectToRouteResult;

    //Assert
    Assert.That(result.RouteValues["controller"], Is.EqualTo("Home"));
    Assert.That(result.RouteValues["action"], Is.EqualTo("Index"));
}

If there is no definition of the LogOn method, compilation will fail. Let me add the LogOn action as follows:

public ActionResult LogOn(FormCollection formCollection)
{
   return View();
}

In the test code I have populated the form collection with the required values and set an expectation on ValidateUser to return true, meaning the user is valid. After that I assert that the user is redirected to the Index view of the Home controller. Let's run this test and see the result. I get the following error:

Moq.MockVerificationException: The following setups were not matched:
IUserRepository userRepository => userRepository.ValidateUser("Nilesh", "abc@123") (ValidUserDetailsAreStoredinSession() in LoginControllerUnitTests.cs

This can be rectified by calling the respective method on the repository, but for that we'll need to read the values from the form collection. Let's implement that in our LogOn action. Also, if the user is valid, we'll redirect to the Home controller's Index action.

public ActionResult LogOn(FormCollection formCollection)
{
    string userName = formCollection["UserName"];
    string password = formCollection["Password"];

    bool validUser = _userRepository.ValidateUser(userName, password);

    if(validUser)
    {
        return RedirectToAction("Index", "Home");
    }
    else
    {
        return View();
    }
}

 

These changes make the test pass. At this point I want to store the name of the logged-in user in a Session variable so that I can use it on all the pages. Let's add the value to Session and see what happens when we rerun the test. I'll add the following line just before the RedirectToAction call within the if block.

Session["UserName"] = userName;

 

Rerun the test and check the impact of this change. Immediately I get a null reference exception: "System.NullReferenceException: Object reference not set to an instance of an object."

The reason for this is that we are trying to use an intrinsic object (Session) in our LogOn action. If we make use of intrinsic objects like HttpSession, HttpRequest, HttpResponse etc. within our code, it becomes tricky to test that code. One method of overcoming this is to abstract the functionality into a wrapper class. This wrapper class can be injected into the controller and can be replaced with a mock object during unit testing. I am going to do the same with the Session object: let's wrap it in a class called SessionHelper with two methods, Add and Get.

public class SessionHelper
{
    // The methods are virtual so that mocking frameworks like Moq can override them.
    public virtual void Add(string sessionKey, object sessionValue)
    {
        HttpContext.Current.Session.Add(sessionKey, sessionValue);
    }

    public virtual object Get(string sessionKey)
    {
        return HttpContext.Current.Session[sessionKey];
    }
}

 

Once we have this class we can inject it into the LoginController and make the required changes to the LogOn action as well as the unit test method. Instead of showing the complete test and action methods I'll show only the changes; for the complete code you can download the sample linked at the end of this post.

_mockSessionHelper.Setup(sessionHelper => sessionHelper.Add("UserName", userName));

_sessionHelper.Add("UserName", userName);

 

The first line needs to be added to the test method and the second one to the LogOn action. We'll also need to declare these two variables in the respective classes, and the dependency injection of SessionHelper needs to be taken care of. I have added all these steps to the sample code which is attached at the end of this post.

ASP.NET MVC also supports sharing data between successive requests by means of a data structure called TempData. The limitation of this approach is that it can share data only between two successive requests.
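For completeness, here is a minimal sketch of that alternative; the controller and action names are purely illustrative:

using System.Web.Mvc;

public class MessageController : Controller
{
    public ActionResult Save()
    {
        // TempData survives the redirect that follows...
        TempData["Message"] = "Saved successfully";
        return RedirectToAction("Confirm");
    }

    public ActionResult Confirm()
    {
        // ...and is available in this very next request, but not beyond it.
        ViewData["Message"] = TempData["Message"];
        return View();
    }
}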

Conclusion

My intention here was to show how we can unit test controller actions involving session state. There are alternatives to this approach: we can make use of TestHelpers from MVCContrib, which can mock any of the framework classes like HttpContext, Session, Request, Response etc. I haven't gone into the details of creating the views for this sample as I wanted to concentrate on unit testing the controller. In some future post I might demonstrate the views for this sample.

You can download the code from my dropbox.

If you are having problems accessing the link you could also try pasting the url http://dl.getdropbox.com/u/1569964/SessionStateUnitTest.zip in your browser.


ASP.NET MVC Sample Applications

Introduction

Supporting any new technology with end-to-end samples is quite essential for understanding the concepts behind it. I have been working with ASP.NET MVC of late, and while going through various blogs and learning resources I found some good examples. I think it's helpful if we get samples related to different domains and scenarios. Here is a small list of samples which I found useful. Some of these samples can be found at the ASP.NET MVC site http://www.asp.net/mvc/learn/

 

MVC Sample Applications

Store Front -

Rob Conery came up with a wonderful series which demonstrated building an online store application called Store Front. In his series of screencasts Rob demonstrated various concepts, right from basic ASP.NET MVC to helpers and routing, pipes and filters, dependency injection, TDD, an implementation of the repository pattern, Domain Driven Design (DDD), globalization, Dynamic Data, Entity Framework, etc.

The whole application was built bit by bit, and I felt every blog post was worth reading even if you had prior knowledge of the concepts Rob explains. You can download the latest code of the application at http://www.codeplex.com/mvcsamples/SourceControl/ListDownloadableCommits.aspx

 

Nerd dinner

This application was built by a team which includes people like Scott Guthrie, Phil Haack, Rob Conery and Scott Hanselman. It is also covered in the book Professional ASP.NET MVC 1.0 and is an open source project hosted on CodePlex.

You can also find it live at www.nerddinner.com. There are plans to add monthly features to the existing site, such as OpenID support, use of the Virtual Earth APIs, RSS feeds for all pages, etc. You can download a walkthrough of this application in PDF format from http://tinyurl.com/aspnetmvc.

 

CodeCampServer

This is another open source application, written by Jeffrey Palermo, the author of the ASP.NET MVC in Action book. The book has many references to CodeCampServer. A working example of CodeCampServer can be found at the Austin .NET Users Group. This application uses NHibernate as its OR mapper and includes extensive unit tests. You can download the code for this application from http://code.google.com/p/codecampserver/

 

Suteki Shop E-Commerce

Suteki Shop is a fully featured self-service eCommerce application. The latest code for Suteki can be downloaded from http://code.google.com/p/sutekishop/. Some of the tools used in this application are MVCContrib, the Windsor IoC container and Rhino Mocks.

 

CarTrackr

This application helps users measure fuel usage and kilometers driven, and uses concepts like dependency injection and the repository pattern. It is designed and developed by Maarten Balliauw. You can catch a glimpse of it at http://cartrackr.codeplex.com/

 

FlickrXplorer

FlickrXplorer is very interesting to me. It uses lots of things like a custom LINQ provider, jQuery and ASP.NET MVC, and it also uses cloud computing. It offers users a fast photo explorer and search tool to browse millions of photos in Flickr. You can find the details about this application at http://flickrxplorer.codeplex.com/

 

Oxite

Oxite is a blog engine. It has very good support for essentials like the MetaWebLog API, which can be used with tools like Live Writer to manage blogs, trackbacks, sitemaps for better search engine optimization (SEO), RSS and ATOM feeds, web admin features, tags, etc. The Oxite code can be downloaded from http://www.codeplex.com/oxite

 

Conclusion

Each of these samples has its own advantages. I wouldn't like to go too much into the disadvantages, as my main intention at this point is to get familiar with the ASP.NET MVC technology. If a step-by-step approach is needed, I would suggest following Store Front or Nerd Dinner. If you would like to use a mix of MVC and open source tools like jQuery, FlickrXplorer might be a good option. If unit testing is the focus, I would suggest having a look at CodeCampServer or Suteki Shop. Each application has a purpose of its own and highlights one or more features.

I would like to know if there are any more good samples apart from these listed here.

 


Tips for improving traffic to blog

I have been blogging for quite some time now. One thing I observed was that the traffic to my blog was very low, and I never gave it any attention. But of late I thought of getting a bit serious about blogging. While looking for options to improve traffic to the blog, I came across a wonderful series of posts demonstrating 10 different ways of improving traffic to a blog.

I use Blogger as my blogging service. Here is a series which I found very useful for customizing the blog; I have tried many of the steps mentioned in it.

http://blogknowhow.blogspot.com/2009/03/10-tips-to-build-traffic-for-blogspot.html

 

Some of these steps were simple, like providing links for readers to subscribe via RSS. Another good suggestion was to submit the blog to search engines like Google, MSN and Yahoo. Adding the blog to blog directories is also beneficial.

Some of these activities can be time consuming; it took me almost a day's effort to accomplish this. I hope this step will help me attract some more readers.


ASP.NET MVC 2 preview 2 released

I am currently working on the ASP.NET MVC 1.0 framework. My team is involved in developing a web UI for a localized application. Our team was lucky to have the freedom to choose the technology stack that would suit our requirements, and we decided to use ASP.NET MVC so that we could test-drive the project work using TDD. Earlier I had worked on a web application built with ASP.NET 2.0 and an MVP implementation, and I felt it wasn't very TDD-friendly. Since ASP.NET MVC 1.0 offers very good support for TDD we decided to try it out.

With the little I have worked with MVC 1.0, I felt that on its own it had a few limitations, such as client-side validation and support for common functionality like grid controls which an ASP.NET developer is well versed with. Many of these limitations are addressed by the MVCContrib community project. I am still getting to know the finer nuances of the MVC 1.0 framework, but the next version, MVC 2.0, has been released as Preview 2. From what I have read it seems to be improving all the time, and I feel it will still take a while to be stable and more usable. But it is headed in the right direction, and the community is adding great value for the things which might be missing right now.

Another advantage I see with the MVC framework is that it is bundled by default with jQuery. Since jQuery is a widely accepted standard, life becomes easier. From Phil Haack's blog I read that some of these limitations have been taken care of in Preview 2 of ASP.NET MVC 2, and I look forward to trying those things.

 

Here are a few articles which highlight the features of ASP.NET MVC 2:

http://haacked.com/archive/2009/10/01/asp.net-mvc-preview-2-released.aspx - Phil Haack

http://weblogs.asp.net/scottgu/archive/2009/07/31/asp-net-mvc-v2-preview-1-released.aspx - Scott Guthrie

http://dotnetslackers.com/articles/aspnet/A-First-Look-at-ASP-NET-MVC-2.aspx - Ben Scheirman

 


.Net Mocking Frameworks comparison

Introduction

As I mentioned in the last post, I have been working with Agile methodologies for more than 18 months now. During this time I have been involved in project development using TDD. As part of writing and executing automated unit tests, our team has been using dynamic mocking frameworks like NMock 2 and Rhino Mocks. Here is an attempt to compare the different mocking frameworks I have worked with.

Different Dynamic Mocks

In simple terms, a mock is an object which acts as a proxy for some real-world object: a dummy object which is used to mimic the real one. Every time I try to explain the usage of mocks to someone who is new to TDD, the first question that pops up is: why do we need mock objects?

1. One of the basic reasons for having mock objects is to follow the unit testing principle which says that we should test the System Under Test (SUT) in isolation. If we follow this principle, only the logic related to the class or method under test should be exercised by a unit test, and we shouldn't really be worried about the dependencies of that class; all the dependencies should be tested separately. In order to manage the dependencies we use mock objects: we can replace the dependencies with fake objects or mocks.

2. Another reason we use mock objects is to improve the performance of executing the unit tests. Constructing the real objects or dependencies might be time-consuming.

3. This also helps us in removing dependencies like external web services or a physical database server. We can mimic these behaviours using mock objects as long as the mocks implement the same interface.

4. An alternative to using dynamic mock objects is to create hand-written fake objects, also called static mocks (see the sketch below). The problem I see with this approach is that every time there is a change to the service contract or the interface which the fake implements, the hard-coded fake needs to be updated as well. This results in a maintenance problem.
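As an illustration, here is a minimal sketch of a hand-written fake compared to a dynamic Moq mock; the ICurrencyService interface below is an assumption modelled on the NMock quickstart sample referenced later in this post:

using Moq;

// A hypothetical service contract, modelled on the NMock quickstart sample.
public interface ICurrencyService
{
    double GetConversionRate(string fromCurrency, string toCurrency);
}

// Hand-written fake ("static mock"): must be edited whenever the interface changes.
public class FakeCurrencyService : ICurrencyService
{
    public double GetConversionRate(string fromCurrency, string toCurrency)
    {
        return 2.2; // canned value for the tests
    }
}

public class DynamicMockSample
{
    public ICurrencyService CreateDynamicMock()
    {
        // Dynamic mock: the framework generates the implementation at runtime,
        // so an interface change only breaks the setups that actually use it.
        var mockCurrencyService = new Mock<ICurrencyService>();
        mockCurrencyService.Setup(service => service.GetConversionRate("GBP", "CAD")).Returns(2.2);
        return mockCurrencyService.Object;
    }
}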

Initially my team was using NMock 2 for mocking. It has some nice features, like a very fluent interface for setting expectations and different types of matchers. The problem I feel with NMock 2 is that it uses magic strings for the names of methods and properties, which does not provide much type safety. In Rhino Mocks terms, matchers are referred to as argument constraints.

Type safety is the biggest advantage Rhino Mocks has over NMock 2. Apart from type safety, Rhino Mocks also has a record-and-playback option, and it can mock concrete as well as abstract classes. Rhino Mocks has three types of mocks: strict, partial and dynamic. It also has the facility to set expectations in ordered and unordered sequences.

Compared to these two commonly used dynamic mocking frameworks, Moq is relatively new. It has the distinction of being the only such library for .NET developed from scratch using the features of C# 3.0 and the .NET 3.5 framework. It relies heavily on LINQ and lambda expressions, so knowledge and experience of these C# features is a must if someone wants to try Moq as their preferred framework.

Instead of creating something from scratch I decided to use the hypothetical sample published on the NMock site as a quickstart. You can read more about it at http://www.nmock.org/quickstart.html. I have made minor modifications to the test code to follow the AAA syntax, and have attached the solution code at the end of the article. Instead of going into the details of how to create a mock repository and explaining each and every method, I would like to highlight the main differences in the way expectations are set on the mock instance in these three frameworks.

NMock2

Expect.Once.On(_mockCurrencyService).
    Method("GetConversionRate").
    With("GBP", "CAD").
    Will(Return.Value(conversionRate));

Rhino Mocks

Expect.Call(_mockCurrencyService.GetConversionRate(britishAccount.Currency, canadianAccount.Currency)).
    Return(conversionRate);

Moq

_mockCurrencyService.Setup(
    mockCurrencyService =>
        mockCurrencyService.GetConversionRate(britishAccount.Currency, canadianAccount.Currency)
    ).Returns(conversionRate);

 

As we can see from the above statements, Rhino Mocks and Moq are quite similar in the way the expectations are set on the mock currency service. In the case of NMock2, however, we need to specify explicitly the method name, the parameters and the return value. In one way the syntax is quite intuitive, but it lacks type safety. Also, I feel that having to type something like

Will(Return.Value(conversionRate))

is a bit too much just to set a return value.

Conclusion

In some future post I would like to highlight how to verify constraints and also do argument matching using matchers. For the time being I feel Rhino Mocks has advantages over NMock 2. I haven't worked much with Moq, so I am really not in a position to say which is better between Rhino Mocks and Moq; both might be used in different contexts. If a completely new development on the .NET 3.5 framework is being started, Moq seems to be a good option.

This article wouldn't be complete without mentioning the difference between a mock and a stub. This is well documented by Martin Fowler in his post Mocks Aren't Stubs. At a very high level, I would say you use mocks to test interactions between different layers or classes of your application, whereas stubs are used to stub out some behaviour. Using a stub will not fail the unit test, but mocks can cause tests to fail if all the interactions are not performed as per the expectations, as the sketch below illustrates.
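Here is a minimal sketch of that distinction using Moq and the hypothetical ICurrencyService from the earlier sketch: the stubbed object merely supplies data, while the Verify call turns the second object into a mock that fails the test if the interaction never happened.

[Test]
public void StubVersusMockSketch()
{
    // Stub usage: just provide a canned answer; the test never checks the call itself.
    var stubbedService = new Mock<ICurrencyService>();
    stubbedService.Setup(service => service.GetConversionRate("GBP", "CAD")).Returns(2.2);

    // Mock usage: set up the same canned answer...
    var mockedService = new Mock<ICurrencyService>();
    mockedService.Setup(service => service.GetConversionRate("GBP", "CAD")).Returns(2.2);

    // ...exercise the code under test (a stand-in call is shown here for brevity)...
    double rate = mockedService.Object.GetConversionRate("GBP", "CAD");

    // ...and fail the test if the expected interaction never took place.
    mockedService.Verify(service => service.GetConversionRate("GBP", "CAD"), Times.Once());
}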

You can download the complete source code from this location. If you are facing problems downloading the code from the link, you could also try pasting the following URL in your browser:

http://dl.getdropbox.com/u/1569964/MockingComparison.zip

 


Comparing Unit Testing Framework

Introduction

I have been doing unit testing for the past 18 months or so and have been reaping the benefits of it. I would suggest every developer adopt unit testing if they have not done so already. Because we follow Agile in our development activities it is mandatory for us to unit test our code, but even otherwise I feel it is a disciplined way to test your code. I have personally used both the NUnit and MS Test frameworks for unit testing and would like to highlight some differences in this post.

Unit Testing Framework comparison

Recently I conducted a training session for people in my organization who were interested in implementing unit testing in their future projects. One piece of feedback I had received during my previous training program was that I should have used NUnit for the demo instead of MS Test, because people felt that unit testing without NUnit was a crime. Just kidding: it was just that NUnit has become so synonymous with unit testing that people think it's not worth attending a training session if you are not going to demo NUnit.

I personally don't find much difference between NUnit and MS Test. Both use similar concepts, with the exception of the attribute names. Following are some of the attributes which I have used in unit tests written with these frameworks:

NUnit          MS Test
SetUp          TestInitialize
TearDown       TestCleanup
TestFixture    TestClass
Test           TestMethod


Many purists feel that Microsoft has copied the NUnit framework and added their own attributes. The main advantage MS Test has over NUnit is that it integrates nicely into the Visual Studio 2005 / 2008 IDE, and it also provides the additional feature of code coverage. A jazzy UI is a good way to impress people into using MS Test.

Another advantage I find with MS Test is the generic syntax for the asserts. We can specify the data type we want to check when using methods like AreEqual, so instead of comparing plain objects we can compare the exact type, for example: Assert.AreEqual<string>(expected, actual, "Expected and actual values are not the same");


One thing which I find irritating with MS Test, though, is the speed of execution of the unit tests; it is comparatively very slow. Just to measure the performance, I ran the same set of tests with both MS Test and NUnit, and to my surprise NUnit was at least 3-4 times faster than MS Test. I found a link which explains why that is so.

Conclusion

Given that NUnit is open source, there is a better chance of it being updated promptly with the latest changes to the frameworks and related tools, whereas with MS Test being part of Visual Studio, we need to wait for major releases to get updates. To make up for MS Test's built-in code coverage feature, we could use the TestDriven.NET add-in for Visual Studio, which provides code coverage with the help of NCover. Recent versions of NUnit also support a fluent interface which allows constraint checking in a more readable format, helping developers write tests, and especially asserts, in a more natural way (see the sketch below).
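As a quick illustration of that constraint-based syntax, here is a minimal sketch (the values are made up and not tied to the attached solution):

[Test]
public void FluentAssertSketch()
{
    int balance = 5;
    string currency = "GBP";

    // Classic model
    Assert.AreEqual(5, balance);
    Assert.IsNotNull(currency);

    // Constraint-based (fluent) model reads closer to plain English
    Assert.That(balance, Is.EqualTo(5));
    Assert.That(currency, Is.Not.Null);
    Assert.That(currency, Is.EqualTo("GBP"));
}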

Along with the two frameworks I have compared here, there is also a recent addition to this list in the form of XUnit. The learning curve is slightly higher in this case, as it doesn't follow the same approach to unit testing as NUnit or MS Test. It is still in its nascent stages, but someone wanting to test pure .NET code can opt to use it, provided there is enough time available to learn the new syntax and concepts like Facts.

BankAccount.zip is the solution file which I used to compare MS Test and NUnit.


Do you Spike?

It's been close to a year and a half since my team started working on projects developed using Agile methodologies, and it has been a great experience. In my current project we have hired some consultants from ThoughtWorks to help us improve our processes.

This is the first time we are implementing a project using Agile right from scratch. We are planning to use various new technologies for the development, and as part of the up-skilling process we are trying out a sample project. In Agile terms, it seems, there is a name for this: a spike.

The official definition of a spike is a very simple program to explore potential solutions to a technical problem. The idea is to put a pair of developers on the task and reduce the risk, which helps in estimating the user story in a reliable way.

This is also the first time we are experimenting with the Agile estimation technique which involves the whole team in estimating each user story. Traditionally, estimation has been done at a very high level by experienced people, which might include the project manager, tech lead and/or a senior developer. I personally feel Agile estimation helps every member of the team get a rough estimate of each user story. Every member gives an estimate, and the Scrum master takes into account an estimate which is acceptable to everyone in the team. If there is a difference in opinion about the effort required to complete a story, a healthy discussion is held as to why a certain team member feels it should take more or less time than another person thinks.

The way we estimated a user story was based on its acceptance criteria. We were estimating for a web-based user interface system, and we decided to estimate a story on the following scale:

  • 1 - if the task can be completed within a day
  • 2 - if the task can be completed in just over a day
  • 3 - if the task can be completed in 2-3 days, and
  • 5 - if the story is too big and needs to be broken down into sub-stories, or we might not have enough clarity and need more information about it.

The biggest advantage I see in this approach is that everyone's view is taken into consideration, and it reduces the risk of one person or another influencing the estimates. It also gives the team an overall idea of all the work involved in developing the product, makes the team more self-reliant, and reduces dependencies on individuals. It definitely helps with collective ownership of the project.
