Sunday, February 10, 2008

Windows Workflow Integration Testing - Part I



And now for my first "official" post. In this post we are going to take a look at integration testing with a bunch of technologies, in particular Windows Workflow Foundation, MSTest, and a little Rhino Mocks. In my next post I may get into NCover as well, but that is too much to cover at this point.

To start, why do I use integration testing and not unit testing? Recently, both at work and among many professionals and hobbyists, I've seen a lot of people distinguishing between the two, which I wholeheartedly agree with. I think a good testing strategy is a combination of both, along with a number of other types of testing.

Unit testing a workflow activity doesn't really make sense. After all, a workflow represents some series of activities executing at specified times or in response to events external to the system. A powerful tool, though, is the ability to repeat the tests that the workflow will go through in an automated fashion. This post covers how I approached the situation and some of the reasoning behind it.

The workflow we are using is a rather simple sequential workflow, but more complex than the ones we see in a lot of demos and tutorials. We use a DelayActivity, a couple of CallExternalMethodActivity instances, a ListenActivity, a HandleExternalEventActivity, a PolicyActivity, and a few loops. Something a little closer to what we would see in the real world, as well as not being related to an order. ;) Below is a screenshot of the workflow.



What does it do?

This is the first workflow from my new project on CodePlex, called YakShaver.NET. The purpose of this workflow is to take an item submitted from YaKapture (a screen capture and work item entry user control) and run it through an IAnalysisService, which maps context information (user, page) to work item tracker data such as the Component and Release in CodePlex and/or TFS. Let's walk through it from a high-level perspective.

1) A consumer service (the YaKapture screen capture/work item entry user control) submits an instance of IWorkItemDataContract. This in turn is passed into the workflow when it is created.
2) A CallExternalMethodActivity is used to call IAnalysisService.AnalyzeSubmittedWorkItem, which looks at the data contract and returns an instance of YakShaverWorkItem. I'm not sure if I want to do this or just use the same data contract throughout the entire workflow. For now, they're separate.
3) From the results of this service, we run a policy over the data to determine if the item needs intervention, meaning that someone must come in and assign one or all of the required items for a CodePlex issue: Component, Release, Work Item Type, Impact Type, Title, and Description. The goal is to infer as much data as possible from information quietly passed from YaKapture along with the user data submitted. For example, anything with the path /qabuild1_2_07 is known to be associated with a release called QA Build 1.2.07.
4) If intervention is needed, we call out to a notification service (INotificationService), which can implement any method it wants to notify someone to intervene on the item so it can go into the work item tracker (or, if need be, to the help desk).
5) A Listen tree is implemented, with a DelayActivity on one side (you can pass in the delay between notifications through the parameters passed at CreateWorkflow) and a HandleExternalEventActivity on the other. At this point the workflow will continue to notify the service until the item is handled. A more realistic approach in the future may be to set up some sort of approval tree in the notification service, but that's unnecessary for now.
6) When the external event is handled, the workflow sets the intervened flag to true, ending the inner while loop. This takes us back to the top of the workflow, where we re-run our analysis service and policy, again determining whether we need intervention.
7) When everything is finally taken care of, we call IWorkItemProviderService.CreateWorkItem, which creates the work item in the issue tracker. At this point the outer while loop knows to break, and the workflow completes normally.
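To make step 3 a little more concrete, here is a minimal sketch of the kind of inference rule the policy might encode. The class and method names here are invented for illustration, not code from the project; only the path convention (/qabuild1_2_07 maps to "QA Build 1.2.07") comes from the example above.

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical sketch of a path-to-release inference rule, similar in
// spirit to what the analysis policy does. Names are illustrative only.
public static class ReleaseInference
{
    public static string InferReleaseFromPath(string path)
    {
        // Match paths like /qabuild1_2_07 and pull out the version parts.
        Match m = Regex.Match(path, @"/qabuild(\d+)_(\d+)_(\d+)", RegexOptions.IgnoreCase);
        if (!m.Success)
            return null; // nothing could be inferred; intervention may be needed

        return string.Format("QA Build {0}.{1}.{2}",
            m.Groups[1].Value, m.Groups[2].Value, m.Groups[3].Value);
    }
}
```

A rule like this is what lets the policy quietly fill in the Release field so a human only has to intervene when inference fails.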

We could incorporate this workflow into a larger workflow, or perhaps into a state machine; that is one of the beauties of WWF. At least I think so.

Now we know what the workflow does, how do we test it?

There are three integration tests that I have set up:

1) Test that the notification service is called and the data contract data is read.
2) Test that when notified, a user can resubmit the data contract with updated information, and that when they do, the work item creation process is called and the workflow completes.
3) Test that when a notified user does not respond within a certain time, the notification is sent again, maintaining a count of the number of times the service has been called.

WWF is surprisingly easy to set up for integration testing within any testing framework, whether that is NUnit, MbUnit, or MSTest. I am using MSTest in this example, but an NUnit conversion would be minimal effort.

Let's look at the first test, which checks that the notification service is called and the data contract is read. A delay activity keeps the workflow from going into hyperdrive and repeating the loop at a ridiculous rate. Plus, since we are just attempting to test the workflow, we want to keep as many things constant as possible in order to reduce the number of failure points in the test (especially important as we regression test).

I've divided my services into pretty granular pieces, each having one method and maybe one event (not all of the interfaces had events, at least at the time of this writing). This provides a couple of advantages:

1) Being granular narrows the points of failure in my process, and it also lets me toy with code both in and out of process and more easily measure code coverage (at least I think so).
2) I can independently mock each service to test how each one performs under conditions independent of other factors (i.e. other services), including the workflow process itself.

There are a few things to consider when testing workflows (I love lists):

1) Testing requires the WorkflowRuntime to be instantiated. Surprisingly, I found you can host multiple runtimes in a process (at least you can start multiple ones; something to explore in a later post...). This part is as simple as creating an instance and starting the runtime.
2) A lot of people (and I was certainly one of them) think that communication is as simple as calling certain .NET classes. It is close to that, but there is a communication layer within workflow that has to be considered. The simplest way to communicate with the hosting application (i.e. the testing host) is using the ExternalEvent and ExternalMethod activities. However, this requires having an interface defined for communication, then implementing a service and those activities. A bit much for testing, with the possibility of having to rip it all out for production. Say hello to tracking services!
3) Versioning: this can cause oddities if you test a workflow, modify it, and run the new version again without changing the version number. I find it easiest to simply delete all the relevant data in the tracking and persistence databases, or to redeploy the database script on each test run (effective, but it can get out of control if you don't manage it).
4) WorkflowMonitor: this comes with the SDK and is probably one of your best visual test coverage tools; it puts little check marks over every activity that gets called. For fun, you could extend the application to count the number of times each activity was called (useful inside loops).
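To make point 2 concrete, here is a minimal, framework-independent sketch of what such a communication contract might look like. All names here are invented for illustration; in a real WF project the interface would be marked [ExternalDataExchange] and the event args would derive from ExternalDataEventArgs, which I've noted in comments so the sketch stays self-contained.

```csharp
using System;

// Hypothetical event args for host-to-workflow communication.
// WF: would derive from ExternalDataEventArgs and be [Serializable].
public class InterventionEventArgs : EventArgs
{
    public Guid InstanceId { get; private set; }
    public InterventionEventArgs(Guid instanceId) { InstanceId = instanceId; }
}

// Hypothetical contract. WF: would be marked [ExternalDataExchange] so the
// runtime can route method calls and events across the boundary.
public interface IInterventionContract
{
    // Workflow -> host, wired to a CallExternalMethodActivity.
    void NotifyForIntervention(Guid instanceId);

    // Host -> workflow, wired to a HandleExternalEventActivity.
    event EventHandler<InterventionEventArgs> Intervened;
}

// Minimal host-side stub showing how the event would be raised.
public class InterventionStub : IInterventionContract
{
    public event EventHandler<InterventionEventArgs> Intervened;
    public Guid LastNotified { get; private set; }

    public void NotifyForIntervention(Guid instanceId) { LastNotified = instanceId; }

    public void RaiseIntervened(Guid instanceId)
    {
        EventHandler<InterventionEventArgs> handler = Intervened;
        if (handler != null) handler(this, new InterventionEventArgs(instanceId));
    }
}
```

The point is that the interface, not the concrete class, is what the workflow's activities bind to, which is exactly what makes swapping in test stubs so easy.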

With that out of the way, let's get started.

First, I need to set up the workflow runtime. Using a unit test template from MSTest, I uncomment the [ClassInitialize] method. Because I am using events and methods to communicate, I also need to add a data exchange service.


    [ClassInitialize()]
    public static void MyClassInitialize(TestContext testContext)
    {
        sqlTrackingConnectionString = "Data Source=.\\SQLExpress;Initial Catalog=WFTracking;integrated security=true;";
        sqlPersistenceConnectionString = "Data Source=.\\SQLExpress;Initial Catalog=WFPersistence;integrated security=true;";

        // recreate the tracking db

        // recreate the persistence db

        // initialize the runtime
        workflowRuntime = new WorkflowRuntime();
        dataExchangeService = new ExternalDataExchangeService();
        workflowRuntime.AddService(dataExchangeService);

        SqlTrackingService sqlTrackingService = new SqlTrackingService(sqlTrackingConnectionString);
        SqlWorkflowPersistenceService sqlPersistenceService = new SqlWorkflowPersistenceService(sqlPersistenceConnectionString);

        workflowRuntime.AddService(sqlTrackingService);
        workflowRuntime.AddService(sqlPersistenceService);

        workflowRuntime.StartRuntime();
    }


In the example above I've added a tracking service and a persistence service as well as the data exchange service. These are default services that come with WF, and they work perfectly for our testing situation.

This method fires only once, when the test run starts. We can remove all of these services and shut down the workflow runtime in another method marked [ClassCleanup]. Shutting down can be somewhat important because it persists your workflows, which is a nice thing when you're trying to debug what exactly is happening.

Finally a test!



   1:  [TestMethod, TestProperty("Category", @"Workflow\Integration\NotificationServiceTests")]
   2:  public void TestNotifyForIntervention()
   3:  {
   4:      INotificationDummyService notificationService = new INotificationDummyService();
   5:      IAnalysisDummyService analysisService = new IAnalysisDummyService();
   6:      AutoResetEvent wfIdledEvent = new AutoResetEvent(false);
   7:      Dictionary<string, object> namedParameters = new Dictionary<string, object>();
   8:      IWorkItemDataContract dataContract = new WorkItemDataContractDummyObject();
   9:
  10:      dataExchangeService.AddService(notificationService);
  11:      dataExchangeService.AddService(analysisService);
  12:
  13:      namedParameters.Add("WorkItemDataContract", dataContract);
  14:
  15:      WorkflowInstance workflowInstance = workflowRuntime.CreateWorkflow(typeof(WorkItemEntryWorkflow), namedParameters);
  16:
  17:      // attach events if I want.
  18:
  19:      workflowRuntime.WorkflowPersisted += delegate(object sender, WorkflowEventArgs args)
  20:      {
  21:          if (args.WorkflowInstance.InstanceId == workflowInstance.InstanceId)
  22:              wfIdledEvent.Set();
  23:      };
  24:
  25:      workflowInstance.Start();
  26:
  27:      wfIdledEvent.WaitOne(new TimeSpan(0, 0, 60), true);
  28:      // grab the latest from the persistence store; if it isn't found, an exception is thrown
  29:      workflowInstance = workflowRuntime.GetWorkflow(workflowInstance.InstanceId);
  30:
  31:      Assert.IsTrue(notificationService.NotifyForInterventionMethodCallCount > 0, "Notification Service did not return that it sent notification");
  32:  }

Let's start by looking at a few key lines. First, lines 4 and 5. We declare two services with the naming convention DummyService. These are pretty much exactly what they sound like: stubbed implementations of a number of interfaces that could exist. We use dummy services in this case, again, to reduce the points of failure in our testing scenario. We'll have a chance later on to plug in "real" services to see how they respond when integrated into the workflow.

If we look at these implementations, we see they are not much different from something we could do with a mocking framework. However, I didn't have much experience with mocking frameworks, and I was having trouble because of the need for the SerializableAttribute. Not as pretty, but it accomplishes the same thing. Instead of verifying mock expectations, we assert that a method was called (proof the workflow is doing what we expect it to).
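As a rough sketch of what such a hand-rolled dummy looks like (the real implementations live in the YakShaver.NET source; the names here just mirror the pattern described in the text), the whole job of the stub is to record that it was called so the test can assert on the count:

```csharp
using System;

// Illustrative dummy notification service. Marked [Serializable] because
// data crossing the workflow data-exchange boundary needs to be
// serializable, which is the issue the text mentions with mock frameworks.
[Serializable]
public class NotificationDummyService
{
    private int notifyForInterventionMethodCallCount;

    public int NotifyForInterventionMethodCallCount
    {
        get { return notifyForInterventionMethodCallCount; }
    }

    // Called by the workflow through a CallExternalMethodActivity.
    public void NotifyForIntervention(object workItem)
    {
        notifyForInterventionMethodCallCount++;
        // A real notification service would e-mail someone here; the dummy
        // only needs to prove that the workflow made the call.
    }
}
```

Asserting on the call count is the dummy-service equivalent of verifying a mock expectation, just done by hand.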

Line 8 instantiates a dummy instance of IWorkItemDataContract, which is used in multiple places in our application. I shouldn't really say "dummy" object in this case, because it does have some intelligence. By that I mean that depending on how many times a method is called, it responds with different output to aid the workflow in later tests. I realize I could keep this in yet another dummy class; however, I just don't see the need right now.

On line 13 we add the semi-smart data contract to the parameter collection that we are going to pass into the start of the workflow. One of the important things to remember about WWF is that there are specific ways to communicate with the instances in the runtime. I won't go into detail in this post, but using a Dictionary is an easy way to set any public property on the workflow class. We need to do this because our workflow demands the data almost immediately for the analysis service.

Some may notice I'm using an anonymous method on the workflow persisted event. This checks whether the workflow that was persisted is the one we created, and if so, triggers an AutoResetEvent. Why persisted? It certainly isn't necessary, and it isn't possible without adding a persistence service to the runtime. Idled would work just as well, but I want to make sure that I can get the workflow back from the persistence store, which I do on line 29.
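Stripped of the workflow pieces, the synchronization here is just an AutoResetEvent signaled from an event handler while the test thread blocks. A minimal standalone sketch of the pattern (the worker thread stands in for the runtime firing WorkflowPersisted; nothing here is workflow-specific):

```csharp
using System;
using System.Threading;

// Standalone sketch of the wait-for-event pattern used in the test.
public class EventWaitSketch
{
    public static bool WaitForSignal()
    {
        AutoResetEvent done = new AutoResetEvent(false);

        // Simulates the WorkflowPersisted handler firing on a runtime thread.
        ThreadPool.QueueUserWorkItem(delegate
        {
            Thread.Sleep(50);   // pretend the workflow is doing work
            done.Set();         // "our instance was persisted"
        });

        // The test thread waits, with a timeout so a broken workflow
        // fails the test instead of hanging the whole run.
        return done.WaitOne(TimeSpan.FromSeconds(60), true);
    }
}
```

The timeout on WaitOne matters: without it, a workflow that never persists would block the test run indefinitely instead of failing the assertion.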

Finally I do one assertion on the service to make sure the workflow called it.

I run my tests through MSTest on the test list and bingo! We go green!

You can download this code from the CodePlex project, changeset 4960.
