Friday, February 29, 2008

Interesting Distinction Between BizTalk 2006 and Workflow

I've been working with Workflow Foundation quite a bit lately and have even had the opportunity to teach my design team at work about its intricacies (which has been a lot of fun as well).

Coming from a BizTalk background also makes it exciting to see how it and WF differ and how they are similar. Something neat about BizTalk is its use of Send Ports and Send Port Groups, which allow multiple services to be informed of an event within an orchestration (the rough equivalent of a sequential workflow in WF). With send ports, I could theoretically have multiple instances of the same type of service with small differences between them, and use correlation to deal with events coming back from multiple data sources.

In WF, however, it isn't quite the same. Everything is interface driven, and the interface is what the WF runtime keys on; so if I try to add multiple services implementing the same interface to the ExternalDataExchangeService, an exception is thrown with the following message:

"An instance of ExternalDataExchangeService of type TestWorkflowApp.IApproverService already exists in the runtime container."

The code that I used to reproduce this is pretty simple. I created another class (although it appears I could just instantiate a second instance of my ApproverService class) with the exact same implementation, just a different name. Then I tried to add both to the ExternalDataExchangeService like so:

dataExchangeService.AddService(approverService);
dataExchangeService.AddService(secondApproverService);

Exception thrown.
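For completeness, here's a minimal sketch of the whole scenario with everything in one place. The IApproverService contract and the two service classes are made up for illustration (the names just mirror the exception message above); the runtime registration pattern is the standard WF one.

using System;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// The shared contract; the runtime registers external services by this interface.
[ExternalDataExchange]
public interface IApproverService
{
    void Approve(Guid instanceId);
}

[Serializable]
public class ApproverService : IApproverService
{
    public void Approve(Guid instanceId) { /* notify the first approver */ }
}

// Identical implementation, different name, as described above.
[Serializable]
public class SecondApproverService : IApproverService
{
    public void Approve(Guid instanceId) { /* notify a second approver */ }
}

public static class Repro
{
    public static void Main()
    {
        WorkflowRuntime workflowRuntime = new WorkflowRuntime();
        ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
        workflowRuntime.AddService(dataExchangeService);

        dataExchangeService.AddService(new ApproverService());

        // Throws: the container allows only one registered service per
        // [ExternalDataExchange] interface.
        dataExchangeService.AddService(new SecondApproverService());
    }
}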

I suppose there should be some major distinctions in capability between a free product that ships with the framework, like Workflow, and a $25k+ product like BizTalk. Certainly BizTalk offers much more than a workflow runtime (far beyond the scope of this blog), but I thought it was important to point out some differences.

Tuesday, February 26, 2008

Nested State Activities in Windows Workflow

By pure accident today I stumbled upon Nested State Activities, and of course my curiosity got the best of me in finding out what exactly these things are, how they work, and who had done anything with them.

A quick Google search for Nested State Activity didn't turn up much, except an article on MSDN stating that they were beyond the scope of that article. Still, it's a good article on state machines; you can find it here.

Searching a little further, I found an article that somewhat covered the topic, although the authors didn't use WWF, but rather their own state machine toolkit. Also a good read, and it can be found here.

However, I still haven't found much on these things in WWF... So I decided to play a little. Why not, right?


First, let's just look at what a nested state looks like in the designer compared to a regular one.


Granted, this is a simplistic example, so let's explore it a little more. The first question I asked was, "What would I ever need this for?" And to tell you the truth, I don't have a definite answer for that.

What I have seen so far is that if a state houses other states (in our example above, stateActivity1 containing stateActivity2 and stateActivity3), you cannot set that state from a SetState activity. For example, from the BuildPublished state I can set the next state to stateActivity2 or stateActivity3, or to itself for that matter; however, I cannot set the state to stateActivity1.

Something interesting, however: stateActivity1 can have event handling scopes associated with it. So it appears those events can be responded to whenever the current state is one of the substates.
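To make the structure concrete, here is a rough sketch of the same shape built in code instead of the designer (the state names are the hypothetical ones from above). Nesting is plain composition, and a SetStateActivity's TargetStateName can point at the leaf states but not at the containing state.

using System.Workflow.Activities;

public static class NestedStateSketch
{
    public static StateMachineWorkflowActivity Build()
    {
        StateMachineWorkflowActivity machine = new StateMachineWorkflowActivity();

        StateActivity stateActivity1 = new StateActivity();
        stateActivity1.Name = "stateActivity1";

        StateActivity stateActivity2 = new StateActivity();
        stateActivity2.Name = "stateActivity2";

        StateActivity stateActivity3 = new StateActivity();
        stateActivity3.Name = "stateActivity3";

        // Nesting: the child states live inside the parent's Activities collection.
        stateActivity1.Activities.Add(stateActivity2);
        stateActivity1.Activities.Add(stateActivity3);
        machine.Activities.Add(stateActivity1);

        // Valid: a leaf state can be a SetState target.
        SetStateActivity setToLeaf = new SetStateActivity();
        setToLeaf.TargetStateName = "stateActivity2";

        // Setting TargetStateName = "stateActivity1" instead would fail
        // validation, since only leaf states are valid destinations.
        return machine;
    }
}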

I haven't found a great use for this yet, but the ability to have substates does add a new level of capability that I never realized was there. An example of its use might be a help desk application, where a resolved issue could have substates such as post-mortem review, develop action plan, and so on.

I may explore this more later if I see a need, but thought it was cool nonetheless.

Thursday, February 21, 2008

Integration Testing Woes with TFS WorkItemStore

Working on YakShaver, I've been worried about how I was going to test without setting up a full Team Foundation environment (which can be a challenge in itself; well, it's mostly just a tedious job). Also, if I make a mistake, I'm looking at quite a bit of time to rebuild my environment.

This was a challenge until I realized that Microsoft has released VPC images of TFS for download: the full environment, all set up and ready to go. Yes, the image expires on April 1, but they will release more images after that. And I'm not that worried about keeping code there, since I'm testing against the TFS API.

Add in the fact that Virtual PC 2007 is now a free download, and we have an instant environment for testing. The machine name stays the same (TFSRTM08), so I can just set up simple constants in my unit tests for connectivity. However, that isn't what this entry is about; I'm just letting people know it's useful.
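For example, the connection details can live in a couple of constants in the test class (a sketch; the URL format and team project name here are my assumptions, not values from the image documentation):

// TFSRTM08 is the fixed machine name of the VPC image; 8080 is the usual
// TFS web services port. The team project name is made up for illustration.
private const string TfsServerUrl = "http://TFSRTM08:8080";
private const string TeamProjectName = "MyTestProject";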

In working with this image, I ran into an issue where I would create an area and want to use it instantly. I know from experience with Team Explorer that if you create a new work item, then add an area or iteration and want to use it in the work item you created previously, you must refresh the work item page so the new Areas and Iterations show up.

However, there seems to be a bit of a delay when using the VPC (at least I think so). The .RefreshCache() and .SyncToCache() methods on the WorkItemStore class seem a bit sketchy; they worked about one time in three in my testing.

I didn't want my implementation to constantly check for an area, because that could get it caught in an infinite loop. So in my testing class I create the area and then wait until workItemStore.Projects[ProjectName].AreaRootNodes[areaName] is available (you have to wrap it in a try/catch, because if you try to get it and it doesn't exist, you will get a ClientDeniedOrNotFoundException).

Here is an example:




bool areaFound = false;
while (!areaFound)
{
    try
    {
        // Give the server a moment, then force the client cache to refresh.
        System.Threading.Thread.Sleep(3000);
        tfsWorkItemStore.RefreshCache();

        // If the area isn't found, the indexer throws and we never reach the next line.
        string tempArea = tfsWorkItemStore.Projects[currentProject.Name].AreaRootNodes[area].Path;
        areaFound = true;
    }
    catch (Exception)
    {
        areaFound = false;
    }
}


I could use the Contains method, but it looks for an instance of a Node and not just a string representation. This just seemed easier at the time to solve my problem without causing huge issues.
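As an aside, one way around the try/catch would be a small helper that scans the root area nodes by name. This is a sketch under the assumption that iterating AreaRootNodes and comparing Node.Name behaves as expected:

using System;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Returns true once the named root area exists, without relying on the
// indexer's exception to signal "not found".
private static bool AreaExists(WorkItemStore store, string projectName, string areaName)
{
    foreach (Node node in store.Projects[projectName].AreaRootNodes)
    {
        if (string.Equals(node.Name, areaName, StringComparison.OrdinalIgnoreCase))
            return true;
    }
    return false;
}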

Friday, February 15, 2008

TDD + VP + DNN

Phil Beadle from DNN has a new post up about adding a layer of abstraction to the UI for better testability. At work we've been working with the same model (although I believe Phil has modified it some) and have found it pretty good for testing the UI and reducing places for failure.

He mentions using Rhino Mocks, which I've been taking a look at recently as well. Pretty cool stuff, but make sure you read up on the distinction between mocks and stubs. This framework can do a lot: it's strongly typed, has natural syntax, and offers pretty much endless possibilities.

I've personally been struggling to choose between this model and MVC (such as Microsoft's new ASP.NET MVC framework that Scott Guthrie and Phil Haack have been blogging about). They use very similar language but are pretty different in the end. I think MVP is a little easier to implement, but MVC is easier to test. Both provide great mechanisms for testing, though.
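For anyone who hasn't seen the pattern, here is a minimal sketch of the kind of UI abstraction being discussed (the names are hypothetical, not Phil's actual code): the view shrinks to an interface so the presenter can be tested against a stub or a Rhino Mocks mock, without spinning up a real page.

// The view contract: the page/user control implements this and stays dumb.
public interface IAnnouncementsView
{
    string Title { get; set; }
}

// The presenter holds the logic we actually want under test.
public class AnnouncementsPresenter
{
    private readonly IAnnouncementsView view;

    public AnnouncementsPresenter(IAnnouncementsView view)
    {
        this.view = view;
    }

    public void Load()
    {
        // Real logic would pull from a data layer; hard-coded for the sketch.
        view.Title = "Announcements";
    }
}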

Monday, February 11, 2008

Cool Automated Testing Tool

Found this tool on a buddy's blog. Looks pretty cool. One of the downfalls I've seen in a lot of web testing tools (including MSTest, HP QTP, etc.) is the lack of good support for Ajax calls. QTP offers the ability to set "delays", but this is more of a hack than a proper Ajax testing solution.

By no means do I think testing Ajax, or for that matter even developing an Ajax testing system, is a small task. Why do you think I'm looking for tools?

http://www.artoftest.com/Products.aspx

That is the link. Hopefully I'll get some time to play with this thing a little later. I still need to finish a post on integration testing with Workflow this evening, so that will take priority. But next weekend is a three-day weekend, so you never know.

Sunday, February 10, 2008

Windows Workflow Integration Testing - Part I



And now for my first "official" post. In this post we are going to take a look at integration testing with a handful of technologies, in particular Windows Workflow Foundation, MSTest, and (a little) Rhino Mocks. In my next post I may get into NCover as well, but I figure that's too much for now.

To start: why do I use integration testing and not unit testing? Recently, both at work and among many professionals and hobbyists, I've seen a lot of people distinguishing the two, which I wholeheartedly agree with. I think a good testing strategy is a combination of both, along with a number of other types of testing.

Unit testing a workflow activity doesn't really make sense; after all, a workflow represents a series of activities executing at specified times or in response to events external to the system. What is powerful, though, is the ability to repeat the tests the workflow will go through in an automated fashion. This post covers how I approached the situation and some of the reasoning behind it.

The workflow we are using is a rather simple sequential workflow, but more complex than the ones we see in a lot of demos and tutorials. We use a Delay activity, a couple of CallExternalMethod activities, Listen, HandleExternalEvent, Policy, and a few loops. Something a little closer to what we would see in the real world, and not related to an order for once. ;) Below is a screenshot of the workflow. Click on it for a larger image.



What does it do?

This is the first workflow from my new project on CodePlex, YakShaver.NET. The purpose of this workflow is to take an item submitted from YaKapture (a screen capture and work item entry user control) and run it through an IAnalysisService, which maps context information such as user and page to work item tracker data such as Component and Release in CodePlex and/or TFS. Let's walk through it from a high-level perspective.

1) A consumer service (the YaKapture screen capture/work item entry user control) submits an instance of IWorkItemDataContract. This in turn is passed into the workflow when it is created.
2) A CallExternalMethodActivity is used to call IAnalysisService.AnalyzeSubmittedWorkItem, which looks at the data contract and returns an instance of YakShaverWorkItem. I'm not sure whether I want to do this or just use the same data contract throughout the entire workflow. For now, it's separate.
3) From the results of this service, we run a policy over the data to determine if the item needs intervention, meaning someone must come in and assign one or more of the required fields for a CodePlex issue: Component, Release, Work Item Type, Impact Type, Title, and Description. The goal is to infer as much data as possible from information quietly passed along by YaKapture with the user-submitted data. For example, anything with the path /qabuild1_2_07 is known to be associated with a release called QA Build 1.2.07.
4) If intervention is needed, we call out to a notification service (INotificationService), which can implement any method it wants to notify someone to intervene on the item so it can go into the work item tracker (or decide it needs to go to the help desk instead?).
5) A Listen tree is implemented, with a Delay activity on one side (you can pass the delay between notifications through the parameters supplied at CreateWorkflow) and a HandleExternalEventActivity on the other. At this point the workflow will keep notifying the service for eternity until the item is handled. A more realistic approach in the future may be to set up some sort of approval tree in the notification service, but that's unnecessary for now.
6) When the external event is handled, the workflow sets the intervened flag to true, thus ending the inner while loop. This takes us back to the top of the workflow, where we re-run our analysis service and policy, determining again whether we need intervention.
7) When everything is finally taken care of, we call IWorkItemProviderService.CreateWorkItem, which creates the work item in the issue tracker. At this point the outer while loop knows to break, and the workflow completes as normal. (The service contracts involved are sketched just below.)
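Here's a sketch of what those service contracts might look like. The names come from the walkthrough above, but the member signatures are my guesses for illustration, not the actual YakShaver.NET definitions. The key WF requirement is the [ExternalDataExchange] attribute, which is what lets CallExternalMethodActivity and HandleExternalEventActivity bind to an interface through the ExternalDataExchangeService.

using System;
using System.Workflow.Activities;

[ExternalDataExchange]
public interface IAnalysisService
{
    // Step 2: maps the submitted contract to a work item.
    YakShaverWorkItem AnalyzeSubmittedWorkItem(IWorkItemDataContract contract);
}

[ExternalDataExchange]
public interface INotificationService
{
    // Step 4: asks someone to intervene on the item.
    void NotifyForIntervention(YakShaverWorkItem workItem);

    // Steps 5/6: raised by the host when a user has intervened. The event args
    // must derive from ExternalDataEventArgs so the runtime can route the
    // event to the right workflow instance.
    event EventHandler<ExternalDataEventArgs> WorkItemIntervened;
}

[ExternalDataExchange]
public interface IWorkItemProviderService
{
    // Step 7: pushes the finished item into the tracker.
    void CreateWorkItem(YakShaverWorkItem workItem);
}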

We could incorporate this workflow into a larger workflow, or perhaps into a state machine; that is one of the beauties of WWF. At least I think so.

Now that we know what the workflow does, how do we test it?

There are three integration tests that I have set up:

1) Test that the notification service is called and the data contract's data is read.
2) Test that when notified, a user can resubmit the data contract with updated information, and that when this happens, the work item creation process is called and the workflow completes.
3) Test that when a notified user does not respond within a certain time, notification is sent again, maintaining a count of the number of times the service has been called.

WWF is surprisingly easy to set up for integration testing within any testing framework, whether that is NUnit, MbUnit, or MSTest. I am using MSTest in this example, but an NUnit conversion would be minimal effort.

Let's look at the first test: seeing whether the notification service is called and the data contract is read. A Delay activity keeps the workflow from going into hyperdrive and repeating the loop at a ridiculous rate. Plus, we are just attempting to test the workflow; we want to keep as many things constant as possible in order to reduce the number of failure points in the test (especially important as we regression test).

I've divided my services into pretty granular pieces, each having one method and maybe one event (not all interfaces had events, at least at the time of this writing). This provides a couple of advantages:

1) By being granular, I can narrow the points of failure in my process. This also lets me toy with code both in and out of process and more easily measure code coverage (at least I think so).
2) I can independently mock each service to test how each one performs under conditions independent of other factors (i.e., other services), including the workflow process itself.

There are a few things to consider when testing workflow (I love lists):

1) It requires the WorkflowRuntime to be instantiated. Surprisingly, I found you can host multiple runtimes in a process (at least you can start multiple ones; something to explore in a later post...). This part is as simple as creating an instance and starting the runtime.
2) A lot of people (and I was certainly one of them) think that communication is as simple as calling certain .NET classes. It is close to that, but there is a communication layer within Workflow that has to be considered. The simplest way to communicate with the hosting application (i.e. the testing host) is using the ExternalEvent and ExternalMethod activities. However, this requires having an interface defined for communication, plus implementing a service and those activities; a bit much for testing if you then have to rip it out for production. Say hello to tracking services!
3) Versioning: this can cause oddities if you test a workflow, modify it, and run the new version again without changing the version number. I find it easiest to simply delete all relevant data in the tracking and persistence databases, or to redeploy the database script on each test run (effective, but it can get out of control if you don't manage it).
4) WorkflowMonitor: this comes with the SDK and is probably one of your best visual test coverage tools; it puts little check marks over every activity that gets called. For fun, you could extend the application to count the number of times each activity was called (useful inside loops).

With that out of the way, let's get started.

First, I need to set up the workflow runtime. Using a unit test template from MSTest, I uncomment the [ClassInitialize] method. Because I am using events and methods to communicate, I also need to add a data exchange service.


[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
    sqlTrackingConnectionString = "Data Source=.\\SQLExpress;Initial Catalog=WFTracking;integrated security=true;";
    sqlPersistenceConnectionString = "Data Source=.\\SQLExpress;Initial Catalog=WFPersistence;integrated security=true;";

    // recreate the tracking db

    // recreate the persistence db

    // initialize the runtime and register the data exchange service
    workflowRuntime = new WorkflowRuntime();
    dataExchangeService = new ExternalDataExchangeService();
    workflowRuntime.AddService(dataExchangeService);

    SqlTrackingService sqlTrackingService = new SqlTrackingService(sqlTrackingConnectionString);
    SqlWorkflowPersistenceService sqlPersistenceService = new SqlWorkflowPersistenceService(sqlPersistenceConnectionString);

    workflowRuntime.AddService(sqlTrackingService);
    workflowRuntime.AddService(sqlPersistenceService);

    workflowRuntime.StartRuntime();
}


In the example above I've added a tracking service and a persistence service, as well as the data exchange service. These are standard services that ship with WF, and they work perfectly for our testing situation.

This is only fired when the test run starts. We can remove all these services and shut down the workflow runtime in another method marked [ClassCleanup]. Shutting down cleanly can be somewhat important because it will persist your workflows, which is a nice thing when you're trying to debug what exactly is happening.
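A minimal sketch of what that cleanup might look like, assuming the same fields as the initialization method above:

[ClassCleanup()]
public static void MyClassCleanup()
{
    // StopRuntime unloads (and therefore persists) any idle instances before
    // shutting down, which is what makes post-run debugging against the
    // persistence store possible.
    workflowRuntime.StopRuntime();
    workflowRuntime.Dispose();
}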

Finally a test!



   1:  [TestMethod, TestProperty("Category", @"Workflow\Integration\NotificationServiceTests")]
   2:  public void TestNotifyForIntervention()
   3:  {
   4:      INotificationDummyService notificationService = new INotificationDummyService();
   5:      IAnalysisDummyService analysisService = new IAnalysisDummyService();
   6:      AutoResetEvent wfIdledEvent = new AutoResetEvent(false);
   7:      Dictionary<string, object> namedParameters = new Dictionary<string, object>();
   8:      IWorkItemDataContract dataContract = new WorkItemDataContractDummyObject();
   9:
  10:      DataExchangeService.AddService(notificationService);
  11:      DataExchangeService.AddService(analysisService);
  12:
  13:      namedParameters.Add("WorkItemDataContract", dataContract);
  14:
  15:      WorkflowInstance workflowInstance = WorkflowEngine.CreateWorkflow(typeof(WorkItemEntryWorkflow), namedParameters);
  16:
  17:      // attach events if I want.
  18:
  19:      WorkflowEngine.WorkflowPersisted += delegate(object sender, WorkflowEventArgs args)
  20:      {
  21:          if (args.WorkflowInstance.InstanceId == workflowInstance.InstanceId)
  22:              wfIdledEvent.Set();
  23:      };
  24:
  25:      workflowInstance.Start();
  26:
  27:      wfIdledEvent.WaitOne(new TimeSpan(0, 0, 60), true);
  28:      // grab the latest from the persistence store; if it isn't found, an exception is thrown
  29:      workflowInstance = WorkflowEngine.GetWorkflow(workflowInstance.InstanceId);
  30:
  31:      Assert.IsTrue(notificationService.NotifyForInterventionMethodCallCount > 0, "Notification Service did not return that it sent notification");
  32:  }

Let's start by looking at a few key lines. First, lines 4 and 5: we declare two services following a DummyService naming convention. These are pretty much exactly what they sound like, stubbed implementations of a number of interfaces that could exist. We use dummy services in this case to, again, reduce the points of failure in our testing scenario. We'll have a chance later on to plug in "real" services to see how they respond when integrated into the workflow.

If we look at these implementations, we see they are not much different from something we could do with a mocking framework. However, first, I didn't have enough experience with mocking frameworks, and second, I was having trouble because of the need for the SerializableAttribute. Not as pretty, but it accomplishes the same thing: instead of verifying mock expectations, we assert that a method was called (proof the workflow is doing what we expect it to).
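For reference, here is a sketch of what a dummy service might look like. It matches the interface guess from earlier in this post, so the member names are illustrative rather than the actual changeset code:

using System;
using System.Workflow.Activities;

// [Serializable] is the attribute requirement that made mocking awkward here.
[Serializable]
public class INotificationDummyService : INotificationService
{
    private int notifyForInterventionMethodCallCount;

    // The test asserts against this counter instead of verifying a mock.
    public int NotifyForInterventionMethodCallCount
    {
        get { return notifyForInterventionMethodCallCount; }
    }

    public void NotifyForIntervention(YakShaverWorkItem workItem)
    {
        // No real notification; just record that the workflow got here.
        notifyForInterventionMethodCallCount++;
    }

    // The resubmission test (test 2) would raise this to simulate a user
    // intervening; it stays quiet for the notification test.
    public event EventHandler<ExternalDataEventArgs> WorkItemIntervened;
}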

Line 8 instantiates a dummy instance of IWorkItemDataContract, which is used in multiple places in our application. I shouldn't really say dummy object in this case, because it does have some intelligence. By that I mean that depending on how many times a method is called, it responds with different output to aid the workflow in later tests. I realize I could keep this behavior in yet another dummy class, but I just don't see the need right now.
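To illustrate the idea (purely hypothetical, since the real IWorkItemDataContract members aren't shown here), a "semi-smart" dummy might look something like this, with a single Title property standing in for the real fields:

[Serializable]
public class WorkItemDataContractDummyObject : IWorkItemDataContract
{
    private int titleReadCount;

    public string Title
    {
        get
        {
            titleReadCount++;
            // First pass: a missing title forces the intervention branch;
            // later passes: a real title lets the policy pass and the
            // workflow complete.
            return titleReadCount == 1 ? null : "Login page throws on submit";
        }
    }
}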

On line 13 we add the semi-smart data contract to the parameter collection that we are going to pass in at the start of the workflow. One of the important things to remember about WWF is that there are specific ways to communicate with the instances in the runtime. I won't go into detail in this post, but passing a Dictionary&lt;string, object&gt; to CreateWorkflow is an easy way to set any public property on the workflow class, keyed by property name. We need to do this because our workflow demands the data almost immediately for the analysis service.

Some may notice I'm using an anonymous method on the WorkflowPersisted event. This checks whether the workflow that was persisted is the one we created, and if so, triggers an AutoResetEvent. Why persisted? It certainly isn't necessary, and it isn't even possible without adding a persistence service to the runtime; Idled would work just as well. But I want to make sure that I can get the workflow back from the persistence store, which I do on line 29.

Finally, I do one assertion on the service to make sure the workflow called it.

I run my tests through MSTest from the test list and bingo! We go green!

You can download this code from the latest changeset (4960) on the CodePlex project.

Thursday, February 7, 2008

First Post

I've always wanted a blog. I've read so many over the years that have helped me as a developer that I figured I might as well try to participate too. I realize it will be a long time until anyone (if ever) reads this. It's nice to think they would, but if not, hey, it's for my own sanity.

The basics: I work for a large company in Chicago that shall remain nameless; good people. MASSIVE development team, much larger than anything I have ever worked with, and we are masters of documentation. Many of us joke about how much of a waterfall approach it is, even though we try to implement little agile practices such as continuous integration, unit testing, integration testing, and, recently, mock objects. However, releases are many months apart, and most of the time in a release is spent documenting and having meetings. Given the nature of our business, I see that as important, and we are trying to find a good approach to giving the old guard what they want while adopting more agile methods to give users what they are demanding.

We suffer from the same things many others do: a system that was converted over from a good legacy system, which would have been just fine if they had gone ASP -> ASP.NET. But alas, no. Some crazy architect (also to remain nameless) wanted to make it all enterprisey by creating his vision of a transcendent, super-pluggable framework. You know, the software that is perfect for any application and can scale, yada yada. That sales pitch you almost dread? Well, this is the end result of that sales pitch. I will say that I applaud the creator's effort, as I think it is academically brilliant, and I'm impressed by how he put it together. But it creates unnecessary overhead, and his definition of loose coupling was "everything is a dataset". ;)

I'm not complaining; I just find it comical. I also find it challenging, although in a way I didn't expect. The challenge isn't delivering code, it is making sure you understand how data is transformed across the many layers that exist in the infrastructure. Plus, it's a massive amount of data to analyze, which can be exciting (at least I think so). And it pays great and is strict about 40-hour weeks, so I can go home and spend time with my wife.

This blog isn't about that, though, although I may relate some work experiences. No, this is more about the open source work that I've recently jumped into. I've actually been using open source software for quite some time (I've used DotNetNuke extensively for the past 4 years) and have built some pretty cool stuff with it.

I've joined the DNN Announcements module development team, and I am very excited to get to participate with DNN at such a level. We've got some good people on the team right now, and it's actually a pretty good size for an open source module. Plus, we have people from the US, the Netherlands, and I believe Spain. I probably have that wrong, but I know it's a pretty spread-out team.

I've also created a couple of projects on CodePlex that I work on in my spare time. One shares the name of this blog: YakShaver.NET, an exploration of integration testing, unit testing, Windows Workflow, the TFS API, the CodePlex API, the DNN Forge API, a number of help desk applications, a knowledge base, and more. Basically, I want to look at software builds through the life of a work item from start to finish (when does something need to become a work item? When is it a support item? How do we keep people involved?). I'm really hoping to use this in combination with the DNN development; the project wiki goes over the goals of the project. The other is DeliveryBoy, which I will comment on as well, but probably not for a while. I started that project looking for something to do, but then I was reading a book and wanted to do some yak shaving (building tools for tools: possibly a useful task, but not always necessary). I'm hoping people find it necessary.

That's it for now. I just finished my first integration tests for Windows Workflow and would post the results now if I weren't so tired and didn't have to go to work early tomorrow. This weekend, though.