Thursday, September 25, 2008

Deploying Workflow Foundation Part 1.5 - Thread Safety and ExternalDataExchange

*Sigh* another late night debugging a CI build that has been failing all day. And just as I fix the portion I know is wrong, I find that another delivery, unrelated to anything I do, broke another part of the build. And it's going to take some digging to find the problem.

So, before I get into that beast, I wanted to at least share a note on Thread Safety and using ExternalDataExchange interfaces.

The first important thing to note is that ExternalDataExchange services live as singletons in the workflow runtime. However, unlike our powerful new toy WCF, you are not guaranteed a singleton with proper thread blocking around it.

What does that mean?

If you haven't heard of free threading, that's pretty much what this is. Think about when you add an ExternalDataExchange service to the workflow runtime. When you do it in code, it looks something like this:


using (WorkflowRuntime runtime = new WorkflowRuntime("WorkflowRuntimeConfigurationPT"))
{
    ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
    MyWorkflowService service1 = new MyWorkflowService();

    // Register the data exchange service with the runtime, then register
    // our service instance with the data exchange service.
    runtime.AddService(dataExchangeService);
    dataExchangeService.AddService(service1);
}



The important thing to note above is the construction of our workflow service, which we will use to perform whatever function we wish. From our previous example, it could be updating the database with information from the workflow state about an order, such as adding or removing an item.

We construct an instance of that service and let the runtime and data exchange service take it over. But keep in mind, we constructed the instance ourselves (instead of referencing a type, in which case the runtime would create instances as needed).

Oftentimes we find ourselves being a bit lazy and not thinking about what exactly is going to be accessing instances of our classes, especially when we are not necessarily thinking about multi-threaded programming (I think WCF spoiled this for us). After all, the application that I wrote doesn't use the ThreadPool or anything else touching System.Threading.

But Workflow does.

What can happen is that two workflows may reach a state in which they call the same method on the same interface. Because you can have only one instance of any interface loaded into a single runtime, the threads from each workflow instance can "cross paths". When debugging, some may notice this as "code jumping": you're stepping through with F10, and all of a sudden you're three lines before where you were, not having paid attention to your Threads window. ;)

This is the same thing that can happen in any free-threaded application. Always remember the basics of computers: a processor core can only process a single instruction at a time, and a line of code does not necessarily equal a single instruction (nowhere close ;)). So the thread scheduler deep inside the OS will swap out different sequences of instructions from different threads at different times, not necessarily completing a logical set of instructions as we humans see it.

Sometimes this doesn't matter. However, in many instances you may be using a class-level (instance or static) field, in which case, if both threads are accessing the method at the same time, thread A sets the field to 1 and thread B sets it to 2. When thread A gets scheduled for its next instruction, the values it originally had may not be there, leading to "weird" or "random" results and/or exceptions. In the worst case, threads A and B create a race condition where neither can exit a loop because of what the other one is doing.

I will not go into a huge speech on the importance of thread safety and how to achieve it; there are much better writeups out there than what I will give you. The key to take from this post: when developing your workflow services, always think, "two workflows may access this code at the exact same time; what have I done to protect myself?" The easiest thing to do is use the lock statement. This will save you hours of headache later.
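As a minimal sketch (the method and field names here are hypothetical, not from a real project), a guarded service method might look like this:

public class MyWorkflowService
{
    // The runtime holds a single instance of this service, so every
    // workflow instance funnels through these members.
    private readonly object syncRoot = new object();
    private int itemCount;

    // Target of a CallExternalMethodActivity; two workflows may call
    // this at the exact same time on different threads.
    public void AddItemToOrder(Guid instanceId, int itemId)
    {
        lock (syncRoot)
        {
            // Only one thread at a time can touch the shared state here.
            itemCount++;
        }
    }
}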

HTH

Saturday, August 30, 2008

Deploying Workflow Foundation Part 1 - Introduction and Solution Setup



It's been a long time since I've gotten a chance to write. I certainly had a number of topics to discuss, just no time to sit down and put them into words. Well, words that made sense.


I've decided to write about a topic that I thought needed some attention. I've seen a number of posts from people struggling with deploying and upgrading Workflow Foundation, and I've dealt with the same issue in my professional life. Before that though...

Over the past 8 weeks, my life has taken considerable changes, both personally and professionally. No, I didn't leave my job; however, if you remember from previous posts, I work for a large financial institution, and my first few months were really nothing but documentation. In my opinion, not necessarily the most useful documentation, but I digress. That all changed about 12 weeks ago when I became the onshore lead for an offshore project at work.


Back to that in a moment...


I wouldn't normally share something so personal on a technical blog, but for this, a small snippet is acceptable. Just over a year ago my wife and I lost our son Ryan at birth (he was 38 weeks). Of course, a tragic time and difficult for both of us. However, in April we found out my wife was pregnant again. We were incredibly excited, and as we made it past the first trimester we were anxiously awaiting the day to find out the gender. Of course, like any parents, we started shopping for names. We quickly selected a girl's name without any question (just as we had selected Ryan's), but we were stuck between two boys' names. Well, that day came, and it looks like there was a reason we couldn't decide... We need 'em both. ;)




Back to our regularly scheduled programming...




A little background on this project: we had 4 workflows, each very similar, soliciting responses from various stakeholders in an operation and modifying data accordingly. Nothing too crazy. Now, I obviously cannot share those workflows, so I've created a beginning-to-end (plus iterations) example of using Windows Workflow in *any* environment, which I will be sharing over the next few posts.


OK, let's take a process we all know: an online shopping cart system. To start off, I've created the simplest cart system that I can. It contains no data other than an ID (and that ID will be provided by the workflow engine). Below is what my state machine looks like.






The workflow has 3 states: ShoppingCartState, OrderPlacedState, and OrderShippedState. The workflow responds to events from ExternalDataExchange-decorated interfaces.
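To give a feel for what those look like (the interface and event names below are illustrative guesses, not the actual project code), an event contract for this cart might be:

[ExternalDataExchange]
public interface IOrderService
{
    // The state machine subscribes to these with HandleExternalEventActivity;
    // raising one drives the transition between the states above.
    event EventHandler<OrderEventArgs> OrderPlaced;
    event EventHandler<OrderEventArgs> OrderShipped;
}

[Serializable]
public class OrderEventArgs : ExternalDataEventArgs
{
    public OrderEventArgs(Guid instanceId) : base(instanceId) { }
}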



But before we get into all that, I want to talk about the solution layout and a very important aspect of workflow that is often overlooked. That, my friends, is versioning, and how to properly version workflows.

Microsoft's Provided Workflow Templates for VS2005 & VS2008

I'm a huge Microsoft fan, so this isn't a complaint, just a clarification. The workflow project templates that are included with the Workflow Extensions for VS2005 and VS2008 are not suited to almost *any* production environment, other than one where either:

  • The workflow never changes
  • If upgraded, all existing data is either destroyed or migrated (almost always painfully).

This doesn't sound like a very good situation in an application large or small. Truthfully, I've found that following the BizTalk solution layout is the best way to easily change and upgrade workflows. And what exactly is that?

  • Interfaces Project - Version # should never change (next post I will cover how it can). This contains the interfaces the workflow uses to communicate with external services via the ExternalDataExchangeService.
    • NOTE: Your interfaces project should also include your enums and inherited ExternalDataEventArgs classes. If you don't, you will need to verify (and possibly modify) every assembly that references them each time their containing project's version changes.
  • Workflow Project - Version # should change on nearly every deployment.
  • Services Implementation Project - Version # should change on an as-needed basis.
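In AssemblyInfo.cs terms, that policy might look like the following sketch (three separate files, one per project; the version numbers are purely illustrative):

// Interfaces project AssemblyInfo.cs - pinned, effectively forever:
[assembly: AssemblyVersion("1.0.0.0")]

// Workflow project AssemblyInfo.cs - bumped on (nearly) every deployment:
[assembly: AssemblyVersion("1.4.0.0")]

// Services implementation AssemblyInfo.cs - bumped only as needed:
[assembly: AssemblyVersion("1.2.0.0")]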

Or, a picture is worth 1000 words.



I have three projects of interest represented here. Now, I have actually broken one of my own rules by combining the services implementation with my workflow project. Though it isn't necessarily a best practice, the version # on the services is not nearly as important as the interface version # for this example. So we can cheat a little here.

Also notice I've added a strong name key to my workflow project as well as the interfaces project. I have to add it to the interfaces project because it is referenced by the workflow project. The workflow project is strong-named because I'm going to install it into the GAC. I could install the interface library in the GAC as well.
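For reference (the file names are placeholders), the signing and install steps from a Visual Studio command prompt are roughly:

rem Generate a key pair to strong-name the projects with:
sn -k WorkflowKey.snk

rem After building, install the workflow assembly into the GAC:
gacutil /i ShoppingCartWorkflows.dll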

Why should the interface version never change?

This seems to be the biggest "gotcha" when developing workflow-enabled applications. It isn't a bug, but something very much meant by design. Under the hood, WF has its own runtime that handles events through its own messaging infrastructure. If you look deep enough, you'll find the eventing model uses internal workflow queues (see WorkflowQueuingService) to deliver an event message to a specified workflow; conceptually it behaves a lot like MSMQ, though it isn't actually MSMQ. This way, the runtime handles the message delivery of an event without us having to worry about whether the workflow is in the proper state to receive it. This post will not go into the details of guaranteed asynchronous message delivery, but it's a fun topic if you have the time.


Why install the Workflow DLL in the GAC?

Simple: in most cases (even something like this shopping cart) we will have long-running transactions. The process of getting the order into our warehouse may be fast; however, if we are shipping a physical item there may be a few days between placing the order and marking it shipped. Workflow offers persistence services for keeping these transactions alive through reboots, shutdowns, and whatever else may happen.

Let's say we add a new step to the process, such as an AtVendorState where we ship part of our order from a 3rd-party vendor. When we deploy this, we still want our existing workflows to finish running the way they started, so that we don't have to come up with some sort of upgrade program or a completely separate deployment of the services and workflows.

In order to allow both the old workflows and the new workflows to run in the same runtime with independent versions, we must install both workflow assemblies in the GAC and allow the runtime to determine which one it wants at any particular time.
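As a sketch of what that looks like from the hosting side (the type name, assembly name, and public key token are hypothetical placeholders): persisted instances reload against the exact version they were started under, while new instances can be bound to the new version explicitly.

// In-flight instances: the persistence store recorded the fully qualified
// type name, so the runtime resolves the old version from the GAC.
WorkflowInstance oldOrder = runtime.GetWorkflow(persistedInstanceId);

// New instances: bind to the new assembly version explicitly.
Type newVersion = Type.GetType(
    "ShoppingCart.Workflows.OrderWorkflow, ShoppingCart.Workflows, " +
    "Version=1.1.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef");
WorkflowInstance newOrder = runtime.CreateWorkflow(newVersion);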



We all admit the workflows have to change. Not only that, but long-running processes subscribe to one version of the workflow and are intended to finish that way. After all, the runtime cannot determine how to properly transition version 1.0 of your workflow to version 1.1.


But you know what doesn't change? Interfaces. Specifically, I'm talking about the interfaces that are used to raise events into the workflow instance, and for the workflow instance to contact external systems (I know you can use WCF with 3.5, but we are only allowed to use up to 3.0 at work, and I'm thankful for that). This was where I found the most "problems" when deploying workflow.


However, it wasn't that the deployment was hard; it was realizing that the exceptions we were getting, even though we had changed nothing but a version number, were caused by strict versioning rules that exist by design, and not by problems in the code we delivered.


Workflow is a lot like BizTalk in that it is *very* version dependent. However, BizTalk provides some niceties for binding between versions of send ports, receive ports, and orchestrations. If you want that in workflow, you are building it yourself.

Look forward to part 2, where I talk more about ExternalDataExchange services and thread safety. This is a huge topic and very easy to gloss over, but it will cause you more and more problems as the # of workflows (especially long-running ones) continues to grow.





Tuesday, May 20, 2008

YakShaver.NET = UCM+ for TFS

I've been quiet recently, mainly because I've been flipping my house. No, not to sell, but because I want to enjoy it that way. We moved in just about 2 years ago and never really touched a thing. After August I had little motivation, until recently, when I had more than I could imagine (perhaps quitting smoking helped).

That isn't what this entry is about, however (I'll post pics at a later time). Well, sort of. My reason for taking the break was both the house and the fact that I wasn't exactly sure what I wanted out of YakShaver.

I should rephrase that: I knew what I wanted, but I didn't really have a method to my madness. It was going in a number of different directions with no solid foundation to any of it. And for that matter, what was my starting point?

At work, we use Rational ClearCase UCM (Unified Change Management). This is slightly different from Base ClearCase (which is really just a glorified version of Subversion, from what some have told me) in the sense that it provides an overlay on the SCM tool. Unfortunately, our ClearCase admins at work do not necessarily explain all the intricacies of UCM and how its terminology maps to more commonly known terms (such as Stream == Branch, View == Workspace, Deliver == Copy + Merge).

Because there is very little explanation, most people seem to have a bad taste in their mouths about CC UCM, when in fact it's rather brilliant. ClearCase is still slow and kludgy, and I'm more of a fan of TFS with its direct work item integration (rather than Rational's other products, such as RequisitePro and ClearQuest, which integrate into CC+UCM).

Brilliant? An IBM product? Isn't this blasphemy from a .NET programmer and MS lover? C'mon, we're all coders, whether that is Java or .NET, Ruby or Python. IBM has good ideas, MS has good ideas, Avanade comes up with good ideas and gives them to MS; the cycle goes on and on. But I digress.

So what exactly is the 50,000-foot view of UCM? Let's look at what I consider to be the fundamental part, which is UCM's branching strategy. Again, UCM isn't anything special other than the fact that it does some tasks for you. So when you create a Stream, you're creating a branch that is isolated for the developer; they can check in their work without affecting anyone else's code.

We are all *supposed* to follow a branching strategy, but many times we get lazy and forget. UCM kind of prevents this by creating your private development stream, and those changes get merged into an Integration Stream (aka trunk, though I should be careful: it doesn't necessarily mean trunk, especially if you have feature integration streams).

When developers have completed their activities (another term in the UCM world, aka work items) and all works well, they deliver this to the integration stream, where all developers deliver work and do merging between elements to get a finalized product.

The nice thing about Rational is they have a decent merge/diff tool that can handle most merging on its own. TFS offers the same feature, though I haven't truly seen how good its merging capabilities are. My previous experience on TFS was in a 3-person team that rarely ever collided. My current environment has 75+ developers, all doing their own thing and colliding quite often; UCM helps filter some of that.

Now that I've said all that, to the point of this post. As I started to understand UCM more and more (especially during a recent CC upgrade), I realized this was exactly what I wanted out of YakShaver.NET: a UCM-like tool (one that provides an automatic branching strategy and merging while taking advantage of other TFS tools), but with a bit more. This is where the idea of "Sandboxing" comes into play.

If I could take an integration stream/branch and deploy that version to a sandbox environment (isolated, much like my development environment is), then I could accomplish the goal of working with Testers/QA/Clients in an isolated environment while keeping congruency between all of it using a friendly application. This is where the UCM+ comes in (UCM plus a little more).

I'm not sure if I can use this name or not (I'll have to find out), but the goal is to turn YakShaver.NET into a UCM-type front end for TFS.

I'm glad I can finally put that in a single sentence. Now that I know what I want to build, time to actually do it. ;)

Thursday, April 3, 2008

Another Awesome Tool for Remote Development

I happened to catch this little gem while inside Visual Studio at work. I've been reading more by Scott Hanselman recently, and he had a great link to this wicked little tool called Microsoft SharedView.

http://connect.microsoft.com/site/sitehome.aspx?SiteID=94

Wow... Be anywhere in the world and get a code review. Sick...

Wednesday, March 5, 2008

CC.NET + MS Test 2008 + New Template

In a bit of yak shaving today, I found the need to modify the MSTestSummary.xsl file for CC.NET so that it works properly with the output from mstest.exe 2008. I haven't checked whether 2005 uses the same format or not.

Anyway, I hope some find this useful. You can download it from my CodePlex site here. I'll see if ThoughtWorks wants to put it up on their site.

Friday, February 29, 2008

Interesting Distinction Between BizTalk 2006 and Workflow

I've been working with Workflow Foundation quite a bit lately and have even had the opportunity to teach my design team at work about its intricacies (which has been a lot of fun as well).

Coming from a BizTalk background also makes it exciting to see how it and WF differ and how they are similar. Something neat about BizTalk is its use of send ports and send port groups, which allow multiple services to be informed of an event within an orchestration (the equivalent of a sequential workflow in WF). With a send port group, I could theoretically have multiple instances of the same type of service, with small differences between them, and use correlation to deal with events coming back from multiple data sources.

However, in WF it isn't quite the same. Everything is interface driven, and the interface type is what the WF runtime keys on, so if I try to add multiple services of the same interface to the ExternalDataExchangeService, an exception is thrown with the following message:

"An instance of ExternalDataExchangeService of type TestWorkflowApp.IApproverService already exists in the runtime container."

The code that I used to do this is pretty simple. I created another class (although it would appear I could just instantiate another instance of my ApproverService class) with the exact same implementation (just a different name), then tried to add both to the data exchange service like so:

dataExchangeService.AddService(approverService);
dataExchangeService.AddService(secondApproverService);

Exception thrown.
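For context, here is a fuller sketch of that setup (ApproverService and IApproverService are from my test app; SecondApproverService is the duplicate implementation I described, named hypothetically):

ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
workflowRuntime.AddService(dataExchangeService);

ApproverService approverService = new ApproverService();
SecondApproverService secondApproverService = new SecondApproverService();

dataExchangeService.AddService(approverService);       // succeeds
dataExchangeService.AddService(secondApproverService); // throws: both classes
// implement IApproverService, and only one service per [ExternalDataExchange]
// interface may be registered in a single runtime.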

I suppose there should be some major distinctions in capability between a free product that is part of the framework, like Workflow, and a $25k+ product like BizTalk. Certainly BizTalk offers much more than a workflow runtime (far beyond the scope of this blog), but I thought it was important to point out some differences.

Tuesday, February 26, 2008

Nested State Activities in Windows Workflow

By pure accident today I stumbled upon nested state activities, and of course my curiosity got the best of me in finding out what exactly these things are, how they work, and who had done anything with them.

A quick Google search for "nested state activity" didn't turn up much, except an article on MSDN stating that they were beyond the scope of that article. Still, it's a good article on state machines; you can find it here.

Searching a little further, I found an article that somewhat covered it, although they didn't use WWF, instead using their own state machine toolkit. Also a good read, and it can be found here.

However, I still hadn't found much on these things in WWF... So I decided to play a little. Why not, right?


First, let's just look at what a nested state looks like compared to the others in the designer.


Granted, this is a simplistic example at first, so let's explore it a little more. The first question that I asked was, "What would I ever need this for?" And to tell you the truth, I don't have a sure answer for that.

What I have seen so far is that if a state houses other states (in our example above, stateActivity1 containing stateActivity2 and stateActivity3), you cannot set that state from another SetState activity. For example, from the BuildPublished state I can set the next state to stateActivity2, stateActivity3, or even BuildPublished itself; however, I cannot set the state to stateActivity1.

Something interesting, however: stateActivity1 can have event handling scopes associated with it. So it appears the event can be handled while the current state is one of the substates.

I haven't found a great use for this yet; however, the ability to have substates does add a new level of capability that I never realized. I guess an example of its use may be a help desk application, where a resolved issue may have substates such as post-mortem review, develop action plan, and so on.

I may explore this more later if I see a need, but thought it was cool nonetheless.

Thursday, February 21, 2008

Integration Testing Woes with TFS WorkItemStore

Working with YakShaver, I've been worried about how I was going to test without setting up a full Team Foundation environment (which is sometimes a challenge in itself; well, it's just a tedious job). Also, if I make a mistake, I'm looking at quite a bit of time to rebuild my environment.

This was a challenge until I realized that Microsoft has released VPC images of TFS for download with the full environment all set up and ready to go. Yes, the image expires on April 1, but they will release more images after that. And I'm not as worried about keeping code there, as I'm testing against the TFS API.

Add in the fact that Virtual PC 2007 is now a free download, and we have an instant environment for testing. The machine name stays the same (TFSRTM08), so I can just set up simple constants in my unit tests for connectivity. However, this isn't what this entry is about; just letting people know it's useful.

In working with this image I ran into an issue where I would create an area and want to instantly use it. I know from experience with Team Explorer that if you create a new work item, then add an area or iteration and want to use it in the work item you created previously, you must refresh the work item page so the new areas and iterations show up.

However, there seems to be a bit of a delay when using the VPC (at least I think so). The .RefreshCache() and .SyncToCache() methods on the WorkItemStore class seem a bit sketchy; they worked about one out of three times in my testing.

I didn't want my implementation to constantly check for an area, because that could get caught in an infinite loop. So in my testing class I create the area and then wait until workItemStore.Projects[ProjectName].AreaRootNodes[areaName] is available (you have to wrap it in a try/catch, because if you try to get it and it doesn't exist, you will get a ClientDeniedOrNotFoundException).

Here is an example:




bool areaFound = false;
while (!areaFound)
{
    try
    {
        // Give the server a moment, then force the client cache to refresh.
        System.Threading.Thread.Sleep(3000);
        tfsWorkItemStore.RefreshCache();
        string tempArea = tfsWorkItemStore.Projects[currentProject.Name].AreaRootNodes[area].Path;
        areaFound = true; // if the area isn't found, an exception is thrown and we never get here
    }
    catch (Exception)
    {
        areaFound = false;
    }
}


I could use the Contains method, but it looks for an instance of a Node and not just a string representation. This just seemed like the easier way at the time to solve my problem without causing huge issues.

Friday, February 15, 2008

TDD + MVP + DNN

Phil Beadle from DNN has a new post up about adding a layer of abstraction in the UI for better testability. At work we've been working with the same model (although I believe Phil has modified it some) and have found it pretty good for testing the UI and reducing places for failure.

He mentions using Rhino Mocks, which I've been taking a look at recently as well. Pretty cool stuff, but make sure you read up on identifying mocks vs. stubs. This framework can do a lot of things: it's strongly typed, has a natural syntax, and offers pretty much endless possibilities.

I've personally been struggling between this model and MVC (such as Microsoft's new ASP.NET MVC framework that Scott Guthrie and Phil Haack have been blogging about). They use very similar language but are pretty different in the end. I think MVP is a little bit easier to implement, but MVC is easier to test. Both provide great mechanisms for testing, though.

Monday, February 11, 2008

Cool Automated Testing Tool

Found this tool on a buddy of mine's blog; it looks pretty cool. One of the downfalls I've seen in a lot of web testing tools (including MSTest, HP QTP, etc.) is the lack of good support for Ajax calls. QTP offers the ability to set "delays", but this is more of a hack than a good Ajax testing solution.

By no means do I think that testing Ajax, or for that matter developing an Ajax testing system, is a small task. Why do you think I'm looking for tools?

http://www.artoftest.com/Products.aspx

That is the link. Hopefully I'll get some time to play with this thing a little later. I still need to finish a post on integration testing with Workflow this evening, so that will take priority. But next weekend is a 3-day weekend, so you never know.

Sunday, February 10, 2008

Windows Workflow Integration Testing - Part I



And now for my first "official" post. In this post we are going to take a look at integration testing with a bunch of technologies, in particular Windows Workflow Foundation, MSTest, and (a little) Rhino Mocks. In my next post I may get into NCover as well, but I figure it is too much at this point.

To start: why do I use integration testing and not unit testing? Recently, both at work and among many professionals and hobbyists, I've seen a lot of people distinguishing the two, which I wholeheartedly agree with. I think a good testing strategy is a combination of both, as well as a number of other types of testing.

Unit testing a workflow activity doesn't really make sense. After all, a workflow represents a series of activities executing at specified times or in response to events external to the system. A powerful tool, though, is the ability to repeat the tests that the workflow will go through in an automated fashion. This post covers how I approached the situation and some of the reasoning behind it.

The workflow we are using is a rather simple sequential workflow, but more complex than the workflows we see in a lot of demos/tutorials. We use a DelayActivity, a couple of CallExternalMethodActivities, a ListenActivity, a HandleExternalEventActivity, a PolicyActivity, and a few loops. Something a little closer to what we would see in the real world, as well as not being related to an order. ;) Below is a screenshot of the workflow. Click on it for a larger image.



What does it do?

This is the first workflow from my new project on CodePlex, called YakShaver.NET. The purpose of this workflow is to take an item submitted from YaKapture (a screen capture & work item entry user control) and run it through an IAnalysisService, which maps context information (user, page) to work item tracker data such as Component and Release in CodePlex and/or TFS. Let's walk through it from a high-level perspective.

1) A consumer service (the YaKapture screen capture/work item entry user control) submits an instance of IWorkItemDataContract. This in turn is passed into the workflow when it is created.
2) A CallExternalMethodActivity is used to call IAnalysisService.AnalyzeSubmittedWorkItem, which looks at the data contract and returns an instance of YakShaverWorkItem. I'm not sure if I want to do this or just use the same data contract throughout the entire workflow. For now, they're separate.
3) From the results of this service, we run a policy over the data to determine if the item needs intervention, meaning that someone must come in and assign one or all of the required items for a CodePlex issue: Component, Release, Work Item Type, Impact Type, Title, and Description. The goal is to infer as much data as possible from information quietly passed by YaKapture along with the user data submitted. For example, anything with the path /qabuild1_2_07 is known to be associated with a release called QA Build 1.2.07.
4) If intervention is needed, we call out to a notification service (INotificationService), which can implement any method it wants to notify someone to intervene on this item so it can go into the work item tracker (or to the help desk, if that's where it needs to go?).
5) A Listen tree is implemented, with a DelayActivity on one side (you can pass the delay between notifications through the parameters passed in at CreateWorkflow) and a HandleExternalEventActivity on the other. At this point the workflow will continue to notify the service for eternity until the item is handled. A more realistic approach in the future may be to set up some sort of approval tree in the notification service; unnecessary for now.
6) When the HandleExternalEvent fires, the workflow sets the intervened flag to true, ending the inner while loop. This takes us back to the top of the workflow, where we re-run our analysis service and policy, determining again whether we need intervention.
7) When everything is finally taken care of, we call IWorkItemProviderService.CreateWorkItem, which creates the work item in the issue tracker. At this point the outer while loop knows to break, and the workflow completes as normal.

We could incorporate this workflow into a larger workflow, or perhaps into a state machine; that is one of the beauties of WWF. At least I think so.

Now that we know what the workflow does, how do we test it?

There are three integration tests that I have set up:

1) Test that the notification service is called and the data contract's data is read.
2) Test that when notified, a user can resubmit the data contract with updated information, and that when they do, the work item creation process is called and the workflow completes.
3) Test that when a notified user does not respond within a certain time, the notification is sent again, maintaining a count of the number of times the service has been called.

WWF is surprisingly easy to set up for integration testing within any testing framework, whether that is NUnit, MbUnit, or MSTest. I am using MSTest in this example, but an NUnit conversion would be minimal effort.

Let's look at the first test: checking that the notification service is called and the data contract is read. A delay activity keeps the workflow from going into hyperdrive and repeating the loop at a ridiculous rate. Plus, we are just attempting to test the workflow; we want to keep as many things constant as possible in order to reduce the number of failure points in the test (especially important as we regression test).

I've divided my services into pretty granular pieces, each having one method and maybe one event (not all interfaces had events, at least at the time of this blogging). This provides a couple of advantages:

1) By being granular I can narrow the points of failure in my process. This also allows me to toy with code both in and out of process and more easily measure code coverage (at least I think so).
2) I can independently mock each service to test how each one performs under conditions independent of other factors (i.e. other services), including the workflow process itself.

There are a few things to consider when testing workflow (I love lists):

1) It requires the WorkflowRuntime to be instantiated. Surprisingly, I found you can host multiple runtimes in a process (at least you can start multiple ones; something to explore in a later post...). This part is as simple as creating an instance and starting the runtime.
2) A lot of people (and I was certainly one of them) think that communication is as simple as calling certain .NET classes. It is close to that, but there is a communication layer within workflow that has to be considered. The simplest way to communicate with the hosting application (i.e. the testing host) is using the ExternalEvent and ExternalMethod activities. However, this requires having an interface defined to communicate, implementing a service, and using those activities. A bit much for testing, and then possibly having to rip it out for production. Say hello to tracking services!
3) Versioning - this can cause oddities if you test a workflow, modify it, and run the new version again without changing the version #. I find it easiest to simply delete all necessary data in the tracking and persistence DBs, or to redeploy the database script on each test run (effective, but it can get out of control if you don't manage it).
4) WorkflowMonitor - comes with the SDK and is probably one of your best visual test coverage tools; it puts little check marks over every activity that gets called. For fun, you could extend this application to count the times an activity was called (useful inside of loops).

With that out of the way...

First, I need to set up the workflow runtime. Using a unit test template from MSTest, I uncomment the [ClassInitialize] method. Because I am using events and methods to communicate, I need to add a data exchange service.


[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
    sqlTrackingConnectionString = "Data Source=.\\SQLExpress;Initial Catalog=WFTracking;integrated security=true;";
    sqlPersistenceConnectionString = "Data Source=.\\SQLExpress;Initial Catalog=WFPersistence;integrated security=true;";

    // recreate the tracking db

    // recreate the persistence db

    // initialize the runtime
    workflowRuntime = new WorkflowRuntime();
    dataExchangeService = new ExternalDataExchangeService();
    workflowRuntime.AddService(dataExchangeService);

    SqlTrackingService sqlTrackingService = new SqlTrackingService(sqlTrackingConnectionString);
    SqlWorkflowPersistenceService sqlPersistenceService = new SqlWorkflowPersistenceService(sqlPersistenceConnectionString);

    workflowRuntime.AddService(sqlTrackingService);
    workflowRuntime.AddService(sqlPersistenceService);

    workflowRuntime.StartRuntime();
}


In the example above I've added a tracking service and a persistence service, as well as the data exchange service. These are default services that come with WF, and they work perfectly for our testing situation.

This is only fired when the test run starts. We can remove all these services and shut down the workflow runtime in another method marked [ClassCleanup]. Shutting down can be somewhat important, because it will persist your workflows; a nice thing when you're trying to debug what exactly is happening.
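A minimal sketch of that cleanup, reusing the runtime field from the initializer above:

[ClassCleanup()]
public static void MyClassCleanup()
{
    // StopRuntime unloads running instances, which persists them through
    // the SqlWorkflowPersistenceService we registered earlier.
    workflowRuntime.StopRuntime();
    workflowRuntime.Dispose();
}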

Finally, a test!



 1:  [TestMethod, TestProperty("Category", @"Workflow\Integration\NotificationServiceTests")]
 2:  public void TestNotifyForIntervention()
 3:  {
 4:      INotificationDummyService notificationService = new INotificationDummyService();
 5:      IAnalysisDummyService analysisService = new IAnalysisDummyService();
 6:      AutoResetEvent wfIdledEvent = new AutoResetEvent(false);
 7:      Dictionary<string, object> namedParameters = new Dictionary<string, object>();
 8:      IWorkItemDataContract dataContract = new WorkItemDataContractDummyObject();
 9:
10:      DataExchangeService.AddService(notificationService);
11:      DataExchangeService.AddService(analysisService);
12:
13:      namedParameters.Add("WorkItemDataContract", dataContract);
14:
15:      WorkflowInstance workflowInstance = WorkflowEngine.CreateWorkflow(typeof(WorkItemEntryWorkflow), namedParameters);
16:
17:      // attach events if I want.
18:
19:      WorkflowEngine.WorkflowPersisted += delegate(object sender, WorkflowEventArgs args)
20:      {
21:          if (args.WorkflowInstance.InstanceId == workflowInstance.InstanceId)
22:              wfIdledEvent.Set();
23:      };
24:
25:      workflowInstance.Start();
26:
27:      wfIdledEvent.WaitOne(new TimeSpan(0, 0, 60), true);
28:      // grab the latest from the persistence store. If it doesn't find it, an exception will be thrown
29:      workflowInstance = WorkflowEngine.GetWorkflow(workflowInstance.InstanceId);
30:
31:      Assert.IsTrue(notificationService.NotifyForInterventionMethodCallCount > 0, "Notification Service did not return that it sent notification");
32:  }

Let's start by looking at a few key lines. First, lines 4 & 5: we declare two services following the DummyService naming convention. These are pretty much exactly what they sound like: stubbed implementations of a number of interfaces that could exist. We use dummy services in this case to, again, reduce the points of failure in our testing scenario. We'll have a chance later on to plug in "real" services to see how they respond when integrated into the workflow.

If we look at these implementations, we see they are not much different from something we could do with a mocking framework. However, I didn't have enough experience with mocking frameworks, and I was also having trouble because of the need for the SerializableAttribute. Not as pretty, but it accomplishes the same thing: instead of verifying mock expectations, we assert that a method was called (proof the workflow is doing what we expect it to).
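For illustration, a dummy service along these lines might look like the sketch below (the interface's exact shape is my assumption here, but the call-count field matches the assertion on line 31):

[Serializable]
public class INotificationDummyService : INotificationService
{
    // The test asserts against this counter to prove the workflow
    // actually reached the notification step.
    public int NotifyForInterventionMethodCallCount;

    public void NotifyForIntervention(YakShaverWorkItem workItem)
    {
        NotifyForInterventionMethodCallCount++;
    }
}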

Line 8 instantiates a dummy instance of IWorkItemDataContract, which is used in multiple places in our application. I shouldn't say "dummy" object in this sense, because it does have some intelligence. By that I mean that depending on how many times a method is called, it responds with different output to aid the workflow in later tests. I realize I could keep this in yet another dummy class, but I just don't see the need right now.

On line 13, we add the semi-smart data contract to the parameter collection that we are going to pass into the start of the workflow. One of the important things to remember about WWF is that there are specific ways to communicate with the instances in the runtime. I won't go into detail in this post, but using a Dictionary<string, object> is an easy way to set any public property on the workflow class. We need to do this because our workflow demands the contract almost immediately for the analysis service.
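In other words (a sketch, with the property shape assumed from the dictionary key on line 13), the workflow class exposes a matching public property that the runtime sets when the instance is created:

public sealed partial class WorkItemEntryWorkflow : SequentialWorkflowActivity
{
    private IWorkItemDataContract workItemDataContract;

    // The dictionary key "WorkItemDataContract" must match this property
    // name exactly, or CreateWorkflow will complain at creation time.
    public IWorkItemDataContract WorkItemDataContract
    {
        get { return workItemDataContract; }
        set { workItemDataContract = value; }
    }
}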

Some may notice I'm using an anonymous method on the WorkflowPersisted event. This checks whether the workflow that was persisted is the one we created, and if so, triggers an AutoResetEvent. Why persisted? It certainly isn't necessary (and it isn't possible without adding a persistence service to the runtime). Idled would work just as well, but I want to make sure that I can get the workflow back from the persistence store, which I do on line 29.

Finally, I do one assertion on the service to make sure the workflow called it.

I run my tests through MSTest on the test list and bingo! We go green!

You can download this code from the CodePlex project; the latest change is changeset 4960.

Thursday, February 7, 2008

First Post

I've always wanted a blog. I've read so many over the years that have helped me as a developer, so I figured I might as well try to participate as well. I realize it will be a long time until anyone (if ever) reads this. It's nice to think they would, but if not, hey, it's for my own sanity.

The basics: I work for a large company in Chicago that shall remain nameless; good people. MASSIVE development team, much larger than anything I have ever worked with, and we are masters of documentation. Many of us joke about how much of a waterfall approach it is, even though we try to implement little agile practices such as continuous integration, unit testing, integration testing, and recently mock objects. However, releases are many months apart, and most of the time in a release is spent documenting and having meetings. Given the nature of our business, I see that as important, and we are trying to find a good approach to giving the old guard what they want while adopting more agile methods to give users what they are demanding.

We suffer from the same things many others do: a system that was converted over from a good legacy system, which would have been just fine had they gone ASP -> ASP.NET. But alas, no. Some crazy architect (also to remain nameless) wanted to make it all enterprisey by creating his vision of a transcendent, super-pluggable framework. You know, the software that is perfect for any application and can scale, yada yada; that sales pitch you almost dread. Well, this is the end result of that sales pitch. I will say that I applaud the creator's effort, as I think it is academically brilliant, and I'm impressed by how he put it together. But it creates unnecessary overhead, and his definition of loose coupling was "everything is a DataSet." ;)

I'm not complaining; I just find it comical. I also find it challenging, although in a way I didn't expect. The challenge isn't delivering code; it is making sure you understand how data transforms across the many layers that exist in the infrastructure. Plus, it's a massive amount of data to analyze, which can be exciting (at least I think so). Plus it pays great and is strict on 40 hours, so I can go home and spend time with my wife.

This blog isn't about that, though I may relate some work experiences. No, this is more about the open source work that I've recently jumped into. I've actually been using open source software for quite some time (I've used DotNetNuke extensively for the past 4 years) and have built some pretty cool stuff with it.

I've joined the DNN Announcements module development team, and I am very excited to get to participate with DNN on such a level. We've got some good people on the team right now, and it's actually a pretty good size for an open source module. Plus we have people from the US, the Netherlands, and I believe Spain. I probably have that wrong, but I know it's a pretty spread-out team.

I've also created a couple of projects on CodePlex that I work on in my spare time. One shares the name of this blog - YakShaver.NET - which is an exploration of integration testing, unit testing, Windows Workflow, the TFS API, the CodePlex API, the DNN Forge API, a number of help desk applications, knowledge bases, and more. Basically, I want to look at software builds by following the life of a work item from start to finish (when does it need to be one? When is it a support item? How do we keep people involved?). I'm really hoping to use this in combination with the DNN development. The project wiki goes over the goals of the project. The other is DeliveryBoy, which I will comment on as well, but probably not for a while. I started that project looking for something to do, but then I was reading a book and wanted to do some yak shaving (building tools for tools; it could be considered a useful task, but not always necessary). I'm hoping people find it necessary.

That's it for now. I just finished my first integration tests for Windows Workflow and would post the results, but I'm tired and have to go to work early tomorrow. This weekend though.