Saturday, November 16, 2013

Bringing back Team Foundation Server Build Folders in 2013!

I've mentioned before that my job is pretty cool for a number of reasons, but most often it's the people I get to work with.

A few weeks ago I had the pleasure of being part of a team that was migrating 50+ active projects from Team Foundation Server 2008, along with literally hundreds of build definitions.  Beyond the build definitions, they had created a number of custom tasks to extend the build capabilities: complex deployment to multiple environments, custom metrics, you name it.

They did it with one TFSBuild.proj file.  Make no mistake.  One.  Impressive work, Tim Stall.

The challenge of hundreds of build definitions

I wish I was making that part up, but I'm not. These guys had hundreds of definitions. One may argue that it could have been set up differently; however, until you've witnessed the system in action, walked through the complexity, or understood all of the outputs, don't judge. This was the best way.

The company had structured their build definitions by product, then by branch, then by environment within each branch.  Perhaps this picture will explain it better.

As you can see, it's an ugly mess just begging for organization.  What's a programmer's best organizational tool?  Tree views.

Wait, didn't someone make something for this?

Yes!  Inmeta Solutions made a package for Visual Studio 2010 that was pretty awesome, but for whatever reason development stopped after 2010 and Microsoft never included it in later versions.  This became troublesome for those of us who loved the tool but also wanted to move beyond VS 2010.  When we attempted the jump to 2012, we found it to be simply incompatible.

Why didn't we just use that source code?  

Fellow coding cohort (and coordinator of the new project) Josh Rack had done some research on contributing to the existing project, but in the end we wanted to extend it a little differently.  Not because anything was wrong with the Inmeta solution; we just wanted to do something different.  That's the fun of open source/community development, isn't it?  :)

Josh started off using some of the existing code and trimming it back to the necessities.  That's when he contacted me and asked me to patch a few bugs and add some features so it would work more cleanly with Visual Studio 2012/2013 as well as Team Foundation Server 2012/2013.

Currently the supported features are simple: a hierarchical view of your build definitions using '.' as the separator character, plus the ability to view the queue, edit a definition, and queue a new build.  We've got a couple of items in the backlog that we think many users will enjoy (such as the ability to choose the separator character and share that choice across your entire team).
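For the curious, the grouping logic itself is tiny.  Here's a minimal sketch, in Python purely for illustration (the actual extension is a .NET package, and the definition names below are made up), of how flat, dot-separated build definition names fold into a tree:

    # Fold flat, dot-separated definition names into nested dicts
    # that mirror a tree view.
    def build_tree(definition_names, separator="."):
        tree = {}
        for name in definition_names:
            node = tree
            for part in name.split(separator):
                node = node.setdefault(part, {})
        return tree

    # Hypothetical names in the Product.Branch.Environment style:
    defs = [
        "Widgets.Main.Dev",
        "Widgets.Main.QA",
        "Widgets.Release.Prod",
        "Gadgets.Main.Dev",
    ]
    print(build_tree(defs))
    # {'Widgets': {'Main': {'Dev': {}, 'QA': {}}, 'Release': {'Prod': {}}},
    #  'Gadgets': {'Main': {'Dev': {}}}}

That's also why a configurable separator is worth having: the grouping falls straight out of whatever naming convention your team already uses.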

At this very moment, I'm waiting for Rack to publish the project so you can download the latest bits. There are two installers, one for 2012 and the other for 2013; for whatever reason we couldn't get it to play well with 2013 without using the 2013 libraries.  Maybe we'll fix that sometime. Until then, enjoy having build folders back!

You can grab the latest bits and log any issues over at CodePlex.  Let us know what you think!

Sunday, April 21, 2013

Real Life Lab Management – Developer Metrics and Value

For the past week I've been at a pretty progressive client: very sexy offices, young hipster types, no bureaucracy.  Basically the opposite of where I came from.  That in and of itself has been quite a change.

On top of that, this is my first assignment and I get to work with one of the coolest features in Visual Studio and Team Foundation Server 2012: Lab Management.  We were not going full-blown SCVMM Lab Management, but rather Standard Lab Management.

Are they significantly different?  That's a yes-and-no kind of answer.  No, because in the end it comes down to the Test Agents being installed on the machines and registering with the test controller, and TFS 2012 even makes it easy enough that you don't need to install them on each machine yourself.  In turn, this has made things really easy regardless of your choice of virtualization platform.

On the other hand, I say yes, they are different in how much of the feature set you can use.  With SCVMM environments it becomes exponentially easier to debug those pesky issues that turn into the 'no-repro' gremlins from production.

Alright, back to my progressive client.  It's a cool setup: they've recently upgraded everything to Visual Studio/Team Foundation Server 2012, ASP.NET MVC4, and Database Projects (the good ones); they brought someone in to write Web Tests and CodedUI tests; they bought a good amount of supporting hardware; and they asked us to tie it all together and make it happen in just under a week.  Success.

[Screenshot: lab build results]

It was amazing: through the lab management viewer, we were literally watching two Windows 7 virtual machines run a battery of CodedUI tests as if a person were actually sitting there.  At the end, all the information was dumped into the build detail, and you could see how you were doing.  It was glorious.

Ok. What does all this mean?

[Screenshot: lab child build results]

When I was at JPMorgan, there were a number of different ways they wanted to measure what was labeled developer efficiency. Some metrics were as simple as churn rates; some were more advanced, such as operational complexity, rate of change of code relative to requirements, etc. In the end we would get a single number, or a couple of numbers, and no one had any idea what they meant.

Let's fast-forward to our current implementation. This isn't your ideal environment for development, but it is a common one. This company has put together a fairly new team to take over a fairly large product, and people need to find ways to prove that what they are doing provides value. The double-edged sword we often face is trying to fix the application while figuring out how to justify that the work we've been doing provides real value. We know things like unit tests and code coverage are supposed to, but we struggle both to explain that and to make it look good.

What do we do instead? Work our asses off for weeks in hopes that management notices development is doing everything it can to make a better product, and grants some mercy.

Today, friends, I'm here to help you with that, in hopes of giving you both a little more work/home-life balance and even a little personal satisfaction. Let's take a look at some key metrics listed on our build report and what they tell us about progress and quality.

First off, we will start with the lab build and the CodedUI tests. One thing I notice is that blog entries like this one show a lot of successes, or they show a failure followed by an immediate success. That's the thing: these are not one-day stories; these builds are going to continue to fail for a while. For the first few weeks you can expect a color-blind man's nightmare of green and yellow, and quite a spike in bug rates. This is not uncommon, nor should it be used as a driver for some sort of large strategic change in your development organization.

[Screenshot: code coverage detail]

What you will find is a lot of data errors, especially in CodedUI tests, if you don't use something like a gold database that is guaranteed to hold the same data for each run and is kept current with the latest schema changes. Entity Framework and Code First do a really good job of providing a solution going forward, but most of us are working on already existing apps, under tight budgets, with enough other stressful factors that we never take the time to implement it properly. It took us close to 2 years at JPM to do it (granted, it was a pretty complex set of databases), and there are a lot of factors to consider that we often don't. I digress...

Does that mean they provide no value? Of course not!  The data just needs to be interpreted differently right now than it will be in a few weeks. For now, it means we need to focus on the data quality of our CodedUI tests. Do we implement something to seed data each time? Do we make our data more dynamic using existing data sources (even the target database itself, to determine which user names already exist)?
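To make that second option concrete, here's a minimal sketch of letting the target database drive the test data. It's Python against a hypothetical users table, purely for illustration; a CodedUI test would do the same thing through its data source or a small .NET helper:

    import sqlite3
    import uuid

    def unique_test_username(conn, prefix="uitest_"):
        # Keep generating candidates until one does not collide with
        # data already sitting in the target database, so a
        # 'create user' test can't fail just because an earlier run
        # left its data behind.
        while True:
            candidate = prefix + uuid.uuid4().hex[:8]
            row = conn.execute(
                "SELECT 1 FROM users WHERE username = ?", (candidate,)
            ).fetchone()
            if row is None:
                return candidate

    # Hypothetical usage against a local copy of the app database:
    conn = sqlite3.connect("app.db")
    print(unique_test_username(conn))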

What we should not attempt to infer is that the application is buggy (we know it is) and our development team is doing a poor job. This data is simply exposing what the teams would have discovered over a few months anyway; instead, the tools did it for us in a few minutes. That's a good thing: this is the beginning of our process, not the end.

Now we have a starting point for measuring how well we are doing at improving the quality of our integration tests.

Second metric: code coverage and unit tests. I'll be honest, I wasn't really a fan of unit tests for a long time. I felt they didn't expose bugs as fast as something like integration tests could, or at least not the bugs with more customer impact than the ones unit tests would find. The problem came back to selling my own value to management, to showing how good a job I was doing. They wanted to see numbers; I had no idea how to give them numbers, or at least how to explain them. Enter code coverage and what it means to developer productivity and application quality.

[Screenshot: code coverage summary]

You can see in the details of this build that, across our passing unit tests, only about 2% of the code that makes up this application is being tested. Again, early in the process of implementing new quality controls such as more complete unit testing and integration testing, we can expect what would be considered 'poor' metrics (like 2% coverage ;) ). It gives us a starting point. For developers, it becomes a way to show how, over the next few weeks, the amount of code tested is rising in comparison to the entire code base.

At this point in time, it's important not to focus on bug rates. As with CodedUI tests, you can expect a dramatic increase in the number of bugs discovered as the amount of code covered grows. The reasons are slightly different in this case (data shouldn't really be a factor in a unit test), but again it just exposes what you would have discovered later anyway. The other thing coverage does not tell you is whether the tested code fits the requirements; there are many other testing tools for that.

Lastly, for our developers, it comes down to how we make that 2% rise to 10% over the next 3 weeks. The beauty of Microsoft's code coverage and analysis tools is that they tell you exactly which blocks are covered and which are not. (By the way, when we say blocks, it can be as granular as the code inside an if {} statement.)

[Screenshot: code coverage detail]

If you click View Test Results on the unit tests, the results are downloaded to your local machine for analysis. The report details exactly which namespaces, methods, and blocks are tested and which are not. Double-click one of the lines, and red coloring indicates the uncovered blocks. Now you know exactly what code isn't executed by any test, and you can pretty easily add new tests to cover those conditions.

[Screenshot: uncovered blocks highlighted in red]

The value to the developer is twofold: you are not guessing what to write, and you gain the ability to communicate the value you and your team are providing to management in a very clear and measurable way.
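To make 'blocks' concrete, here's a tiny sketch using Python and coverage.py as a stand-in for Microsoft's tooling (the function and test are invented): a test that exercises only one branch leaves the other block flagged as uncovered.

    # shipping.py -- hypothetical code under test
    def shipping_cost(order_total):
        if order_total >= 100:
            return 0.0   # block A: free-shipping branch
        return 7.99      # block B: flat-rate branch

    # test_shipping.py -- covers only block A; running
    #   coverage run -m pytest
    #   coverage report -m
    # would flag block B (the flat-rate line) as uncovered
    # until we add a test for an order under $100.
    from shipping import shipping_cost

    def test_free_shipping():
        assert shipping_cost(150) == 0.0

The mechanics differ, but the payoff is the same as in Visual Studio: the report points at the exact uncovered line, so the next test practically writes itself.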

In my next article, we will take a deeper look at some of the reports that come with TFS 2012 and how managers can use them to communicate a concise, clear message to stakeholders about where development stands and what to expect in the future. These are not bad things: they allow the people with a heavy (usually financial) investment to help early in the project, and to spend more time thinking through the consequences, rather than making knee-jerk reactions late in the project that may have significantly more radical consequences.

Tuesday, April 9, 2013

Let me reintroduce myself

Before we start... 

After 5+ years, I've left JPMorgan Chase & Co. and taken a position at Polaris Solutions as a Senior Consultant for ALM using Microsoft Team Foundation Server. 

It's my dream job.  Seriously.

I get to work with some really cool characters like Angela Dugan and Chris Kadel, and do some amazing things going forward with Microsoft's ever-expanding ALM platform.

I enjoyed my years at Chase; I had an amazing team to work with.  Being an integral part of that team was critical as we transformed from a very traditional waterfall methodology to a mash-up of waterfall and agile processes, all using TFS 2008.  That experience gave me the opportunity to be recognized by the folks at Polaris, who are giving me the resources to expand and share my knowledge at a significantly faster rate than I could in a company of 250,000 people, 37,000+ of them developers.  Yeah, banks write a lot of software.

Surprisingly, part of that job involves writing about TFS and how it influences and adapts to each organization's unique ALM process.  The choice of topics is up to me, with the restriction that they be about ALM and not overdone.  Check.

Over the next few weeks I'll be focusing on several topics, but the two most notable will be the value within various reports and my personal favorite: implementing Team Foundation Server Lab Management for a second time.

As a preview to the second...

Well, I took it out.  According to the Ranger guide I may have misspoken in the original content... but I could swear I didn't...