Gated check-ins in TFS are awesome. They mean that any code checked in by your team will be built, tested, analysed and verified before it is committed to your source control repository. If the code doesn’t build, unit tests fail, or even if code analysis throws a load of errors, the check-in gets sent back to the developer for more work.
Visual Studio creates a shelveset for you and sends it to TFS for validation, and includes a handy little tool which alerts you when your validation build is complete; if the code passes and is committed, it lets you reconcile your workspace. Once or twice though, I’ve found that this tool doesn’t alert. How do you reconcile in that case? It’s not immediately obvious: find your build using Team Explorer and click the Actions menu at the top. There you’ll find a Reconcile option.
I’ve used a few ALMs, build managers and source code repositories over the years and I’d dipped into TFS now and again, but recently I got to see what TFS 2012 can really do. I remember back when TFS 2005 came out, watching a sysadmin quite literally pull their hair out over how frustratingly complex it was to install and configure, but now it’s an absolute breeze. If you really don’t want to take the plunge and install somewhere on site, you can try out Microsoft’s cloud-based offering, Team Foundation Service, which is currently free for up to 5 users.
TFS 2012 Express
TFS 2012 Express is Microsoft’s free version of Team Foundation Server. Like the other Express flavours of development software, it’s a great way to get a small team up and running on some very feature-rich software before spending any cash on CALs. As mentioned above, setup is little more than clicking next on a wizard a few times, made simpler by the fact that this is the Express edition and so there are a few limitations that keep configuration compact:
No SharePoint or Reporting Services Integration
SQL Server Express Only (installed as part of the setup if not already available)
Can only be installed on a single server (Db, App and Client Tiers cannot be split across servers)
Because Reporting Services integration is not available, I can’t generate reports on work items that have been completed. I can, however, still use the TFS API to query work items and generate a nice-looking release note using PdfSharp.
Creating the Shared Query in TFS
First we need to create a shared query in TFS to filter our work items. Log into the web portal, go to “Work” and create a new shared query there.
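The exact clauses are up to you, but behind the scenes a shared query is just WIQL. A release-note style query might look something like this (field choices and the ‘Done’ state are illustrative, not from the original post):

```sql
SELECT [System.Id], [System.Title], [System.WorkItemType], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.State] = 'Done'
ORDER BY [System.Id]
```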
First we need an object to store the WorkItems in.
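Something simple is enough here; the class and property names below are placeholders for whichever fields you want on the release note, not the original post’s code:

```csharp
// Simple DTO holding the work item fields we care about.
// Names are illustrative, not taken from the original article.
public class ReleaseNoteWorkItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string WorkItemType { get; set; }
    public string State { get; set; }
}
```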
Next, let’s write a class that will query TFS for the work items based on the request object passed in. We need to specify where the TFS server is, where the query can be found (Shared Queries) and the name of the query. We also specify the iteration number to make sure we only get work items associated with this iteration.
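A sketch of that class, using the TFS client API (Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client), might look like the following. The request object and its property names are assumptions, not the original code:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

public class WorkItemQueryService
{
    public IList<WorkItem> GetWorkItems(ReleaseNoteRequest request)
    {
        // Connect to the project collection, e.g. http://tfsserver:8080/tfs/DefaultCollection
        var collection = new TfsTeamProjectCollection(new Uri(request.CollectionUri));
        var store = collection.GetService<WorkItemStore>();

        // Navigate to the named query under "Shared Queries"
        var project = store.Projects[request.ProjectName];
        var sharedQueries = (QueryFolder)project.QueryHierarchy["Shared Queries"];
        var definition = (QueryDefinition)sharedQueries[request.QueryName];

        // Run the query, supplying a value for the @project macro
        var context = new Dictionary<string, string> { { "project", project.Name } };
        var query = new Query(store, definition.QueryText, context);

        // Keep only work items for the requested iteration
        var results = new List<WorkItem>();
        foreach (WorkItem item in query.RunQuery())
        {
            if (item.IterationPath.EndsWith(request.IterationNumber))
                results.Add(item);
        }
        return results;
    }
}
```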
We should now have a list of work items available (assuming your TFS user has permission to use the shared query). What you then do with the work items is up to you. The example code feeds the list into another service that generates a PDF file using the template PDF files specified in the request. It also uses the Build Number and Iteration Number for the title page and footers. You could also use these classes in a CodeActivity and make it part of the TFS build, feeding in the Build Number and Iteration Number directly from the build itself.
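As a rough idea of the PDF step, a minimal PdfSharp sketch (invented layout and method names, skipping the template loading) could look like this:

```csharp
using System.Collections.Generic;
using PdfSharp.Drawing;
using PdfSharp.Pdf;

// Minimal sketch: one page listing work item titles.
// Real code would load the template PDFs from the request instead.
public static class ReleaseNotePdf
{
    public static void Write(string path, string buildNumber,
                             IEnumerable<string> workItemTitles)
    {
        var document = new PdfDocument();
        var page = document.AddPage();
        var gfx = XGraphics.FromPdfPage(page);
        var bodyFont = new XFont("Verdana", 10);

        gfx.DrawString("Release Notes - Build " + buildNumber,
                       new XFont("Verdana", 16), XBrushes.Black, new XPoint(40, 40));

        double y = 70;
        foreach (var title in workItemTitles)
        {
            gfx.DrawString("- " + title, bodyFont, XBrushes.Black, new XPoint(40, y));
            y += 15;
        }

        document.Save(path);
    }
}
```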
The Dependency Inversion Principle in SOLID states that you should depend upon abstractions, not implementations. With that in mind, services should not be creating instances of log4net loggers themselves, but rather depending on an abstraction of that service. However, creating a log4net logger requires the type to be passed in when it is instantiated. For some IoC containers which allow service location (and if you don’t strictly adhere to IoC principles), this is fine, as you can locate the service whilst passing a parameter into the constructor. Using constructor injection with a container such as Autofac, however, means you need to rely on the dependency being passed correctly.
The Log Service
I created an interface for a log service, which is currently implemented using ILog from the log4net library. At some point I can simply copy these declarations into my ILogService interface if I decide to use something other than log4net. Below is the implementation of the LogService (or part of it; the whole implementation is huge!). It takes a generic type, which it then passes into the logger when it is created.
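Trimmed right down, the shape of it is along these lines (only a handful of the methods shown, and the member names are assumed to mirror log4net’s ILog):

```csharp
using System;
using log4net;

// The interface currently mirrors the log4net ILog members we use.
public interface ILogService<T>
{
    void Debug(object message);
    void Info(object message);
    void Warn(object message);
    void Error(object message, Exception exception);
}

// Generic implementation: the closed type T names the underlying logger.
public class LogService<T> : ILogService<T>
{
    private readonly ILog _log = LogManager.GetLogger(typeof(T));

    public void Debug(object message) { _log.Debug(message); }
    public void Info(object message) { _log.Info(message); }
    public void Warn(object message) { _log.Warn(message); }
    public void Error(object message, Exception exception) { _log.Error(message, exception); }
}
```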
Configuring the container
Register the service with the container as an open generic (the type asked for in the interface will be passed to the concrete).
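With Autofac that registration is a one-liner, since RegisterGeneric handles open generics:

```csharp
var builder = new ContainerBuilder();

// Asking for ILogService<HomeController> will yield a LogService<HomeController>
builder.RegisterGeneric(typeof(LogService<>))
       .As(typeof(ILogService<>))
       .InstancePerDependency();

var container = builder.Build();
```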
Using the log service
Introduce and initialize the service in the constructor (pass in the type of the class where you’re using the log service, in this case the HomeController). Then simply call the service in the same way that you’d call log4net.
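For example, in an MVC controller it would look something like this:

```csharp
public class HomeController : Controller
{
    private readonly ILogService<HomeController> _logService;

    // Autofac supplies the closed generic LogService<HomeController>
    public HomeController(ILogService<HomeController> logService)
    {
        _logService = logService;
    }

    public ActionResult Index()
    {
        _logService.Info("Index requested");
        return View();
    }
}
```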
There are a few benefits to using this log service rather than an instance of log4net itself. If at any point you change logging frameworks, then assuming there are similar methods (debug, warn, info etc.) you can simply switch them out in the log service itself, rather than in every place you’ve used the logger. You can also unit test and ensure that a class you’re testing is logging (and at the correct points) by mocking the interface.
We’ve been playing with our TFS builds recently and now have a system in place where our nightly build will drop and recreate the project’s databases based on the schemas in the database projects. As part of the publish, MSBuild will generate a script file containing either the entire schema (nightly build) or any schema changes found by running a comparison (QA environment and higher).
One issue we had was that we wanted a post-deploy script to run in any data required for testing the nightly environment (users, reference data, even a few test customer records). Fine, database projects handle this: simply mark the script file’s build action as Post Deploy Script and the script will run after the schema changes have been run in. Only one problem: this post-deploy script will run regardless of environment. On our QA environment, we already had the data we needed, since we weren’t dropping the database, just running in the schema changes.
I found a Stack Overflow question here – http://stackoverflow.com/q/7151021/503734 – which talks about changing the sqlproj file to copy certain files into a location that the rest of the build is looking for to run as a post deploy script. A bit messy. Here’s how I eventually handled it…
Add a SQLCMD Variable
Right click on the sqlproj in your solution and go to Properties. You’ll see a SQLCMD Variables tab. Add a variable, something like $(EXECUTEPOSTDEPLOY).
Add a Condition to your Post Deploy Script
Check the $(EXECUTEPOSTDEPLOY) variable, and if it’s not equal to ‘True’, SET NOEXEC ON. This basically turns off execution of statements for that connection. You may want to SET NOEXEC OFF again at the end if you have other scripts to run.
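The top and bottom of the post-deploy script end up looking something like this (the insert in the middle is a placeholder for your own test data):

```sql
-- Skip the rest of the script unless the publish profile opts in
IF '$(EXECUTEPOSTDEPLOY)' <> 'True'
    SET NOEXEC ON;

-- Test data for the nightly environment goes here, e.g.:
-- INSERT INTO dbo.Users (UserName) VALUES ('test.user');

-- Re-enable execution for anything that follows on this connection
SET NOEXEC OFF;
```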
Set the SQLCMD Variable depending on configuration
In your publish profile for your required configuration, set the SQLCMD Variable to either true or false (depending on whether you want to run the script or not).
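In the publish profile (.publish.xml) itself, the variable is stored as an item in an ItemGroup, something like:

```xml
<ItemGroup>
  <SqlCmdVariable Include="EXECUTEPOSTDEPLOY">
    <Value>True</Value>
  </SqlCmdVariable>
</ItemGroup>
```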
Run your build (with publish enabled, obviously!) and the script will either run or not depending on your setup. You can set other SQLCMD variables and play around with the conditions to allow the script to run. You could also SET NOEXEC ON or OFF at various points to selectively run parts of the script (though it would probably be better to split it into separate scripts in that case).