Category Archives: .NET

Vermont Code Camp

Just got back from a road trip to Burlington, VT for Vermont Code Camp 3. It was a great code camp: lots of good sessions and speakers, well organized, and generally a lot of fun.

The first session I attended was Dane Morgridge’s talk on ASP.NET MVC3: A Gateway to Rails. I was really interested in this talk because, as a TDD junkie, I was curious whether part of the “gateway to Rails” involved embracing testing. I still feel like the .NET community is behind the curve on TDD and BDD, and I wanted to hear Dane’s perspective on making the switch from .NET to Rails. His presentation was very much an overview of the goodness in Rails and ASP.NET MVC3, like convention over configuration and how the separation of concerns in MVC allows for a more testable system. Afterwards I was able to catch up with Dane between sessions and he showed me a few things he is doing with Cucumber and BDD. I have to say the readability of Cucumber makes it look very appealing, and we talked about how BDD tests can be more maintainable than TDD/unit-level tests. There is a right time to use one and a right time to use the other.

Next up was David Howell’s Tackling Big Data with Hadoop. I knew very little about Hadoop, but had heard a few good things about it and was looking to learn a bit more. David’s talk focused on using Hadoop’s Map-Reduce capabilities to handle large data sets. We also talked a bit about what constitutes “big data”. If you are not familiar with Map-Reduce, check out Google’s 2004 paper on it. David went over the basics of a Hadoop cluster and showed how it could be used in a distributed architecture to tackle a big data set. He also touched on how some of the fault tolerance could be implemented using the cluster and distributing the same job to multiple nodes. To wrap up he ran a simplified Map-Reduce demo on a small data set that was appropriately scaled for the time we had available.

I would have to say the third session was one of my favorites, Functional Programming on the JVM. This talk was given by Jonathan Phillips and another developer named Jack (sorry Jack, missed your last name), both from Dealer.com. As someone who has spent the last…well, let’s just say a lot of years doing OO programming, I struggle with the shift to functional programming, especially when state and immutability come into play. I have been to several functional programming and F# presentations that have left me confused, but 5 minutes into their talk Jonathan and Jack went over closures and recursion as they apply to functional programming and it was like a light going off. Suddenly immutability made more sense. After the overview of functional programming they went into 3 different languages that can be used to do functional programming on the JVM: Groovy, Scala, and Clojure. They used a simple example, comparing the amount and clarity of code you would write using pure Java versus each of the 3 other languages. They also went over some of the pros and cons of Groovy, Scala, and Clojure. It left me with a lot to think about, and I am eager to experiment with all 3 languages.

Another great talk was Free and Open Source Software (FOSS) in the Enterprise by Kevin Thorley, also from Dealer.com. Kevin laid out what you need to consider before using FOSS: What is the community support like? How many contributors are there? How robust is it? How easy is it to use, and what documentation exists? Do you really need it, or do you want to use it just because you read some great blog post about NoSQL? One thing Kevin said really rang true: open source is not free, and we need to understand and accurately assess what the cost is to use and maintain it. Kevin went over how they use MongoDB, RabbitMQ, Spring, and Solr at Dealer.com. Don’t misinterpret Kevin’s statement about it not being free. Obviously they rely heavily on FOSS at Dealer.com, but you have to be honest about why you are using it and what the cost is going to be to your organization to use “free software”.

I give a lot of credit to the guys from Dealer.com for their participation at the code camp. They are a Burlington-based company and they had a significant presence at the event. It is one thing to say that you are passionate about software; it is another to do something about it. Developers from Dealer.com gave 4 of the 26 total sessions. I followed up with a few of them after the sessions and you can tell they are really into the technologies they were presenting.

The last 2 presentations were familiar ground for me, but I still learned a few things. The first was Vincent Grondin’s talk on mocking and mocking frameworks. Vincent gave a very good discussion of mocking and the two types of mocking frameworks available: those based on dynamic proxies, and those based on the .NET Profiler APIs. Examples of frameworks based on dynamic proxies are Moq and NMock3. Examples of .NET profiler-based frameworks include Telerik’s JustMock and Typemock Isolator. The .NET profiler-based mocking frameworks have a significant advantage in being able to mock out static methods and even system calls. I have been using Moq (and previously NMock2) and I have had to write custom wrapper classes to work around problems mocking static methods and system calls.
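As an example of the kind of wrapper class I mean, here is a minimal sketch (the ISystemClock and InvoiceService names are made up for illustration, and the test uses NUnit, though any test framework would do): a static call like DateTime.Now gets hidden behind an interface so it can be mocked with Moq.

using System;
using Moq;
using NUnit.Framework;

// Hypothetical wrapper: hides the static DateTime.Now call behind an interface
// so code that depends on the current time can be isolated in a unit test
public interface ISystemClock
{
    DateTime Now { get; }
}

public class SystemClock : ISystemClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class InvoiceService
{
    private readonly ISystemClock _clock;

    public InvoiceService(ISystemClock clock)
    {
        _clock = clock;
    }

    public bool IsOverdue(DateTime dueDate)
    {
        return _clock.Now > dueDate;
    }
}

[TestFixture]
public class InvoiceServiceTests
{
    [Test]
    public void InvoicePastItsDueDateIsOverdue()
    {
        // The mocked clock returns a fixed time instead of the real system time
        var clock = new Mock<ISystemClock>();
        clock.Setup(c => c.Now).Returns(new DateTime(2011, 9, 20));

        var service = new InvoiceService(clock.Object);

        Assert.IsTrue(service.IsOverdue(new DateTime(2011, 9, 1)));
    }
}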

The day wrapped up for me with Josh Sled’s talk on dependency injection. Josh gave a good overview of why you would want to use DI, but also gave an honest assessment of the pros and cons of using it. I think any time we decide to take on a new framework we need to understand what the cost is (similar to what Kevin Thorley said). I am a big fan of DI and IoC frameworks, but you need to be aware of the cost of using that framework. We touched on a few IoC frameworks that you can use, like Spring (Java), Castle Windsor (.NET), and StructureMap (.NET).
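If you have not seen dependency injection before, here is a minimal constructor-injection sketch (the names are invented for illustration). A container like Castle Windsor or StructureMap essentially automates the wiring that Main does by hand here, and the payoff is that a test can pass in a fake IMessageSender.

using System;

// The consumer depends on an abstraction, and the concrete
// implementation is injected through the constructor
public interface IMessageSender
{
    void Send(string message);
}

public class SmtpMessageSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine("Sending via SMTP: " + message);
    }
}

public class OrderProcessor
{
    private readonly IMessageSender _sender;

    // The dependency is supplied from the outside, which makes it easy
    // to pass in a fake IMessageSender in a unit test
    public OrderProcessor(IMessageSender sender)
    {
        _sender = sender;
    }

    public void Process(string orderId)
    {
        _sender.Send("Order " + orderId + " processed");
    }
}

public class Program
{
    public static void Main()
    {
        // Wired by hand here; an IoC container would resolve this graph for you
        IMessageSender sender = new SmtpMessageSender();
        OrderProcessor processor = new OrderProcessor(sender);
        processor.Process("12345");
    }
}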

All in all it was a great code camp. It is usually hard to justify spending a beautiful late summer day in the basement of UVM, but it worked in this case. I am really looking forward to all the slides being available so I can check out some of the sessions I couldn’t make.

Thanks to Julie Lerman, Rob Hale, and all the other organizers, volunteers, and sponsors for putting together a great event.


State of the .NET Culture

I listened to a podcast the other day from the guys at Herding Code that I found a bit disturbing, mostly because I think there is some amount of truth in it. The podcast was about some fairly well known .NET developers who in the past few years have moved away from .NET into the Ruby world. Mike Moore and Scott Bellware were very active in the ALT.NET community and had a fairly well publicized split from the group. Google it if you want; I am not interested in rehashing it here. Jeff Cohen was a .NET developer who switched over to Ruby on Rails several years ago. Jeff also has a book out called Rails for .NET Developers that looks interesting.

These guys have all switched over to primarily Ruby on Rails development. I have never developed any production Ruby code, so I can’t speak much to the benefits of Ruby, but the part of the podcast that really concerned me was the characterization of the larger .NET community as not being interested in design, testing, or having clean, simple code. And what really scared me is that I think there is a fair bit of truth to it. As a .NET developer who cares a lot about TDD, good design, continuous integration, and automated testing, I had to admit to my frustrations in the past with a larger community that may not care as much about these core principles as it does about what is coming in the next framework or set of tools.

I am not talking about the thought leaders in the .NET space like Jeremy Miller or Ayende Rahien, or the folks that I meet at local developer events. These are people who are either pushing the envelope or at the very least trying to learn better ways to build great software. But I can’t tell you how many times I have gotten less than satisfying answers while interviewing candidates to questions like: How do you test your code? What blogs do you read? What new languages have you tried recently, and how do they compare to what you are using in terms of ease of use, testability, and performance? Why did you use a particular tool or framework, and what were its strengths and weaknesses?

I have been able to do a little Python recently and have to say that the Python developers I have worked with are very familiar with the variety of tools available to them.  Testing (especially TDD) and the goal of clean, simple code seem to be more baked into the culture and frameworks.

I am hopeful that the .NET community will continue to embrace good software development practices and be willing to look outside the Microsoft stack for good ideas and technologies. At the same time I plan on looking more at the Ruby and Python development communities. Give the podcast a listen; you may find it interesting and hopefully a little thought provoking.

Head in the Cloud with Windows Azure

Microsoft hosted a developer conference in Boston today. I think for the most part you get what you pay for, and the $99 price of this event told me it would probably be more marketing than technology. But it’s always good to get out and hear what’s going on and talk with other developers.

There were several topics of interest to me at the conference, including F#, Silverlight, Visual Studio Team System 2010, and ASP.NET 4.0, but the thing I was most interested in learning about was Azure, Microsoft’s foray into cloud computing. I thought that this conference would give me a good 50,000 foot view of Microsoft’s plans for a cloud computing platform.

In the keynote, Amanda Silver referenced the Battle of the Currents, where in the early days of electrical distribution Thomas Edison’s system of direct current (DC) was pitted against Nikola Tesla and George Westinghouse’s system of alternating current (AC).  One of the disadvantages of direct current was that the power generation had to be close by due to power loss associated with transmission.  This meant that a manufacturing plant might need to have its own electrical plant, with all the associated capital costs and maintenance. This is much the same as technology companies today incurring the capital cost and IT support of maintaining their own data centers.  The ability to buy electric power from a utility company would allow the consumer to focus on their business and treat the incoming power as a service.

Microsoft’s intention in this market seems to be to offer a similar utility model that would have the benefits of scalability, redundancy, and IT support, and allow a company that subscribes to the Microsoft data center services to focus on its own business domain. As any of us who have had internal data centers know, power outages, scaling, and IT support (security patches, etc.) can be a real headache for a developer, and data center support keeps us from doing our real jobs: designing and writing code.

Now before you think I drank too much Microsoft Kool-Aid, I am just saying that is the idea behind cloud computing in general. Why absorb the capital outlay and support costs of setting up and maintaining a data center when you can lease those capabilities? Theoretically, this could lead to better support, scalability, and fault tolerance. Microsoft seems to be positioning itself for providing data center services and support in the future. How far off in the future is a question that remains to be answered.

The presentations on Windows Azure painted a fairly realistic picture of where the technology is today, and I give the presenters credit for that. Michael Stiefel’s presentation was good and gave the 50,000 foot view that I was looking for. He also drilled down a little bit into the services provided in Azure at a high level. Ben Day’s presentation was particularly good, I thought, pointing out the potential of the technology while balancing that against some of the limitations of the current implementation, particularly related to Azure data storage. I would assume their blogs will have the presentations at some point in the future, so you may want to check in.

The presentations and keynote showed how you can get started with Azure today using Visual Studio 2008. You will need to install the Azure SDK and the Azure Tools for Visual Studio, which you can find here. Azure applications can execute locally on the Development Fabric, which is a simulated cloud environment on your desktop. You can also deploy a service to run in the Azure cloud, but you need to set up an account for that, and for the purposes of learning about the technology the Development Fabric seems adequate. The developer “experience” (another big buzzword at the conference) is the same as developing other Windows apps in Visual Studio. You can debug applications normally if they are running in the Development Fabric; however, once they are deployed, the only debugging mechanism is logging statements.

This is definitely a “down the road” technology, and there are several kinks to work out, but if you want to be ahead of the curve it might not be a bad idea to try some of it out.  One of the things that came up in the presentations is that Microsoft is on the fence on some aspects of the implementation and will be looking to the developer community for feedback.  We’ll have to wait and see how all this plays out, but I am certainly willing to give it a spin.

Using the Automatic Recovery Features of Windows Services

Windows Services support the ability to automatically perform some defined action in response to a failure. The recovery action is specified in the Recovery tab of the service property page (which can be found in Settings -> Control Panel -> Administrative Tools -> Services). The Recovery tab allows you to define actions that can be performed on the 1st failure, 2nd failure, and subsequent failures, and also provides support for resetting the failure counters and how long to wait before taking the action. The allowed actions are:

  • Take No Action (default)
  • Restart the Service
  • Run a Program
  • Restart the Computer

Having this type of functionality is really helpful from the perspective of a developer of services. Who wants to re-invent the wheel and write recovery code in the service when you can get it for free? Plus it allows the recovery to be reconfigured as an IT task as opposed to rebuilding the software. I did discover a few gotchas along the way, though.
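As a side note, the same recovery settings can also be scripted with the built-in sc.exe tool rather than clicked through the Recovery tab, which fits that "IT task" idea nicely. A rough example matching the configuration I use later in this post (restart on the first two failures, no action on the third, counters reset after one day); I am going from memory on the syntax, so check sc failure /? before relying on it:

     sc failure TestFailingService reset= 86400 actions= restart/0/restart/0/""/0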

Building a Basic Service
Let’s start with the basics, though. You can create a service in Visual Studio by adding a project and selecting the type “Windows Service”. This will populate an empty service. The first thing I like to do is give the classes and service real names. In this case I am testing that the recovery from a failure works, so I renamed my class file to FailingService.cs (not something I would recommend calling a product, but in this case it fits). I also changed the ServiceName in the Properties window from Service1 (the default) to TestFailingService. This is the name that is going to appear in the Windows Services dialog, so I strongly recommend changing it.

You will also want to add an installer for your service as this will make it much easier to install the service onto your computer. You can’t run the service like a regular EXE file, so you definitely want an installer here.
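For reference, the designer-generated installer ends up looking roughly like the sketch below (your generated file will differ in the details; the service name assumes the TestFailingService name chosen above):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

namespace TestService
{
    // Roughly what the designer-generated installer contains
    [RunInstaller(true)]
    public class ProjectInstaller : Installer
    {
        public ProjectInstaller()
        {
            // The account the service runs under; ServiceAccount.User will prompt
            // for credentials during install, LocalSystem will not
            ServiceProcessInstaller processInstaller = new ServiceProcessInstaller();
            processInstaller.Account = ServiceAccount.LocalSystem;

            // Must match the ServiceName set on the service class
            ServiceInstaller serviceInstaller = new ServiceInstaller();
            serviceInstaller.ServiceName = "TestFailingService";
            serviceInstaller.StartType = ServiceStartMode.Manual;

            Installers.Add(processInstaller);
            Installers.Add(serviceInstaller);
        }
    }
}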

Now that we’ve done all the basics, you should be able to build your service and install it. You can install the service using the InstallUtil.exe program that comes with the .NET Framework. Open a Visual Studio Command Prompt (this will already have a path to InstallUtil) and run

     InstallUtil ServicePath

Note: If there are spaces in the path you should enclose the path in quotes, as in InstallUtil “C:\Documents and Settings\user\My Documents\Visual Studio 2005\Projects\ServiceRecoveryTests\TestService\bin\Debug\TestService.exe”

Depending on how your PC is configured, you may be prompted to log in as part of installing the service. If you are on a domain you should use the full user name of domain\user.

From the Services tab you should now be able to start and stop your service. You should also be able to go to the Windows Event Viewer and see events related to starting and stopping your service in the Application and System logs.

To uninstall the service use the command above, but with the /u option, as in

     InstallUtil /u ServicePath

Error Recovery
What I really wanted to do was to work with the failure recovery features of the service, so the first thing I did was create a thread that would simulate an error on some background processing of the service. The thread sleeps for 30 seconds and then throws an exception.

Since this exception is being thrown on a different thread than the main thread, I need to subscribe to the AppDomain’s UnhandledException event. If I don’t do this the thread will just die silently and the service will continue to run, which is not what I want.

Initially I thought I could just take advantage of the ServiceBase class ExitCode property. I figured that if I set the ExitCode property to a non-zero value and stopped the service that would be interpreted as a failure and the service would automatically be restarted, as in

        void UnhandledExceptionHandler(object sender, UnhandledExceptionEventArgs e)
        {
            ExitCode = 99;
            Stop();
        }

That is not the case though as I found out here. “A service is considered failed when it terminates without reporting a status of SERVICE_STOPPED to the service controller.”

So from this definition I decided that I needed to throw an exception in the Stop event handler; otherwise Stop would return normally and it would not be considered a failure by the SCM. What I finally ended up doing was having the unhandled exception handler cache the unhandled exception and call Stop. The Stop event handler then checks if there is an unhandled exception and, if so, wraps it in a new exception and throws it. Wrapping the exception, and passing the asynchronous exception as the InnerException, preserves the call stack of the asynchronous exception.

Examining the Event Viewer
I configured the service to restart after the 1st and 2nd failures, but to take no action after the third. The fail counters will reset after 1 day and the service will restart immediately after a failure.

[Screenshot: Configuring Service Recovery]

The service is written to fail after 30 seconds and the recovery mechanism should restart it immediately. If you start the service you should see entries in the Event Viewer’s Application or System log showing the service starting, failing, and restarting.

[Screenshot: Viewing Service Events]

You can also get more information by opening up some of these events, such as service state transitions, how many times the error has occurred, or detailed error messages. Here is an example: [Screenshot: Examining Service Events]

Resetting the Error Counters
Unfortunately the Reset fail count after and Restart service after fields in the Recovery tab only take integer values. It would have been nice to have granularity of less than a whole day for the reset of the counters to take effect. If you are running this service repeatedly (like I was doing during testing) the counters may be too high to automatically restart the service. If you expected a restart and didn’t get one, examine the entries in the Event Viewer’s System log and you should see something like this

If you want to reset the counters you can set the Reset fail count after field to zero which will cause the counters to reset after each failure.

Turning Off the JIT Debugger
One of the main reasons you write a service is to do something without user intervention (or even without a user logged in). One of the problems with this implementation is that you end up throwing an unhandled exception, by design, from the Stop event handler. If the Microsoft Just-In-Time (JIT) debugger is configured to run on your system it will prompt you to debug the application. For a service that you want to auto-recover this is a really bad thing, since….well, there might not be anyone there to answer the prompt.

I found a few tips on how to turn this off. You can read about them here.

Note: If you have multiple versions of Visual Studio installed (I have 2005 and 2003 installed) you have to turn off the JIT debugger for each version.

Service Source Code
Here is the source code for the service. The service is designed to fail after 30 seconds to test the recovery mechanisms of the Windows services. The code automatically generated for the installer was not modified at all, except to change the service name which was described above.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.ServiceProcess;
using System.Text;
using System.Threading;

namespace TestService
{
    public partial class FailingService : ServiceBase
    {
        private Thread _workerThread;

        // Used to cache any unhandled exception
        private Exception _asyncException;

        public FailingService()
        {
            InitializeComponent();

            // Wire up the UnhandledException event of the current AppDomain.
            // This will fire any time an unhandled exception is thrown
            AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(UnhandledExceptionHandler);
        }

        void UnhandledExceptionHandler(object sender, UnhandledExceptionEventArgs e)
        {
            // Cache the unhandled exception and begin a shutdown of the service
            _asyncException = e.ExceptionObject as Exception;
            Stop();
        }

        protected override void OnStart(string[] args)      
        {
            // Start up a thread that will simulate a failure by throwing an unhandled exception asynchronously
            _workerThread = new Thread(new ThreadStart(SimulateAsyncFailure));
            _workerThread.Start();

        }

        protected override void OnStop()
        {
            // If there was an unhandled exception, throw it here
            // Make sure to wrap the unhandled exception, as opposed to just rethrowing it, to preserve its StackTrace
            // To wrap it, create a new Exception and pass the unhandled exception as the InnerException
            // The exception info will be in the Windows Event Viewer's Application log
            // You could also use some logging/tracing mechanism to capture information about the exception 
            if( _asyncException != null )
                throw new InvalidOperationException("Unhandled exception in service", _asyncException );
        }

        private void SimulateAsyncFailure()
        {
            // Simulate an asynchronous unhandled exception by sleeping for some time and then throwing
            // There is no caller to catch this exception since it is running in a separate thread
            Thread.Sleep((int)TimeSpan.FromSeconds(30).TotalMilliseconds);
            throw new InvalidOperationException("Simulating a service error");
        }
    }
}

Programmatically Configuring Recovery
The installers do not provide any support for programmatically setting up the recovery actions, which I find a little frustrating. I did find some code here that will allow you to do this, but have not had a chance to test it out or incorporate it into the sample.
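For the curious, here is a rough sketch of what that approach might look like: a P/Invoke call to the Win32 ChangeServiceConfig2 API with the SERVICE_CONFIG_FAILURE_ACTIONS info level. This is illustration only, not the code referenced above, and I have not tested it; the restart count and delays are arbitrary, so treat it as a starting point.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

namespace TestService
{
    // Untested sketch: sets "restart the service" failure actions programmatically
    // by calling the Win32 ChangeServiceConfig2 API
    public static class ServiceRecoveryConfigurator
    {
        private const int SERVICE_CONFIG_FAILURE_ACTIONS = 2;
        private const int SC_MANAGER_ALL_ACCESS = 0xF003F;
        private const int SERVICE_ALL_ACCESS = 0xF01FF;
        private const int SC_ACTION_RESTART = 1;

        [StructLayout(LayoutKind.Sequential)]
        private struct SC_ACTION
        {
            public int Type;    // e.g. SC_ACTION_RESTART
            public int Delay;   // delay before the action, in milliseconds
        }

        [StructLayout(LayoutKind.Sequential)]
        private struct SERVICE_FAILURE_ACTIONS
        {
            public int dwResetPeriod;   // seconds before the failure count is reset
            public IntPtr lpRebootMsg;  // not used here
            public IntPtr lpCommand;    // not used here
            public int cActions;
            public IntPtr lpsaActions;  // pointer to an array of SC_ACTION
        }

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        private static extern IntPtr OpenSCManager(string machineName, string databaseName, int access);

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        private static extern IntPtr OpenService(IntPtr scManager, string serviceName, int access);

        [DllImport("advapi32.dll", SetLastError = true)]
        private static extern bool ChangeServiceConfig2(IntPtr service, int infoLevel, ref SERVICE_FAILURE_ACTIONS info);

        [DllImport("advapi32.dll", SetLastError = true)]
        private static extern bool CloseServiceHandle(IntPtr handle);

        public static void RestartOnFailure(string serviceName, int restartDelaySeconds, int resetPeriodSeconds)
        {
            IntPtr scm = OpenSCManager(null, null, SC_MANAGER_ALL_ACCESS);
            if (scm == IntPtr.Zero)
                throw new Win32Exception();

            IntPtr service = IntPtr.Zero;
            IntPtr actionsBuffer = IntPtr.Zero;
            try
            {
                service = OpenService(scm, serviceName, SERVICE_ALL_ACCESS);
                if (service == IntPtr.Zero)
                    throw new Win32Exception();

                // Restart the service on each of the first three failures
                SC_ACTION[] actions = new SC_ACTION[3];
                for (int i = 0; i < actions.Length; i++)
                {
                    actions[i].Type = SC_ACTION_RESTART;
                    actions[i].Delay = restartDelaySeconds * 1000;
                }

                // Copy the managed array into unmanaged memory for the API call
                int actionSize = Marshal.SizeOf(typeof(SC_ACTION));
                actionsBuffer = Marshal.AllocHGlobal(actionSize * actions.Length);
                for (int i = 0; i < actions.Length; i++)
                    Marshal.StructureToPtr(actions[i], new IntPtr(actionsBuffer.ToInt64() + i * actionSize), false);

                SERVICE_FAILURE_ACTIONS failureActions = new SERVICE_FAILURE_ACTIONS();
                failureActions.dwResetPeriod = resetPeriodSeconds;
                failureActions.cActions = actions.Length;
                failureActions.lpsaActions = actionsBuffer;

                if (!ChangeServiceConfig2(service, SERVICE_CONFIG_FAILURE_ACTIONS, ref failureActions))
                    throw new Win32Exception();
            }
            finally
            {
                if (actionsBuffer != IntPtr.Zero) Marshal.FreeHGlobal(actionsBuffer);
                if (service != IntPtr.Zero) CloseServiceHandle(service);
                if (scm != IntPtr.Zero) CloseServiceHandle(scm);
            }
        }
    }
}

Usage would be something like ServiceRecoveryConfigurator.RestartOnFailure("TestFailingService", 0, 86400), run from an account with permission to change the service configuration (for example, as part of a custom installer step).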