Category Archives: Software

Vermont Code Camp

Just got back from a road trip to Burlington VT for Vermont Code Camp 3. It was a great code camp, lots of good sessions and speakers, well-organized, and generally a lot of fun.

The first session I attended was Dane Morgridge’s talk on ASP.NET MVC3: A Gateway to Rails. I was really interested in this talk, because as a TDD junkie, I was curious if part of the “gateway to Rails” involved embracing testing. I still feel like the .NET community is behind the curve on TDD and BDD, and I wanted to hear Dane’s perspective on making the switch from .NET to Rails. His presentation was very much an overview of the goodness in Rails and ASP.NET MVC3, like convention over configuration and how having a separation of concerns with MVC allows for a more testable system. Afterwards I was able to catch up with Dane between sessions and he showed me a few things he is doing with Cucumber and BDD. I have to say the readability of Cucumber makes it look very appealing, and we talked about how BDD tests can be more maintainable than TDD/unit-level tests. There is a right time to use one and a right time to use the other.

Next up was David Howell’s Tackling Big Data with Hadoop. I knew very little about Hadoop, but had heard a few good things about it and was looking to learn a bit more. David’s talk focused on using Hadoop’s Map-Reduce capabilities to handle large data sets. We also talked a bit about what constitutes “big data”. If you are not familiar with Map-Reduce, check out Google’s 2004 paper on it. David went over the basics of a Hadoop cluster and showed how it could be used in a distributed architecture to tackle a big data set. He also touched on how some of the fault tolerance could be implemented using the cluster and distributing the same job to multiple nodes. To wrap up he ran a simplified Map-Reduce demo on a small data set that was appropriately scaled for the time we had available.
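Hadoop aside, the map/shuffle/reduce shape is easy to sketch on a single machine. Here is my own C# illustration (not from David’s talk) of the canonical word-count job using LINQ; the class name is made up for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class WordCount
{
    // "Map" each line into words, group ("shuffle") by word, then
    // "reduce" each group to a count -- the same shape as a Hadoop
    // Map-Reduce job, just running in-process instead of on a cluster.
    public static Dictionary<string, int> Count(IEnumerable<string> lines)
    {
        return lines
            .SelectMany(line => line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)) // map
            .GroupBy(word => word)                                                                // shuffle
            .ToDictionary(g => g.Key, g => g.Count());                                            // reduce
    }

    public static void Main()
    {
        var counts = Count(new[] { "the quick brown fox", "the lazy dog" });
        Console.WriteLine(counts["the"]); // prints 2
    }
}
```

In a real Hadoop job the map and reduce steps run on different nodes and the framework handles the shuffle, but the logical decomposition is the same.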

I would have to say the third session was one of my favorites, Functional Programming on the JVM. This talk was given by Jonathan Phillips and another developer named Jack (sorry Jack, missed your last name), both from a Burlington-based company. As someone who has spent the last… well, let’s just say a lot of years doing OO programming, I struggle with the shift to functional programming, especially when state and immutability come into play. I have been to several functional programming and F# presentations that have left me confused, but 5 minutes into their talk Jonathan and Jack went over closures and recursion as they apply to functional programming and it was like a light going on. Suddenly immutability made more sense. After the overview of functional programming they went into 3 different languages that can be used to do functional programming on the JVM: Groovy, Scala, and Clojure. They used a simple example, comparing the amount and clarity of code that you would write using pure Java versus each of the 3 other languages. They also went over some of the pros and cons of Groovy, Scala, and Clojure. It left me with a lot to think about, and I am eager to experiment with all 3 languages.
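For anyone else wrestling with the same concepts, here is a small C# sketch (my own, not from the talk) of the two ideas that made it click for me: a closure capturing a variable from its enclosing scope, and recursion replacing a mutable loop counter:

```csharp
using System;

public static class FunctionalDemo
{
    // A closure: the returned lambda captures 'n' from the enclosing scope.
    public static Func<int, int> MakeAdder(int n)
    {
        return x => x + n;
    }

    // Recursion instead of a loop: no variable is ever reassigned,
    // which is the habit functional languages push you toward.
    public static int Sum(int[] values, int index = 0)
    {
        return index == values.Length
            ? 0
            : values[index] + Sum(values, index + 1);
    }

    public static void Main()
    {
        var addFive = MakeAdder(5);
        Console.WriteLine(addFive(3));             // prints 8
        Console.WriteLine(Sum(new[] { 1, 2, 3 })); // prints 6
    }
}
```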

Another great talk was Free and Open Source Software (FOSS) in the Enterprise by Kevin Thorley, also from the same company. Kevin laid out what you need to consider when using FOSS: what is the community support, how many contributors are there, how robust is it, how easy is it to use, what documentation exists, and do you really need it or do you just want to use it because you read some great blog post about NoSQL? One thing Kevin said really rang true: open source is not free, and we need to understand and accurately assess what the cost is to use and maintain it. Kevin went over how they use MongoDB, RabbitMQ, Spring, and Solr. Don’t misinterpret Kevin’s statement about it not being free. Obviously they rely heavily on FOSS, but you have to be honest about why you are using it and what the cost is going to be to your organization to use “free software”.

I give a lot of credit to the guys from that company for their participation at the code camp. They are a Burlington-based company and they had a significant presence at the event. It is one thing to say that you are passionate about software; it is another to do something about it. Their developers gave 4 of the 26 total sessions. I followed up with a few of them after the sessions and you can tell they are really into the technologies they were presenting.

The last 2 presentations were familiar ground for me, but I still learned a few things. First was Vincent Grondin’s talk on mocking and mocking frameworks. Vincent gave a very good discussion of mocking and the two types of mocking frameworks available: those based on dynamic proxies, and those based on the .NET Profiler APIs. Examples of frameworks based on dynamic proxies are Moq and NMock3. Examples of .NET profiler-based frameworks include Telerik’s JustMock and Typemock Isolator. The profiler-based mocking frameworks have some significant advantages, such as being able to mock out static methods and even system calls. I have been using Moq (and previously NMock2) and I have had to write custom wrapper classes to work around problems mocking static methods and system calls.
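For illustration, here is the kind of hand-rolled wrapper I mean; the IClock interface and the class names are made up for this example:

```csharp
using System;

// A hand-rolled seam: code depends on IClock instead of calling
// DateTime.Now directly, so a proxy-based framework (or a simple stub)
// can substitute a fake implementation in tests.
public interface IClock
{
    DateTime Now { get; }
}

// Production implementation: delegates to the real system clock.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// Test implementation: always returns a fixed time.
public class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) { _now = now; }
    public DateTime Now { get { return _now; } }
}

public class ExpirationChecker
{
    private readonly IClock _clock;
    public ExpirationChecker(IClock clock) { _clock = clock; }

    public bool IsExpired(DateTime expires)
    {
        return _clock.Now > expires;
    }
}
```

In a unit test you hand ExpirationChecker a FixedClock (or a Moq mock of IClock); in production you use SystemClock. A profiler-based framework could intercept DateTime.Now directly and make the wrapper unnecessary, which is exactly the advantage Vincent was describing.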

The day wrapped up for me with Josh Sled’s talk on dependency injection. Josh gave a good overview of why you would want to use DI, but also gave an honest assessment of the pros and cons. I think any time we decide to take on a new framework we need to understand what the cost is (similar to what Kevin Thorley said). I am a big fan of DI and IoC frameworks, but you need to be aware of the cost of using that framework. We touched on a few IoC frameworks you can use, like Spring (Java), Castle Windsor (.NET), and StructureMap (.NET).
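As a tiny illustration of the idea (the names are my own, and the wiring is done by hand rather than by a container):

```csharp
using System;
using System.Collections.Generic;

// Constructor injection: Notifier declares what it needs (an
// IMessageSender) and the caller supplies it -- by hand here, or via
// an IoC container like Castle Windsor or StructureMap.
public interface IMessageSender
{
    void Send(string message);
}

public class ConsoleSender : IMessageSender
{
    public void Send(string message) { Console.WriteLine(message); }
}

// A second implementation, handy in tests: records what was sent.
public class RecordingSender : IMessageSender
{
    public readonly List<string> Sent = new List<string>();
    public void Send(string message) { Sent.Add(message); }
}

public class Notifier
{
    private readonly IMessageSender _sender;
    public Notifier(IMessageSender sender) { _sender = sender; }

    public void NotifyAll(string message)
    {
        _sender.Send("NOTICE: " + message);
    }
}

public static class Program
{
    public static void Main()
    {
        // "Poor man's DI": wiring by hand; a container automates this lookup.
        var notifier = new Notifier(new ConsoleSender());
        notifier.NotifyAll("build complete"); // prints "NOTICE: build complete"
    }
}
```

The cost Josh warned about is real: once a container is resolving these dependencies, the wiring moves out of plain code and into configuration, which is powerful but harder to follow.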

All in all it was a great code camp. It is usually hard to justify spending a beautiful late summer day in the basement of UVM, but it worked in this case. I am really looking forward to all the slides being available so I can check out some of the sessions I couldn’t make.

Thanks to Julie Lerman, Rob Hale, and all the other organizers, volunteers, and sponsors for putting together a great event.


Head in the Cloud with Windows Azure

Microsoft hosted a developer conference in Boston today.  I think for the most part you get what you pay for, and the $99 price of this event told me it would probably be more marketing than technology.  But it’s always good to get out and hear what’s going on and talk with other developers.

There were several topics of interest to me at the conference, including F#, Silverlight, Visual Studio Team System 2010, and ASP.NET 4.0, but one of the things I was most interested in learning about was Azure, Microsoft’s foray into cloud computing.  I thought that this conference would give me a good 50,000 foot view of Microsoft’s plans for a cloud computing platform.

In the keynote, Amanda Silver referenced the Battle of the Currents, where in the early days of electrical distribution Thomas Edison’s system of direct current (DC) was pitted against Nikola Tesla and George Westinghouse’s system of alternating current (AC).  One of the disadvantages of direct current was that the power generation had to be close by due to power loss associated with transmission.  This meant that a manufacturing plant might need to have its own electrical plant, with all the associated capital costs and maintenance. This is much the same as technology companies today incurring the capital cost and IT support of maintaining their own data centers.  The ability to buy electric power from a utility company would allow the consumer to focus on their business and treat the incoming power as a service.

Microsoft’s intention in this market seems to be to offer a similar utility model that would have the benefits of scalability, redundancy, and IT support and allow a company that subscribes to the Microsoft data center services to focus on its own business domain.  As any of us who have had internal data centers know, power outages, scaling, and IT support (security patches, etc.) can be a real headache for a developer, and data center IT support keeps us from doing our real jobs: designing and writing code.

Now before you think I drank too much Microsoft Kool-Aid, I am just saying that is the idea behind cloud computing in general.  Why absorb the capital outlay and support costs of setting up and maintaining a data center when you can lease those capabilities?  Theoretically, this could lead to better support, scalability, and fault tolerance.  Microsoft seems to be positioning itself for providing data center services and support in the future.  How far off in the future is a question that remains to be answered.

The presentations on Windows Azure painted a fairly realistic picture of where the technology is today, and I give the presenters credit for that.  Michael Stiefel’s presentation was good and gave the 50,000 foot view that I was looking for.  He also drilled down a little bit into the services provided in Azure at a high level.  I thought Ben Day’s presentation was particularly good, pointing out the potential of the technology while balancing that against some of the limitations of the current implementation, particularly related to Azure data storage.  I would assume their blogs will have the presentations at some point in the future, so you may want to check in.

The presentations and keynote showed how you can get started with Azure today using Visual Studio 2008.  You will need to install the Azure SDK and Azure Tools for Visual Studio that you can find here. Azure applications can execute locally on a Development Fabric, which is a simulated cloud environment on your desktop. You can also deploy a service to run in the Azure cloud, but you need to set up an account for that, and for the purposes of learning about the technology it would seem that the Development Fabric is adequate.  The developer “experience” (another big buzzword at the conference) is the same as developing other Windows apps in Visual Studio.  You can debug applications normally if they are running in the Development Fabric, however once they are deployed the only debug mechanism is logging statements.

This is definitely a “down the road” technology, and there are several kinks to work out, but if you want to be ahead of the curve it might not be a bad idea to try some of it out.  One of the things that came up in the presentations is that Microsoft is on the fence on some aspects of the implementation and will be looking to the developer community for feedback.  We’ll have to wait and see how all this plays out, but I am certainly willing to give it a spin.

Using the Automatic Recovery Features of Windows Services

Windows Services support the ability to automatically perform some defined action in response to a failure. The recovery action is specified in the Recovery tab of the service property page (which can be found in Settings->Control Panel->Administrative Tools -> Services). The Recovery tab allows you to define actions that can be performed on the 1st failure, 2nd failure, and subsequent failures, and also provides support for resetting the failure counters and how long to wait before taking the action. The allowed actions are

  • Take No Action (default)
  • Restart the Service
  • Run a Program
  • Restart the Computer

Having this type of functionality is really helpful from the perspective of a developer of services. Who wants to re-invent the wheel and write recovery code in the service if you can get it for free? Plus it allows the recovery to be reconfigured as an IT task as opposed to rebuilding the software. I did discover a few gotchas along the way, though.

Building a Basic Service
Let’s start with the basics. You can create a service in Visual Studio by adding a project and selecting the type “Windows Service”. This will populate an empty service. The first thing I like to do is give the classes and service real names. In this case I am testing that recovery from a failure works, so I renamed my class to FailingService (not something I would recommend calling a product, but in this case it fits). I also changed the ServiceName property from Service1 (the default) to TestFailingService using the Properties window. This is the name that is going to appear in the Windows Services dialog, so I strongly recommend changing it.

You will also want to add an installer for your service as this will make it much easier to install the service onto your computer. You can’t run the service like a regular EXE file, so you definitely want an installer here.

Now that we’ve done all the basics, you should be able to build your service and install it. You can install the service using the InstallUtil.exe program that comes with the .NET Framework. Open a Visual Studio Command Prompt (this will already have a path to InstallUtil) and run

     InstallUtil ServicePath

Note: If there are spaces in the path you should enclose the path in “” as in InstallUtil “C:\Documents and Settings\user\My Documents\Visual Studio 2005\Projects\ServiceRecoveryTests\TestService\bin\Debug\TestService.exe”

Depending on how your PC is configured, you may be prompted to log in as part of installing the service. If you are on a domain you should use the full user name of domain\user.

From the Services tab you should now be able to start and stop your service. You should also be able to go to the Windows Event Viewer and see events related to starting and stopping your service in the Application and System logs.

To uninstall the service use the commands above, but with the /u option, as in

     InstallUtil /u ServicePath

Error Recovery
What I really wanted to do was to work with the failure recovery features of the service, so the first thing I did was create a thread that would simulate an error on some background processing of the service. The thread sleeps for 30 seconds and then throws an exception.

Since this exception is being thrown on a different thread than the main thread, I need to subscribe to the AppDomain’s UnhandledException event. If I don’t do this the thread will just die silently and the service will continue to run, which is not what I want.

Initially I thought I could just take advantage of the ServiceBase class ExitCode property. I figured that if I set the ExitCode property to a non-zero value and stopped the service that would be interpreted as a failure and the service would automatically be restarted, as in

        void UnhandledExceptionHandler(object sender, UnhandledExceptionEventArgs e)
        {
            ExitCode = 99;
            Stop();
        }

That is not the case though as I found out here. “A service is considered failed when it terminates without reporting a status of SERVICE_STOPPED to the service controller.”

So from this definition I decided that I needed to throw an exception in the Stop event handler; otherwise Stop would return normally and it would not be considered a failure by the SCM. What I finally ended up doing was having the unhandled exception handler cache the unhandled exception and call Stop. The Stop event handler then checks whether there is a cached exception, wraps it in a new exception, and throws that. Wrapping the exception, and passing the asynchronous exception as the InnerException, preserves the call stack of the asynchronous exception.

Examining the Event Viewer
I configured the service to restart after the 1st and 2nd failures, but to take no action after the third. The fail counters will reset after 1 day and the service will restart immediately after a failure.

Configuring Service Recovery 2

The service is written to fail after 30 seconds and the recovery mechanism should restart it immediately. If you watch the Event Viewer you should see entries in the Application or System log showing the service starting, failing, and restarting.

Viewing Service Events

You can also get more information by opening up some of these events, such as service state transitions, how many times the error has occurred, or detailed error messages. Here is an example.

Examining Service Events

Resetting the Error Counters
Unfortunately the Reset fail count after and Restart service after fields in the Recovery tab only take integers. It would have been nice to have granularity of less than a whole day for the reset of the counters to take effect. If you are running this service repeatedly (like I was during testing) the counters may be too high to automatically restart the service. If you expected a restart and didn’t get one, examine the entries in the Event Viewer’s System log and you should see something like this

If you want to reset the counters you can set the Reset fail count after field to zero which will cause the counters to reset after each failure.

Turning Off the JIT Debugger
One of the main reasons you write a service is to do something without user intervention (or even without a user logged in). One of the problems with this implementation is that you end up throwing an unhandled exception, by design, from the Stop event handler. If the Microsoft Just-In-Time (JIT) debugger is configured to run on your system it will prompt you to debug the application. For a service that you want to auto-recover this is a really bad thing, since… well, there might not be anyone there to answer the prompt.

I found a few tips on how to turn this off. You can read about them here.

Note: If you have multiple versions of Visual Studio installed (I have 2005 and 2003 installed) you have to turn off the JIT debugger for each version.

Service Source Code
Here is the source code for the service. The service is designed to fail after 30 seconds to test the recovery mechanisms of the Windows services. The code automatically generated for the installer was not modified at all, except to change the service name which was described above.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.ServiceProcess;
using System.Text;
using System.Threading;

namespace TestService
{
    public partial class FailingService : ServiceBase
    {
        private Thread _workerThread;

        // Used to cache any unhandled exception
        private Exception _asyncException;

        public FailingService()
        {
            InitializeComponent();

            // Wire up the UnhandledException event of the current AppDomain.
            // This will fire any time an unhandled exception is thrown
            AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(UnhandledExceptionHandler);
        }

        void UnhandledExceptionHandler(object sender, UnhandledExceptionEventArgs e)
        {
            // Cache the unhandled exception and begin a shutdown of the service
            _asyncException = e.ExceptionObject as Exception;
            Stop();
        }

        protected override void OnStart(string[] args)
        {
            // Start up a thread that will simulate a failure by throwing an unhandled exception asynchronously
            _workerThread = new Thread(new ThreadStart(SimulateAsyncFailure));
            _workerThread.Start();
        }

        protected override void OnStop()
        {
            // If there was an unhandled exception, throw it here
            // Make sure to wrap the unhandled exception, as opposed to just rethrowing it, to preserve its StackTrace
            // To wrap it, create a new Exception and pass the unhandled exception as the InnerException
            // The exception info will be in the Windows Event Viewer's Application log
            // You could also use some logging/tracing mechanism to capture information about the exception
            if (_asyncException != null)
                throw new InvalidOperationException("Unhandled exception in service", _asyncException);
        }

        private void SimulateAsyncFailure()
        {
            // Simulate an asynchronous unhandled exception by sleeping for some time and then throwing
            // There is no caller to catch this exception since it is running in a separate thread
            Thread.Sleep(TimeSpan.FromSeconds(30));
            throw new InvalidOperationException("Simulating a service error");
        }
    }
}

Programmatically Configuring Recovery
The installers do not provide any support for programmatically setting up the recovery actions, which I find a little frustrating. I did find some code here that will allow you to do this, but have not had a chance to test it out or incorporate it into the sample.
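One workaround, outside the installer classes, is the sc.exe command line (which drives the same ChangeServiceConfig2 API); a post-install script could apply the settings described above. A sketch, assuming the TestFailingService name used earlier; actions are action/delay-in-milliseconds pairs and reset is in seconds:

```shell
REM Restart after the 1st and 2nd failures, take no action after the 3rd;
REM reset the failure count after 1 day (86400 seconds).
sc failure TestFailingService reset= 86400 actions= restart/0/restart/0/""/0
```

Note the space after each equals sign is required by sc.exe.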

The Importance of Sharpening the Saw

Today I popped into Starbucks and saw a sign at the register that said they would be closed from 5:30 – 9:00 this Tuesday for Espresso Training. The sign went on to say how important quality is and that the training is necessary to maintain quality. I thought maybe it was just this one store, but now I see it is nationwide.

So what about our industry: why isn’t regular training more of a priority? Some of the companies I have worked for have, on occasion, sent me to training, but usually that was because a specific skill was required. What about the regular training and learning that makes you a better developer? Where is the priority for that?

In a perfect world, maybe your company would arrange for your continued development, but that’s pretty unlikely. The reality is that we as developers are responsible for staying current and looking for tools and techniques to make us better. We’re the only ones that are going to know what we need to learn to fill in the gopher holes. Whenever I am in crunch mode, the first thing that suffers is learning: I don’t take the time to try new tools, I read fewer blog posts, and I certainly don’t take the time to write any blog posts.

I’m going to try and make regular time for sharpening the saw. If it’s good enough for Starbucks, it’s good enough for me.

Credibility and Trust

Software is one of those industries that can suffer from credibility issues. We have all had painful experiences with buggy applications or operating systems that have been thrust upon us. Software, like any product, will have failures. Some failures will be more obvious or serious than others, some may only arise in unusual situations or in unforeseen conditions. As good developers it is our job to limit these errors and avoid having serious ones get to the end user. Automated test and continuous integration tools go a long way towards helping us accomplish those goals.

I’ve worked in environments where I was the only software developer (as well as QA, config management, and tech support all rolled into one).  Other places had full blown SQA, test department, etc.  In all of those environments I had a “customer”. Sometimes it was someone external to the company who paid money for the software we produced, other times it was an internal R&D user or the software test department, or maybe it was just a teacher in a class I was taking. In all these cases, the “customer” is expecting a reliable piece of software to do the task at hand. If you release a buggy piece of software, be it to a real paying customer, an internal R&D organization, or a professor you are going to have credibility issues that will be difficult to overcome. Subsequent releases will be received with varying degrees of skepticism.

One of the first times I began using continuous integration and automated unit tests was at an R&D facility. The customers were coworkers that you would see every day, and if things weren’t going well you knew about it. When I came on board as part of a new team of developers, the group was just beginning to employ CI and unit testing. The previous group that developed software did various levels of ad hoc testing. Often the tests were manual tests that involved examining log files or single stepping through critical sections of code. Often times a fix would introduce new problems, and the end users suffered the consequences. There was an understandable lack of trust between the users and the developers.

I remember early in the process telling end users that new releases were available, but they would choose not to upgrade. It took a long time to build a framework that gave us good unit test coverage and procedures to perform system level testing in a meaningful way. This organization did not have a software test or QA department so it was the developer’s job to ensure that reliable software was being released. That may not be an ideal situation, but I have worked in more organizations without a software test department than ones that had one. Believe me, I wish we had one but we didn’t. Slowly as we increased automated test coverage we began to provide more reliable software and rebuild the trust that had been damaged by previous experience. It was very rewarding when users would ask when new releases would be available.

Sometimes we are pressured to get changes out the door in a hurry, and invariably there is a temptation to skimp on testing, but the damage that can be done by a bad release can have long-lasting effects. Trust and credibility are easily damaged and take a long time to restore. As professional developers, it is our job to ensure that what we release has been tested and is reliable. Sometimes that means balancing the demands to get things done fast with getting them done in a way that is well tested and reliable.

Debugging – Art or Science?

I write a lot of instrumentation and laboratory automation software, and I invariably utilize 3rd party software and hardware. To hear the vendors tell it, these products work perfectly every time, and in most cases they do, but invariably things go wrong. So an important part of my job is diagnosing and debugging hairy integration problems. Automation applications typically utilize multiple threads and processes, and that adds to the complexity of the debugging effort.

I’ve heard people describe debugging as more of an art than a science. For grins, I googled “Art of Debugging” and I got 16,700 hits. There is certainly a lot of skill and hard work that goes into debugging, but I’m not so sure it qualifies as an art. To me, good debugging is the result of up front planning, careful observation, intuition, experience, and probably a bit of luck.

Instrument Your Software

Debugging without the benefit of trace or log data is going to make your life difficult. Log files are really helpful for me when it comes to recreating the sequence of events leading up to a bug. A couple of tools I find really helpful are log4net and BareTail.

Log4net is a logging framework that allows you to write log output to a variety of sources: rolling disk files, database, Telnet, UDP, and many more. Changing how data is output is done by configuration file and you are able to filter and control output in a variety of ways. There are a lot of good examples on the log4net site and elsewhere on the web and this is a tool you should really look into if you are developing in .NET. The Java folks already know this as log4net is a port of log4j. Thanks again Java folks for paving the way for us .NET developers.
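As a minimal example (the file name, sizes, and pattern here are just placeholders, not a recommendation), a rolling file appender can be configured in your application’s config file like this:

```xml
<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="service.log" />
    <appendToFile value="true" />
    <maximumFileSize value="1MB" />
    <maxSizeRollBackups value="5" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>
```

Because it is all configuration, you can turn the level up to DEBUG in the field without rebuilding, which is exactly what you want when chasing an intermittent bug.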

BareTail is a tail utility for Windows that has some nice features like highlighting and allows me to watch log files dynamically. Give it a try if you find yourself in a situation where watching log files “live” is important. There is also a companion tool called BareGrep that, as you may have guessed, is a grep utility.

When you write code, you should try and be sure you are adding the capability to record significant events. It’s never too late to start adding logging. If it’s not in there, or there’s not enough, add some. If an intermittent bug crops up that leaves you stumped, put in some additional logging and maybe next time the bug occurs you’ll have more to go on, or at least something to eliminate.

Experience and Intuition

These are the kind of intangible skills that might push debugging into the “art” category for some people. Experience and intuition are really helpful in terms of narrowing the possible causes of the problem and how to identify it. This is especially true if you are picking up some legacy code and don’t have a full understanding of all the inner workings of the code. Good debuggers have a knack for zeroing in on a particular part of the code and getting to the root of the problem.

I got assigned a bug recently where it appeared that we were processing the same record twice. This was also a piece of code that I had not worked in much. There were several processing threads and the likely culprit was that the queuing/dequeuing of records was not thread-safe. A quick look through the code showed this not to be the case, so I was a bit stumped. What else could the problem be? The error message indicated that a record with a particular ID was being processed twice. Then it hit me: how was the ID assigned? It turned out that when the records were constructed there was a pre-increment of the ID that was not thread-safe. This code had been running for months without the error occurring, but eventually those once-in-a-million bugs crop up. This is an example of where intuition saved me from what could have been a long exercise of going through the code line by line, or the dreaded “unable to reproduce”.

Pay Attention to The Data

Intuition can really help narrow the scope of your efforts, but I have also seen this work against people when they went into the debugging task with a particular idea in mind and ignored or missed signals that would have pointed them in a different direction. They were too focused on what they thought “must be the problem”.

I recently worked on a camera system that was getting spurious triggers. Electrical noise problems are notoriously difficult to find and this had all the marks of a noise problem. I’m not an electrical engineer, but the hardware guys on the team had some possible suspects that they wanted to investigate. The thinking was that something else in the system was creating intermittent noise on the camera trigger lines. I was going to set up a test that would exercise various parts of the system and monitor the camera logs for unexpected images being triggered, to try and identify which components were causing the noise.

Just before the test was ready to be run I wanted to verify that the log would record unexpected images, so I manually triggered some images by driving the trigger signal with a UI test application that came with the IO device. I expected to see one image when I enabled the trigger but then I saw a second image when I turned off the trigger. At first I thought maybe I double-clicked instead of clicked, so I tried it a few more times. Usually I only got the expected single image, but every once in a while I got two images. What we thought was going to be a difficult problem to reproduce was now fairly easy to reproduce. I got lucky on this one because the triggering issue was exacerbated by the difference in timing between an automatically triggered image (tens of milliseconds) versus the manually triggered image (hundreds of milliseconds). But if I wasn’t paying attention to the logs (log4net and BareTail again) or had waved it off as operator error I would have missed a valuable clue.

Art or Science

Debugging is about preparation and hard work, being guided by your instincts, but also knowing when you are looking in the wrong place and coming up with another plan.

NHibernate Sample Application – Part I

I was telling a friend about my NHibernate Skunk Works project. He asked if I had an application in mind and I said I was planning on doing a wine inventory/tasting notes application and that I would start on the database this weekend. His response stopped me right in my tracks “Why don’t you write a scenario, first”. He’s right, I got so focused on the technology that I stepped right over defining what I expected out of the application. Just because it’s a skunk works project doesn’t mean I shouldn’t be following good developer practices. Here is a very brief description of the intended user (the persona) and an initial implementation scenario.

Persona – Mark Atwood
Mark Atwood recently began collecting wine as a hobby. Mark has a collection of around 100 bottles, and enjoys pairing wines and foods. Mark and his wife Beth often invite friends over to try wines and compare tasting impressions.

Scenario – Wine Inventory and Tasting Notes Database
Mark currently relies on memory for wines that he has enjoyed and what foods paired well with them. As his collection has grown it has become harder to track that information. Mark would like a system that allows him to keep track of his current wine inventory as well as a way to store and retrieve tasting notes. Initially this application will run as a simple Windows Forms application. The implementation should allow for eventual migration to a web-based application to allow the system to be used by multiple users.

Mark would like to track the following information regarding a purchase of wine.

  • Name
  • Region
  • Price
  • Where/when it was purchased
  • Grape variety – Note a particular wine may be a blend of different grapes

The application will allow a user to enter a wine purchase, prompting for the number of bottles purchased and the information stored above. The data will be persisted in a database.

The application will allow a user to indicate that a bottle of wine has been opened. The user will select a wine from the inventory and record the open date. The date will default to the current date.

Mark would also like to track tasting notes for a particular opened bottle of wine. The application will display opened bottles of wine, ordered by most recently opened bottle first. The user will select a bottle and be able to enter the following tasting notes.

  • Taster
  • Tasting date
  • Tasting notes

NOTE: For this scenario, food pairing information will be maintained in the body of the tasting notes. In the future, the database may be extended to include more detailed food pairing information.
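To make the scenario a little more concrete, here is a rough C# sketch of the entities it implies. The class and property names are my own guesses and will surely change once the NHibernate mappings are worked out; the virtual members are there because NHibernate proxies entities at runtime.

```csharp
using System;
using System.Collections.Generic;

// First guess at the domain model implied by the scenario above.
public class Wine
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual string Region { get; set; }
    public virtual decimal Price { get; set; }
    public virtual string PurchasedAt { get; set; }
    public virtual DateTime PurchaseDate { get; set; }

    // A particular wine may be a blend of different grapes.
    public virtual IList<string> GrapeVarieties { get; set; }

    public Wine()
    {
        GrapeVarieties = new List<string>();
    }
}

public class TastingNote
{
    public virtual int Id { get; set; }
    public virtual string Taster { get; set; }
    public virtual DateTime TastingDate { get; set; }
    public virtual string Notes { get; set; } // food pairings live in here for now
}
```

Bottles and opened-bottle tracking are deliberately left out of this sketch; they will need their own entities once the inventory behavior is fleshed out.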

Skunkworks – NHibernate

I learn by doing. I have plenty of books on software development and I like to read about different technologies, but the only way for me to feel I know a technology is to actually use it to develop software. This doesn’t apply just to languages or technologies, but to methodologies as well. I had read plenty of books and posts on Agile, but it wasn’t until I did my first project using Agile methods that I really “got it”. Understanding the notions of iterations, velocity, and all the other buzzwords is one thing. Putting them in use in a given company culture is another matter entirely.

It is not always possible to put new technologies into practice at our “regular jobs” given the particular problem domain we might be working in. That is why I like to work on what interests me on my own time, in my own personal skunk works. It keeps me up on technologies, gives me something new to work on that I can choose, and can be fun. It also positions me to be ready when the time comes that the technology might be needed on an actual commercial project.

That said, I have been reading a lot lately about NHibernate. I know it’s not exactly bleeding edge, but I haven’t had a chance to use it and thought it might be a good time to put a little of that into practice. Reading an article or a quick start tutorial is all well and good, but it is no substitute for actually doing it. Not to mention a big part of good software is figuring out how to get around the problems that arise, even when you follow all the steps and do everything “right”.

Stay tuned