
TDD with C++

I know, it is 2010, and doing TDD with C++ probably isn’t trending too high on anyone’s hot topics list, but after an 8-year hiatus I find myself doing some Agile consulting with an embedded systems team that is using C++. At first I was concerned about the transition. What was I going to do without my testing frameworks, my mocking frameworks, my continuous integration support, my…wait, I have to manage my own memory too…

Well, it is not all bad news, folks; a lot has happened in the world of C++. And while the tools are not yet at the level you may be accustomed to coming from C#, Java, or Ruby, this is not your father’s C++.

Testing Frameworks

There are a few testing frameworks out there for C++: CppUnit, CxxTest, and GoogleTest are some of the better-known ones. All three are heavily influenced by JUnit, so if you are familiar with JUnit, NUnit, or even MSTest, you will be familiar with the terminology: they support test fixtures, setup and teardown, and so on.
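
For example, a test fixture with per-test setup and teardown in GoogleTest (the framework my current team ended up using, as described below) might look roughly like this sketch; the fixture and test names here are hypothetical:

#include "gtest/gtest.h"

// A minimal fixture sketch: SetUp runs before each TEST_F that uses
// the fixture, and TearDown runs after each one.
class WidgetTests : public ::testing::Test {
 protected:
  virtual void SetUp()    { /* create objects the tests share */ }
  virtual void TearDown() { /* clean them up */ }
};

TEST_F(WidgetTests, FrameworkIsWiredUp) {
  EXPECT_TRUE(true);  // real assertions would go here
}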

On the project I am on, we are using GoogleTest and it is working OK for us. It can be targeted to a variety of compilers and OSes, so you may have some work to do to get it up and running. The syntax relies heavily on macros, which isn’t all that surprising, though it does make the tests a little ugly to look at. It supports the assertions, test fixtures, and setup/teardown that you are probably familiar with. A few simple tests might look something like this:

// Tests that 1 really equals 1.
// Just looking to see if my framework does what I think it does
TEST(DumbTest, Checks1Is1) {
  EXPECT_EQ(1, 1);
}

// Tests that a person can be created with a first and last name
TEST(PersonTests, PersonFullName) {
  Person aPerson("Fred", "Flintstone");
  ASSERT_STREQ("Fred Flintstone", aPerson.FullName());
}
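
For context, a Person class that would satisfy the second test might look something like the sketch below. This is my assumption (the project’s actual class is not shown here); FullName returns a C string so that the ASSERT_STREQ comparison compiles:

#include <string>

// A minimal Person sketch (an assumption, not the project's real class).
class Person {
 public:
  Person(const std::string& first, const std::string& last)
      : full_name_(first + " " + last) {}

  // Returns a C string so the result can be compared with ASSERT_STREQ.
  const char* FullName() const { return full_name_.c_str(); }

 private:
  std::string full_name_;
};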

Test Output

To get your test results reported as part of your continuous integration process, you will want to see what output formats the testing framework provides. GoogleTest supports several output formats, including a text-based output that is helpful when running within the IDE, as well as an XML report that conforms to the Ant/JUnit format. We are using TeamCity, so reporting automated test results is straightforward in that regard.
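
For example, to generate the JUnit-style XML report, you pass the gtest_output switch when running the test binary (the binary name below is just a placeholder):

  ./unit_tests --gtest_output=xml:test_results.xml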

Controlling What Tests Run

This is an area that hurts a bit, at least if you are coming from an environment where you are used to easily controlling which tests run from within the IDE via a plug-in, for example right-clicking and selecting “Run this test” or “Run this test suite”. GoogleTest does have options to control which tests run, to repeat tests, and so on, but it is all via switches to the test runner, which makes the development cycle a little slower. Since I am using this on an embedded systems project, some tests may require hardware that is not at the development station or on a build server. To have the test runner skip tests that have the word Hardware in the test name, you would pass --gtest_filter=-*.*Hardware*. Although it is a bit cumbersome, at least the capability exists to control which tests run.
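
A full invocation looks like this (the binary name is again a placeholder); the leading - in the filter value means “run everything except tests matching the pattern”:

  ./unit_tests --gtest_filter=-*.*Hardware*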

Mocking Frameworks

I am a big fan of mocking frameworks in C#. I know they are just one tool in the unit testing toolbag, but I find them invaluable for testing interactions between components, and they can free me from having to write more sophisticated simulators. GoogleMock provides mocking support for C++ and, like GoogleTest, can be targeted towards a variety of development tools and OSes. Note that as of this writing, there are some problems building GoogleMock for VS2010.

Coming up to speed on any mocking framework takes effort, especially once you get away from the plain-vanilla mock scenarios and into events, custom matchers, custom actions, etc. Most of my C# mocking is with NMock2; there are other great mocking tools out there, like TypeMock, RhinoMocks, and Moq. If you are used to those tools, you are going to find GoogleMock a little clumsy and cumbersome, in my opinion. For example, this snippet sets an expectation that the mutator’s Mutate function will be called with true as the first argument and a don’t-care as the second argument (indicated by the ‘_’). When the expectation is satisfied it will also perform an action: setting the value pointed to by argument 1 (0-based, i.e. the second argument) to 5:

  MockMutator mutator;
  EXPECT_CALL(mutator, Mutate(true, _))
      .WillOnce(SetArgumentPointee<1>(5));
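
For that snippet to compile, MockMutator has to be declared somewhere. A minimal sketch might look like the following (my assumption; the names and signatures are not shown in the snippet). Note that the second parameter must be a pointer for SetArgumentPointee to have something to write through:

#include "gmock/gmock.h"

// An interface and its mock (a sketch; names and signatures assumed).
class Mutator {
 public:
  virtual ~Mutator() {}
  virtual void Mutate(bool mutate, int* value) = 0;
};

class MockMutator : public Mutator {
 public:
  MOCK_METHOD2(Mutate, void(bool, int*));
};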

Again, like GoogleTest, it is heavily macro-based and, in my opinion, a little…clumsy. Maybe it is just getting used to being in C++ again, but it doesn’t flow as naturally as some of the C# mocking tools. On the plus side, at least there is a mocking framework, and it will probably do 90% of what you need out of the box. If you need to write a custom action, there may be some work there.

Other Niceties

Boost C++ Libraries

Ah, memory management, how I have NOT missed you all these years. Fortunately the Boost C++ Libraries have good support for smart pointers, which relieve the developer of much of the pain of memory management. Not all of the pain, but enough to merit using them. In addition to smart pointers, Boost has a variety of libraries for strings and text, containers, iterators, math, threading, and on and on. If your OS or compiler does not directly support something, check the Boost documentation before you implement it from scratch. Chances are it has already been solved.
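
As a quick sketch of what that relief looks like, here is a minimal boost::shared_ptr example; the object is freed automatically when the last reference to it goes away, with no explicit delete:

#include <boost/shared_ptr.hpp>
#include <string>

void Example() {
  boost::shared_ptr<std::string> name(new std::string("Fred"));
  boost::shared_ptr<std::string> alias = name;  // reference count is now 2
}  // both pointers go out of scope here; the string is freed exactly once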

IOC/DI Tools

Although I have not had a chance to use it yet, there is at least one IOC container tool for C++, Pococapsule. I will leave it as an exercise for the reader to try it out, and if you have used it, please feel free to comment on it.

Summary

Although many of us may not have used it in years, C++ is still chugging along and has a place in the software universe. In certain areas it is the best choice based on criteria like performance, footprint, etc. It is encouraging to see TDD practices and tools that started in other frameworks and languages come to C++. They may not have the same ease of use as their counterparts yet, but those frameworks also had some warts early in their development. The open-source nature of these tools will also, hopefully, continue to move their feature set and usability forward.

If I had my choice, I would prefer to develop in C#, Python, Ruby, or Java. The ease of use and richness of tools make these more “productive” developer environments, especially when looked at from the perspective of TDD. However, sometimes C++ is the right tool, and I am glad to see that the tools are there to develop code in a TDD manner. They may not be at the level of the tools we are used to, but they are available to us.

And many thanks to the folks on these open-source projects who have put those tools out there.

PyCon on the Charles – Part I

I have been doing some work with Python over the last few months and recently joined a Python Meetup group in Cambridge that met last night at Microsoft NERD. The purpose of the meeting was to give some people who are presenting at PyCon a chance to do a dry run of their presentations. The meeting was called PyCon on the Charles – Part I.

This meeting definitely met my one good thing rule; in fact, I thought all the presentations were interesting. There were three presentations:

  • Python for Large Astronomical Data Reduction and Analysis Systems by Francesco Pierfederici
  • Python’s Dusty Corners by Jack Diederich
  • Tests and Testability by Ned Batchelder

Francesco Pierfederici talked about how he uses Python in “Big Astronomy”, particularly for the LSST project. The code base they are using has a pipeline framework that I am interested in looking at. There is a lot of simulation that needs to be done before building a large telescope and the supporting computing infrastructure. Most of the high-level code is developed in Python, with the computation-intensive code written in C.

I was most interested in Ned Batchelder’s Tests and Testability presentation because I am interested in the tool support for testing in Python. He presented techniques for using mocking to make your code more testable, as well as ways to structure your code to support dependency injection. The mocking framework was Michael Foord’s mock.

I would have liked to have seen more emphasis on addressing testability early in the development cycle. As a proponent of TDD, I think you need to begin testing as soon as you start developing. In my experience it leads to a better design through loose coupling, and you avoid a major refactoring to put in test hooks after the fact. I think Ned did a nice job presenting the techniques and some of the tools available in Python to make testing easier and coverage better.

All in all, an interesting evening, and if you are a Python developer in the Boston/Cambridge area you may want to join up. There is a follow-up scheduled for 2/3/10, called PyCon on the Charles – Part II, where there will be three more practice presentations for PyCon.

And once again, kudos to Microsoft for making their space available to different groups. This is the fourth event I have attended in their space, and exactly none of them were Microsoft-specific. A space where developers can get together is something that has been sorely lacking, and Microsoft deserves credit for trying to make that happen.

Credibility and Trust

Software is one of those industries that can suffer from credibility issues. We have all had painful experiences with buggy applications or operating systems that have been thrust upon us. Software, like any product, will have failures. Some failures will be more obvious or serious than others; some may only arise in unusual situations or under unforeseen conditions. As good developers, it is our job to limit these errors and avoid having serious ones reach the end user. Automated testing and continuous integration tools go a long way towards helping us accomplish those goals.

I’ve worked in environments where I was the only software developer (as well as QA, config management, and tech support all rolled into one). Other places had full-blown SQA and test departments. In all of those environments I had a “customer”. Sometimes it was someone external to the company who paid money for the software we produced; other times it was an internal R&D user, the software test department, or maybe just a teacher in a class I was taking. In all these cases, the “customer” expects a reliable piece of software that does the task at hand. If you release a buggy piece of software, be it to a real paying customer, an internal R&D organization, or a professor, you are going to have credibility issues that will be difficult to overcome. Subsequent releases will be received with varying degrees of skepticism.

One of the first times I used continuous integration and automated unit tests was at an R&D facility. The customers were coworkers you would see every day, and if things weren’t going well you knew about it. When I came on board as part of a new team of developers, the group was just beginning to employ CI and unit testing. The previous group that developed the software did various levels of ad hoc testing, often manual tests that involved examining log files or single-stepping through critical sections of code. Oftentimes a fix would introduce new problems, and the end users suffered the consequences. There was an understandable lack of trust between the users and the developers.

I remember early in the process telling end users that new releases were available, but they would choose not to upgrade. It took a long time to build a framework that gave us good unit test coverage and procedures to perform system-level testing in a meaningful way. This organization did not have a software test or QA department, so it was the developers’ job to ensure that reliable software was being released. That may not be an ideal situation, but I have worked in more organizations without a software test department than ones that had one. Believe me, I wish we had had one, but we didn’t. Slowly, as we increased automated test coverage, we began to provide more reliable software and rebuild the trust that had been damaged by previous experience. It was very rewarding when users started asking when new releases would be available.

Sometimes we are pressured to get changes out the door in a hurry, and invariably there is a temptation to skimp on testing, but the damage done by a bad release can have long-lasting effects. Trust and credibility are easily damaged and take a long time to restore. As professional developers, it is our job to ensure that what we release has been tested and is reliable. Sometimes that means balancing the demand to get things done fast with getting them done in a way that is well tested and reliable.