Saturday, February 14, 2009

Unit testing vs defensive coding

While working towards releasing the first alpha of my PeerBackup application, I decided to check out the code coverage on my unit tests. In the process of addressing some deficiencies, I noticed an unexpected tension between defensive coding and unit testing.

I want to get my test coverage as close to 100% as possible, because I really like the confidence it gives me that the code is working correctly. This is particularly important since I'm using Python: without any sort of static validation, if a line doesn't get executed I have absolutely no idea whether it is functional at all. It could reference nonexistent variables or functions, or call them with the wrong arguments, and nothing will complain until the line is actually run.
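Here's a minimal sketch of the kind of failure I mean (the function and names are invented for illustration, not taken from PeerBackup): the typo only blows up when the branch actually runs, so an unexecuted line tells you nothing.

```python
def restore_chunk(path, use_cache=False):
    # Hypothetical example -- not actual PeerBackup code.
    if use_cache:
        # The misspelled name below is a latent NameError: Python won't
        # complain until this branch executes, so a test suite that never
        # passes use_cache=True will never notice it.
        return load_cahced_chunk(path)
    with open(path, "rb") as f:
        return f.read()
```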

So 100% coverage is a Good Thing. But... I learned years ago that defensive coding is also a Good Thing. I often write a few lines of code to address cases that I'm pretty sure are impossible, and which I definitely can't think of a way to create. Since I can't think of any way to create the case, I can't write a unit test to cover it -- which means my coverage can't reach 100% unless I remove the defensive code.
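To make the tension concrete, here's a hedged sketch (the states and function are hypothetical, not actual PeerBackup code): the final branch guards a case I believe is impossible, so no test can reach it, and coverage reports it as a miss.

```python
SENT, ACKED = 1, 2  # hypothetical chunk states, invented for illustration

def describe_chunk(state):
    if state == SENT:
        return "waiting for acknowledgement"
    if state == ACKED:
        return "stored on the peer"
    # Defensive code: as far as I can tell, the two states above are the
    # only ones that can ever occur, so there is no test I can write that
    # reaches this line -- and coverage will always flag it as uncovered.
    raise AssertionError("impossible chunk state: %r" % (state,))
```

Deleting that last raise would get me to 100%, but only by throwing away the guard.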

As a result, I find myself tempted to remove defensive code that is probably a really good idea, because there just might, after all, be a way to trigger it. Or even if there isn't, perhaps some future change will create a way to trigger it.

On balance, I'll leave the code in there and accept less than perfect coverage. I don't like it, though.
