Best Practice: Unit test coverage vs. in-method sanity checks [duplicate]


I have a code-coverage requirement of a certain percentage, and I face the following tradeoff:

Should I sacrifice in-method sanity checks and error handling for ease of (unit-) testability?

Let's consider two variants of a simple method, addAndSquare.

Variant A:

    double addAndSquare(double a, double b) {
        return (a + b) * (a + b);
    }

This method is wonderfully easy to test for its intended standard behaviour. In a unit test one could simply pass it two non-trivial example values and check for the expected result. A test coverage of X percent is very easy to achieve. The method does not, however, check for corner cases at runtime and react accordingly. If we pass it NaN or infinity, or the computation overflows, the method will merrily pass wrong results on to the outside world.
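
For illustration, this is roughly what such a test looks like. It is a minimal sketch assuming JUnit 5; the class and test names are my own, and Variant A is copied in so the snippet stands alone. The second test merely documents how silently NaN and infinity pass through:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class AddAndSquareTest {

        // Variant A, copied here so the sketch is self-contained
        static double addAndSquare(double a, double b) {
            return (a + b) * (a + b);
        }

        @Test
        void squaresTheSumOfTwoOrdinaryValues() {
            // (2 + 3)^2 = 25; one non-trivial case already covers the whole method
            assertEquals(25.0, addAndSquare(2.0, 3.0), 1e-9);
        }

        @Test
        void silentlyPropagatesNanAndInfinity() {
            // No checks in Variant A: bad inputs produce bad outputs without complaint
            assertTrue(Double.isNaN(addAndSquare(Double.NaN, 1.0)));
            assertTrue(Double.isInfinite(addAndSquare(Double.MAX_VALUE, Double.MAX_VALUE)));
        }
    }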

Variant B:

    double addAndSquare(double a, double b) {
        if (a == null || b == null) throw ...
        if (a > Double.MAX_VALUE) throw new ArithmeticException("double overflow");
        if (b > Double.MAX_VALUE) throw new ArithmeticException("double overflow");
        ...

        double result = (a + b) * (a + b);

        if (result == null) ...
        if (result > Double.MAX_VALUE) ...
        return result;
    }

Now, this variant will be safer in production: it handles a bad state the moment it occurs, makes finding the underlying error much easier, and gracefully stops the program before anything breaks in the real world. The drawback, however, is that it is horribly annoying to write tests for and to reach coverage X, since you'd have to cover all the ifs and try-catch situations.
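
Reaching coverage X for Variant B means roughly one test per guard. A rough sketch, again assuming JUnit 5; since the elided checks are unknown, only the two overflow guards shown above are exercised, and a simplified copy of Variant B is included so the snippet compiles:

    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class AddAndSquareGuardTest {

        // Simplified copy of Variant B (elided checks left out) so the sketch compiles
        static double addAndSquare(double a, double b) {
            if (a > Double.MAX_VALUE) throw new ArithmeticException("double overflow");
            if (b > Double.MAX_VALUE) throw new ArithmeticException("double overflow");
            return (a + b) * (a + b);
        }

        @Test
        void rejectsInfiniteFirstArgument() {
            // Positive infinity is the only double that satisfies a > Double.MAX_VALUE
            assertThrows(ArithmeticException.class,
                    () -> addAndSquare(Double.POSITIVE_INFINITY, 1.0));
        }

        @Test
        void rejectsInfiniteSecondArgument() {
            assertThrows(ArithmeticException.class,
                    () -> addAndSquare(1.0, Double.POSITIVE_INFINITY));
        }

        // ... and so on for every additional guard and catch block
    }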

I come from a background where I'd want to know immediately if anything is wrong, but recently I have observed that developers more experienced than I am focus much more on the testing side than on the runtime sanity checks.

What am I missing?


If your code includes checks for exceptional conditions, your tests should test them, too. Of course, this increases the number of tests, but as your border case checks increase complexity, this is only natural. It is also good practice to test for the uncommon cases, as developers tend to “assume” how the code behaves in these cases, which may easily be wrong.

For example, you've included a check for null parameters, so you should include a test that asserts that the right exception is raised in this case. That test may well fail for a surprising reason: I don't know what language you're using, but if the parameters are primitive doubles, comparing them with null may not even be valid, and some other error will surface instead.

Other tests might point out border condition checks that are unnecessary. For example, testing whether a double value is larger than Double.MAX_VALUE is mostly futile: no finite double can exceed it, so the only input that triggers the check is positive infinity, while the actual overflow inside the computation silently produces infinity without ever tripping the guard. If you cannot write a test that causes an exception to be raised, that is a strong hint that you don't have to include the check at all. Similarly for the check that the result of a double multiplication isn't null; that is simply a condition that cannot happen.
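
A quick standalone experiment (plain Java, independent of either variant) makes this concrete:

    public class DoubleOverflowDemo {

        public static void main(String[] args) {
            double a = Double.MAX_VALUE;
            double b = Double.MAX_VALUE;

            // Overflow does not throw: the sum silently becomes positive infinity
            double sum = a + b;
            System.out.println(sum);                      // Infinity
            System.out.println(sum > Double.MAX_VALUE);   // true, but only because sum IS infinity

            // The result is a primitive double, so "result == null" is not even a legal comparison
            double result = (a + b) * (a + b);
            System.out.println(Double.isInfinite(result)); // true
            System.out.println(Double.isNaN(result));      // false
        }
    }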

For checks that guard against failures such as out-of-memory errors or database session disconnections, which are not easy to provoke in a unit test, you may have to accept that you can't get full test coverage.


It depends on your requirements. Unit tests describe how your code works. If you pass in null and it returns a wrong result, write a test that describes that behavior. That might be perfectly fine.

If you need to handle those cases, then update the code and the corresponding tests that describe the new behavior. It may be annoying, but those tests will tell the next person in line how that function works.
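
For instance, a characterization test for Variant A could simply pin down today's NaN behaviour. This is a sketch assuming JUnit 5, with hypothetical names:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class AddAndSquareBehaviourTest {

        // Variant A, copied so the sketch stands alone
        static double addAndSquare(double a, double b) {
            return (a + b) * (a + b);
        }

        @Test
        void nanInputCurrentlyYieldsNanOutput() {
            // Not a claim that this is desirable; it records what the function does today
            assertTrue(Double.isNaN(addAndSquare(Double.NaN, 2.0)));
        }
    }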

