Cyclomatic complexity vs performance

As far as I understand the concept, CC is determined by how much nested branching logic a given method has. Nested conditions can be refactored into guard clauses: check the opposite of the original predicate and return early. For example:

void Foo()
{
  if (predicate0)
  {
    if (predicate1)
    {
      // some code
    }
  }
}

can be refactored to

void Foo()
{
  if (!predicate0) return;

  if (!predicate1) return;

  // some code
}

but the thing is, if the predicate0 && predicate1 path is taken much more frequently than the early exits, following that branch will generate fewer cache misses in C++.
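In other words, the shape I have in mind is the combined condition:

void Foo()
{
  // Short-circuit &&: both checks still happen, but the hot path
  // falls through in one straight line when both predicates hold.
  if (predicate0 && predicate1)
  {
    // some code
  }
}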

Obviously, I am intentionally simplifying the code. In practice there can be dozens of predicates, and complications such as one predicate depending on another, as in the sketch below.
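For instance, a dependent predicate might look like this (Widget and IsReady are made-up names, purely for illustration):

struct Widget
{
  bool IsReady() const;  // declaration only; a stand-in for real logic
};

void Foo(const Widget* w)
{
  if (w == nullptr) return;    // predicate0: we have a valid object

  if (!w->IsReady()) return;   // predicate1 can only be evaluated once predicate0 holds

  // some code
}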

What should I do in this case? Should I sacrifice performance for better readability?

3

Unless you really need that performance, always prefer better readability.

The biggest performance wins do not come from micro-optimizations, but from algorithmic complexity improvements, and from other ways to avoid doing unnecessary work. Not doing something is always faster than doing something as fast as possible. Cleaner code makes it easier to spot such opportunities.
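For example (purely illustrative, using standard containers), replacing a linear scan with a hash lookup is the kind of change that dwarfs any branch micro-tuning:

#include <algorithm>
#include <unordered_set>
#include <vector>

// O(n) per lookup: scans the vector in the worst case.
bool Contains(const std::vector<int>& values, int x)
{
  return std::find(values.begin(), values.end(), x) != values.end();
}

// O(1) average per lookup: an algorithmic improvement, not a micro-optimization.
bool Contains(const std::unordered_set<int>& values, int x)
{
  return values.count(x) != 0;
}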

If you do need that performance, and can prove with a profiler and benchmarking tools that this change will give you that performance, then of course go for it. But you are still at the mercy of your compiler to get machine code that is branch-prediction friendly. The compiler might suddenly decide that a branch it previously considered unlikely is in fact likely, or vice versa. To get around this, look into profile-guided optimization, and/or annotations and compiler hints that let you unambiguously mark a branch as likely or unlikely.
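For example, in C++20 you can annotate the guard clauses directly (a sketch; pre-C++20, GCC and Clang offer __builtin_expect for the same purpose):

void Foo(bool predicate0, bool predicate1)
{
  // C++20 attributes: mark the early exits as cold so the compiler
  // lays out the fall-through path for the common case.
  if (!predicate0) [[unlikely]] return;

  if (!predicate1) [[unlikely]] return;

  // some code (the hot path)
}

// Pre-C++20 equivalent on GCC/Clang:
// if (__builtin_expect(!predicate0, 0)) return;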

4

If you don’t already know for sure that you should sacrifice readability for performance, because you have a performance problem and profiler data proving that the problem lies exactly here, then definitely throw away the performance.

For code that does exactly the same thing, written once readably and once unreadably, an optimising compiler is more likely to produce the fastest machine code than you are by rearranging bits of code by hand.

If you have code that is unreadable because it is unnecessarily complicated, the “unnecessarily complicated” will tend to make it slower. Simplifying the code will often have the effect of doing less work, AND making it easier for the optimiser to produce highly optimised code, and by good coincidence make the code more readable.
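A sketch of that effect (IsFeatureEnabled, Process, and the loop are made up for illustration): hoisting an invariant check out of a loop makes the code simpler and does strictly less work:

#include <string>
#include <vector>

// Made-up helpers, declarations only:
bool IsFeatureEnabled(const std::string& name);
void Process(int item);

// Before: the invariant check is re-evaluated on every iteration.
void ProcessAllBefore(const std::vector<int>& items)
{
  for (int item : items)
  {
    if (IsFeatureEnabled("fast_path"))
      Process(item);
  }
}

// After: simpler control flow, and the check runs exactly once.
void ProcessAllAfter(const std::vector<int>& items)
{
  if (!IsFeatureEnabled("fast_path")) return;

  for (int item : items)
    Process(item);
}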

And it is much easier to improve readable code. If it’s hard to figure out what your code does in the first place, then it’s also hard to find faster code with the exact same effect.
