McCabe's Cyclomatic Complexity and Why We Don't Use It
Dr. Benjamin Hummel
In nearly every audience we present our analysis and quality management tool to, there is at least one person asking whether we also measure Cyclomatic Complexity. After all, almost every existing software quality tool calculates this metric and so it is very well known. Much to their surprise, our answer is usually no and I want to explain our rationale in this post.
Just to make sure we are talking about the same thing, Cyclomatic Complexity is a metric that was introduced by Thomas McCabe in 1976 and aims at capturing the complexity of a method in a single number. Actually, the original publication talks about the more general term module, but today this typically means function or method for most languages.
The metric itself is based on graph measures on the control flow graph of a method and describes the non-linearity of this graph. For structured programming (no goto) the metric is roughly equivalent to one plus the number of loops and if statements.
For example, the Cyclomatic Complexity of the following Java method is 3:
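For instance, a method with one loop and one if statement (a hypothetical illustration; any method with two such branch points yields the same value) has complexity 1 + 2 = 3:

```java
public class CountExample {
    // One for loop (+1) and one if (+1) on top of the base value of 1
    // give a cyclomatic complexity of 3.
    public static int countPositives(int[] values) {
        int count = 0;
        for (int value : values) {
            if (value > 0) {
                count++;
            }
        }
        return count;
    }
}
```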
If you want the full scientific treatment, see the original publication: T. J. McCabe, “A Complexity Measure”, IEEE Transactions on Software Engineering, 1976.
Cyclomatic Complexity != Cyclomatic Complexity
Unless you prefer pen and paper for determining cyclomatic complexity, you will use one of the many existing tools for software metrics calculation out there. Unfortunately, the original paper is vague on some details of the metric, such as how to derive the control flow graph, and hence different implementations often result in different measured complexity values for the same code.
For example, the following simple Java code snippet is reported with complexity 2 by the Eclipse Metrics Plugin, with 4 by GMetrics, and with complexity 5 by SonarQube:
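The decisive feature of such a snippet is a conjunction inside an if condition. A hypothetical example with a three-clause conjunction shows the effect: tools that count the entire if as one branch report 2, while tools that count each clause separately report 4:

```java
public class ValidationExample {
    // A single if whose condition is a conjunction of three clauses.
    // Counting the if as one branch yields complexity 2; counting each
    // clause as its own branch yields 4.
    public static boolean isValidName(String name) {
        if (name != null && !name.isEmpty() && name.length() < 32) {
            return true;
        }
        return false;
    }
}
```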
The reason is that one tool counts each condition of the conjunction as a separate branch in the control flow graph, while another counts the entire if as a single branch. The first behavior (value 4) is the one recommended in McCabe’s paper, but the latter is found in many tools, as it is easier to implement (I have no explanation for the value 5, which seems to be too high).
Throw in new language features, such as the lambdas from C# and Java 8, and you are in for even more variations. So if you want to measure and compare complexity, stick with one of the tools and don’t copy thresholds from other sources blindly.
What do you want to measure?
Besides tools measuring different things for the same metric (which happens for other metrics as well), the main obstacle is the interpretation of the results. The simplest interpretation is that cyclomatic complexity is an upper bound for the number of test cases required to achieve branch coverage of the code. In the context of testing, cyclomatic complexity can therefore be used to estimate the effort required for writing tests. In my opinion, this is a valid use case for the metric.
Often, however, high “complexity” is translated directly into low readability and high maintenance costs. It turns out that the complexity of the control flow graph of a piece of code is not necessarily what a human would perceive as complex.
For example, the following example results in a complexity value of 14, which is way beyond the threshold typically recommended for a single method.
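A plausible reconstruction, assuming a 12-way switch over the months of the year (the method name and exact shape are my own invention): tools that count each of the twelve case labels plus the default arrive at 1 + 13 = 14, well beyond common per-method thresholds.

```java
public class MonthExample {
    // Twelve case labels plus a default label: tools that count every
    // label reach 1 + 13 = 14, even though the method is trivially readable.
    public static String getMonthName(int month) {
        switch (month) {
            case 1: return "January";
            case 2: return "February";
            case 3: return "March";
            case 4: return "April";
            case 5: return "May";
            case 6: return "June";
            case 7: return "July";
            case 8: return "August";
            case 9: return "September";
            case 10: return "October";
            case 11: return "November";
            case 12: return "December";
            default: throw new IllegalArgumentException("Invalid month: " + month);
        }
    }
}
```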
Obviously, the complexity of the control flow graph is high and I would need many tests to completely test this method. But I feel fairly confident that I can understand and change that method without much effort even years after writing it (if I ever had to add a 13th month).
A more compelling example is the pair of methods below. Both have a cyclomatic complexity of 5, yet most developers would judge their complexity to be very different.
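The method sumOfNonPrimes below matches the name used in the discussion of nesting depth further down; its exact body, and the second method entirely, are my illustrative reconstruction. Both have a cyclomatic complexity of 5: two loops and two ifs in the first, four ifs in the second.

```java
public class ComplexityExamples {
    // Complexity 5: outer for (+1), guard if (+1), inner for (+1),
    // divisibility if (+1), plus the base value of 1. Deeply nested
    // and genuinely hard to follow.
    public static long sumOfNonPrimes(int limit) {
        long sum = 0;
        OUTER:
        for (int i = 0; i < limit; ++i) {
            if (i <= 2) {
                continue;
            }
            for (int j = 2; j < i; ++j) {
                if (i % j == 0) {
                    sum += i;
                    continue OUTER;
                }
            }
        }
        return sum;
    }

    // Also complexity 5: four ifs plus the base value of 1,
    // but trivially easy to read and change.
    public static String classifyAge(int age) {
        if (age < 13) {
            return "child";
        } else if (age < 20) {
            return "teenager";
        } else if (age < 65) {
            return "adult";
        } else if (age < 120) {
            return "senior";
        }
        return "implausible";
    }
}
```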
But we want to measure complexity!
The sad truth is that there probably is no single simple measurement that can express an abstract concept such as complexity in a single number.
But this does not imply that we cannot measure and control complexity. It just has to be done with multiple metrics and checks that cover the various aspects of complexity.
For example, one very simple measure is the length of a method. While many dismiss this metric as too simplistic, most agree that longer methods are harder to understand and maintain, which makes it a useful part of a set of metrics covering complexity.
Another metric we like to look at is nesting depth: the maximum number of control structures (loops, if statements) nested within each other. The method sumOfNonPrimes from the example above has a nesting depth of 3.
Nested statements tend to expand the context a developer has to understand before changing a line. A further benefit is that we can point a developer directly at a deeply nested statement, instead of annotating an entire method with the label “too complex” and letting the developer figure out where the complexity is hidden.
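To make the definition concrete, here is a deliberately crude sketch of how nesting depth could be approximated by scanning source text. This is illustrative only and not how any particular tool works: a real implementation would parse the code, and this sketch ignores comments, string literals, `else`, and `do` blocks.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NestingDepth {
    // Matches either a control-structure header that ends in '{',
    // a plain '{', or a '}'.
    private static final Pattern TOKEN =
            Pattern.compile("\\b(?:if|for|while|switch)\\b[^{}]*\\{|\\{|\\}");

    public static int maxNestingDepth(String source) {
        Deque<Boolean> braces = new ArrayDeque<>(); // true = control-structure brace
        int depth = 0;
        int max = 0;
        Matcher m = TOKEN.matcher(source);
        while (m.find()) {
            String token = m.group();
            if (token.equals("}")) {
                // Closing a control structure reduces the current depth.
                if (!braces.isEmpty() && braces.pop()) {
                    depth--;
                }
            } else {
                boolean isControl = !token.equals("{");
                braces.push(isControl);
                if (isControl) {
                    depth++;
                    max = Math.max(max, depth);
                }
            }
        }
        return max;
    }
}
```

Run on a method with two nested loops and an innermost if, this reports a depth of 3; on a method with a single if, it reports 1.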
To complement general metrics, we typically employ checks for specific quirks of the languages. For example, you might want to discourage nesting of lambda expressions, or loops within switch statements.
I hope this sheds some light on the pitfalls of measuring complexity; I would be happy to hear about your experience with complexity metrics in the comments.