Why we don’t use the Software Maintainability Index

The Software Maintainability Index (MI) is a single-value indicator for the maintainability of a software system. It was proposed by Oman and Hagemeister in the early nineties [1].

The Maintainability Index is a weighted composition of four traditional metrics: the average Halstead Volume per module, the cyclomatic complexity, the number of lines of code (LOC), and the comment ratio of the system.

The formula [2] for the Maintainability Index, in its commonly cited form, is the following:
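
MI = 171 - 5.2 * ln(avgV) - 0.23 * avgV(g') - 16.2 * ln(avgLOC) + 50 * sin(sqrt(2.4 * perCM))

Here, avgV is the average Halstead Volume per module, avgV(g') the average (extended) cyclomatic complexity per module, avgLOC the average number of lines of code per module, and perCM the average percentage of comment lines per module. (The variable names are only a shorthand; as discussed below, individual tools use slightly adjusted variants of this formula.)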

Several static analysis and software development tools, including Microsoft Visual Studio, compute the Maintainability Index [3].
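
To make this concrete, here is a minimal sketch in Java of what such a computation boils down to. The parameter names are illustrative, and this is not the implementation of any particular tool; Visual Studio, for example, rescales the result to a 0-100 range [3]:

    // Minimal sketch of the Maintainability Index calculation, following the
    // commonly cited formula [2]. Parameter names are illustrative; real tools
    // adjust both the coefficients and the scaling of the result.
    public class MaintainabilityIndexSketch {

        static double maintainabilityIndex(double avgHalsteadVolume,
                double avgCyclomaticComplexity, double avgLinesOfCode,
                double avgPercentComments) {
            return 171
                    - 5.2 * Math.log(avgHalsteadVolume)
                    - 0.23 * avgCyclomaticComplexity
                    - 16.2 * Math.log(avgLinesOfCode)
                    + 50 * Math.sin(Math.sqrt(2.4 * avgPercentComments));
        }

        public static void main(String[] args) {
            // Hypothetical averages for some system: the output is a single
            // number whose meaning is hard to interpret on its own.
            System.out.println(maintainabilityIndex(450.0, 7.5, 35.0, 18.0));
        }
    }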

But: What knowledge can you gain if you know that the Maintainability Index of your system is, say, 57?
If you trust the validity of this Key Performance Indicator (KPI), despite its obscure formula, you can try to compare your system to others. That’s it. Beyond that, it provides no further information and, in particular, will not help you locate problems or determine what you can do to improve.

Because of this lack of expressiveness, we at CQSE are convinced that this KPI is nonsense. We think that it is not sensible to reduce the maintainability of a whole software system to one single indicator.

What makes a good KPI for software quality?

In our opinion, a valid KPI needs to satisfy four criteria [4]:

  • A KPI needs to be objective and unambiguous. It must be clear to every developer how it is calculated, and the way of aggregation must be transparent.
  • It must be predictable how the KPI is affected by changes to the code. In particular, it is very important that the indicator cannot be optimized by making the code quality worse (e.g. “improving” the method length by removing its comments; see the sketch after this list).
  • A KPI needs to be actionable: If it indicates bad performance, actions to improve the situation must be clearly deducible.
  • In order to gain acceptance for the KPI, its impact on development and maintenance activities (such as code reading, testing, and so on) must be clear. The implications of a bad value, e.g. more time-consuming or error-prone development, must be comprehensible.
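
As a hypothetical illustration of the gaming problem mentioned in the second criterion (the method and its domain are made up), deleting the two comment lines below shrinks the reported method length without making the code any easier to maintain:

    // Deleting the two comments "improves" a naive method-length metric,
    // but only makes the code harder to understand.
    class CertificateUtils {
        static int daysUntilExpiry(long issuedAtMillis, long nowMillis) {
            // Certificates are valid for 90 days after issuance.
            long validityMillis = 90L * 24 * 60 * 60 * 1000;
            // Remaining time, rounded down to full days.
            long remainingMillis = issuedAtMillis + validityMillis - nowMillis;
            return (int) (remainingMillis / (24L * 60 * 60 * 1000));
        }
    }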

The Maintainability Index does not satisfy these criteria:

  • First, it is not objective. Different tools compute different results because each adjusts the formula in its own, arbitrary way. Moreover, the complex formula is not easy to comprehend, so not all developers understand and interpret it in the same way.
  • Second, a developer cannot predict how code changes influence the Maintainability Index. For instance, if a developer extracts code out of a method to lower its complexity (and thus improves the overall cyclomatic complexity), they will usually increase the lines of code at the same time (see the sketch after this list). As a change influences multiple factors, an improvement of the code may negatively affect the KPI.
  • Third, due to the unknown influence of code changes, it is unclear what can be done to improve maintainability. For example, there is no way to deduce what a developer should do to raise the value from 57 to 60. The single value is not expressive enough to allow inferring concrete improvement actions.
  • Fourth, the Maintainability Index does not provide information about the impact on development activities. A value of 57 does not express which maintainability aspects are actually affected: Is the code difficult to understand, are changes expensive, are extensions difficult, or is the system unreliable? The Maintainability Index does not answer these questions.
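
To see the second point in a small hypothetical refactoring (the Order example is made up): extracting helper methods lowers the cyclomatic complexity of each individual method, while the additional signatures and braces add lines of code. How both effects combine in the logarithms and averages of the formula above is anything but obvious to a developer:

    // Hypothetical example: an extract-method refactoring and its effect on the
    // metrics that feed the Maintainability Index.
    record Order(boolean express, double weightInKg) {}

    class Before {
        // One method with cyclomatic complexity 4.
        int shippingCosts(Order order) {
            if (order.express()) {
                if (order.weightInKg() > 30) {
                    return 25;
                }
                return 15;
            }
            if (order.weightInKg() > 30) {
                return 12;
            }
            return 5;
        }
    }

    class After {
        // Each method now has cyclomatic complexity 2, but the extra signatures
        // and braces increase the total number of lines of code.
        int shippingCosts(Order order) {
            if (order.express()) {
                return expressCosts(order);
            }
            return standardCosts(order);
        }

        int expressCosts(Order order) {
            if (order.weightInKg() > 30) {
                return 25;
            }
            return 15;
        }

        int standardCosts(Order order) {
            if (order.weightInKg() > 30) {
                return 12;
            }
            return 5;
        }
    }

Whether the index rises or falls after such a change depends on how the averaged metrics and coefficients interact, which is exactly the unpredictability criticized above.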

Moreover, the Maintainability Index ignores other relevant factors, such as the code coverage of regression tests that would reveal faults newly introduced during changes.

How to measure code quality instead?

For these reasons, our quality analysis tool Teamscale does not compute the Maintainability Index. When we assess the maintainability and complexity of a software system, we use manual reviews and multiple fine-grained checks and indicators, such as method length, nesting depth, the ratio of duplicated code, and more, to gain insights into a system.

A dashboard in Teamscale showing software quality KPIs for a system

Read also what we think about McCabe’s Cyclomatic Complexity.

[1] Oman, Paul, and Jack Hagemeister. “Metrics for assessing a software system’s maintainability.” In Proc. Conference on Software Maintenance. IEEE, 1992.
[2] Coleman, Don, et al. “Using metrics to evaluate software system maintainability.” Computer 27.8 (1994): 44-49.
[3] msdn.microsoft.com/en-us/library/bb385914.aspx
[4] https://www.cqse.eu/publications/2015-managing-product-quality-in-complex-software-development-projects.pdf