Rainer Niedermayr

… is a consultant for software quality at CQSE. He studied information systems at the Technical University of Munich and at Aalto University in Espoo, Finland. He then worked for two years as a software engineer at a medium-sized software company. In parallel to his consulting work at CQSE, he is pursuing a PhD in the field of software testing.

  • +49 163 7163900
  • @nrainer2

Blog Posts

Issue metrics are an exciting feature of Teamscale. They allow you to analyze and visualize the issues (a.k.a. tickets) of an issue tracker using a query language. Benjamin presented this feature in a previous blog post.

In this post, I will show how issue metrics can be used in threshold configurations to assess the number of critical bugs.


Teamscale computes a number of metrics for each analyzed project (lines of code, clone coverage, comment completeness, …) and supports uploading further metrics from external systems (e.g. from the build server). The computed metrics are updated with every commit and provide an overview of the state of a software project. As it is sometimes hard to tell what is good and what is bad, the next Teamscale version will be equipped with a powerful new feature that provides an assessment of metric values based on (built-in or custom) threshold configurations. It will help the user interpret the values.
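To illustrate the idea (this is a generic sketch, not Teamscale's actual API or configuration format), a threshold configuration essentially maps a metric value to an assessment color:

```python
# Illustrative sketch only: a minimal threshold assessment for a metric
# where lower values are better (e.g. "number of critical bugs").
# The function name and thresholds are hypothetical, not from Teamscale.

def assess(value, yellow_threshold, red_threshold):
    """Map a metric value to a traffic-light assessment."""
    if value >= red_threshold:
        return "RED"
    if value >= yellow_threshold:
        return "YELLOW"
    return "GREEN"

# Hypothetical thresholds for critical bugs: 1 is tolerable, 5 is not.
print(assess(0, yellow_threshold=1, red_threshold=5))  # GREEN
print(assess(3, yellow_threshold=1, red_threshold=5))  # YELLOW
print(assess(7, yellow_threshold=1, red_threshold=5))  # RED
```

The benefit of such a configuration is that the raw metric value no longer needs to be interpreted by every reader; the assessment encodes the project's quality goals once, centrally.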


The Software Maintainability Index (MI) is a single-value indicator for the maintainability of a software system. It was proposed by Oman and Hagemeister in the early nineties [1]. The Maintainability Index is computed by combining four traditional metrics. It is a weighted composition of the average Halstead Volume per module, the Cyclomatic Complexity, the number of lines of code (LOC) and the comment ratio of the system.
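For reference, the commonly cited form of this composition (the later variant including the comment term, as usually attributed to the Oman/Hagemeister line of work; stated here from general knowledge, not from the text above) is:

MI = 171 − 5.2·ln(aveV) − 0.23·aveG − 16.2·ln(aveLOC) + 50·sin(√(2.4·perCM))

where aveV is the average Halstead Volume per module, aveG the average cyclomatic complexity, aveLOC the average lines of code per module, and perCM the percentage of comment lines. Higher values indicate better maintainability.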



Roman Haas, Rainer Niedermayr, Tobias Roehm, Sven Apel:

Is Static Analysis Able to Identify Unnecessary Source Code?

ACM Transactions on Software Engineering and Methodology, Vol. 1, 2020 (to appear).

Rainer Niedermayr, Stefan Wagner:

Is the Stack Distance Between Test Case and Method Correlated With Test Effectiveness?

Proceedings of the 23rd International Conference on Evaluation and Assessment in Software Engineering (EASE’19), 2019.

Roman Haas, Rainer Niedermayr, Tobias Roehm, Sven Apel:

Poster: Recommending Unnecessary Source Code Based on Static Analysis.

Proceedings of the 41st International Conference on Software Engineering Companion (ICSE’19), 2019.

Roman Haas, Rainer Niedermayr, Elmar Juergens:

Teamscale: Tackle Technical Debt and Control the Quality of Your Software.

Proceedings of the 2nd International Conference on Technical Debt (TechDebt’19), 2019.

Rainer Niedermayr, Tobias Roehm, Stefan Wagner:

Poster: Identification of Methods with Low Fault Risk.

Proceedings of the 40th International Conference on Software Engineering Companion (ICSE’18), 2018.

Jakob Rott, Rainer Niedermayr, Elmar Juergens, Dennis Pagano:

Ticket Coverage: Putting Test Coverage into Context.

Proceedings of the 8th Workshop on Emerging Trends in Software Metrics (WETSoM’17), 2017.

Rainer Niedermayr, Elmar Juergens, Stefan Wagner:

Will My Tests Tell Me If I Break This Code?

Proceedings of the International Workshop on Continuous Software Evolution and Delivery (CSED’16), 2016.