Dr. Benjamin Hummel

Contacting customer support with a technical issue can feel like being on a quiz show, with the support agent working through a list of standard questions. As you don’t know these questions in advance, providing the information bit by bit can get very tedious.


You: Hello, customer support. The FooBar server is not running as expected. I’m getting errors when updating user information.


Customer Support: Hi, pleased to help you. Which version of the server are you running?


You: Version 17.5.33


Customer Support: Great. Maybe this is a configuration issue. Can you send me the configuration file?


Dr. Dennis Pagano

Many companies employ sophisticated testing processes, but bugs still find their way into production. Often they hide among the subset of changes that was not tested. In fact, we found that untested changes are five times more error-prone than the rest of the system.


To prevent untested changes from going into production, we came up with Test Gap analysis—an automated way of identifying changed code that was never executed before a release. We call these areas test gaps. Knowing them allows test managers to make a conscious decision about which test gaps have to be tested before shipping and which ones are of low risk and can be left untested.


A short while ago, we introduced Test Gap analysis into our code quality software Teamscale.


Dr. Benjamin Hummel

The term technical debt is often used to explain the nature of software quality and quality decay: the option to decide for reduced quality (taking on debt) in exchange for more features or faster time-to-market, the fact that some quality issues will hit you hard later on (interest), or the problem that the accumulation of too many quality issues might make further development of a software system impossible (just as too much debt might break a company). Still, we try to avoid the actual term technical debt, both in our own tools and when dealing with our customers. Our main reason is that the metaphor is often overdone and its users tend to see too many parallels to its financial counterpart.


In a recent software quality audit, our static analysis tool Teamscale found that comment completeness was nearly perfect, but our manual inspection revealed that the majority of the comments had been generated automatically and were therefore of limited use.

This blog post sketches which software quality tasks should be performed by software tools, which by human experts, and which jointly.



One side effect I have observed from performing code reviews for years is that the code after review is usually considerably shorter than it was before. Reducing the size of the code increases maintainability, as there is less code to read and comprehend in the future. This stresses one of the primary goals of code reviews: producing clean and concise code that is easy to understand.



Dr. Corneliu Popeea

In this post, I show the steps to configure Teamscale for such an assessment. This is illustrated using three open-source projects: FindBugs, Google Error Prone, and Microsoft StyleCop. First, I use the Teamscale architecture editor to specify the third-party libraries whose dependencies should be monitored. Then, the architecture perspective shows the static analysis results and allows quick inspection of dependencies on third-party libraries.
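Teamscale's architecture editor is a graphical tool, but the underlying idea—flagging dependencies on third-party libraries that are not explicitly allowed—can be sketched independently of any specific tool. The allow-list entries and the regular expression below are illustrative assumptions, not Teamscale's actual configuration format:

```python
import re

# Toy dependency check: flag Java imports of third-party packages
# that are not on an explicit allow-list. Package names are examples.

ALLOWED_THIRD_PARTY = {"com.google.common", "org.apache.commons"}

IMPORT_RE = re.compile(r"^import\s+([\w.]+)\.\w+;")

def violations(java_source):
    """Return imported third-party packages missing from the allow-list."""
    found = []
    for line in java_source.splitlines():
        match = IMPORT_RE.match(line.strip())
        if match:
            package = match.group(1)
            if not package.startswith(("java.", "javax.")) and \
               not any(package.startswith(p) for p in ALLOWED_THIRD_PARTY):
                found.append(package)
    return found

source = """
import java.util.List;
import com.google.common.collect.ImmutableList;
import org.objectweb.asm.ClassReader;
"""
print(violations(source))
# → ['org.objectweb.asm']
```

A real architecture analysis works on resolved dependencies rather than raw import lines, but the policy decision—allowed versus flagged—has this shape.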


Dr. Christian Pfaller

Analysis tools for your source code, like Teamscale and others, produce a list of findings—or issues—identified in your code. When you go over the list of findings, you will probably encounter some individual findings you will not fully agree with. These issues might not be valid, not a problem, or not worth fixing for you. For these cases, professional quality analysis tools offer blacklisting features. Blacklisting allows you to hide individual findings. The question »What should be put on the blacklist?« will be answered quite differently, depending on whom you ask. Developers may lean toward »everything I cannot fix should be on the blacklist«. A QA manager might answer something like »only false positives may be put on the blacklist«.
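Mechanically, blacklisting amounts to filtering findings by a stable key before they are reported. The sketch below is independent of any specific tool's API; the key format and finding fields are made-up examples:

```python
# Minimal sketch of finding blacklisting: each finding has a stable
# key, and blacklisted keys are filtered out before reporting.
# Key format and data are illustrative.

blacklist = {"clone:src/Parser.java:120", "null-check:src/Util.java:43"}

findings = [
    {"key": "clone:src/Parser.java:120", "message": "Duplicated code"},
    {"key": "long-method:src/Main.java:10", "message": "Method too long"},
]

visible = [f for f in findings if f["key"] not in blacklist]
print([f["key"] for f in visible])
# → ['long-method:src/Main.java:10']
```

The hard part in practice is not the filtering but keeping the keys stable as code moves between lines and files—otherwise blacklisted findings reappear after every refactoring.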



The Software Maintainability Index (MI) is a single-value indicator for the maintainability of a software system.

It was proposed by Oman and Hagemeister in the early nineties.

The Maintainability Index is computed by combining four traditional metrics: it is a weighted composition of the average Halstead Volume per module, the average Cyclomatic Complexity, the number of lines of code (LOC), and the comment ratio of the system.
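One commonly cited variant of the four-metric formula (the version including the comment term) can be computed directly; the coefficients below are taken from the literature, while the input values are purely illustrative:

```python
import math

# One commonly cited variant of the Maintainability Index combining
# average Halstead Volume, average Cyclomatic Complexity, average LOC,
# and the comment ratio. Input values below are illustrative.

def maintainability_index(avg_halstead_volume, avg_cyclomatic_complexity,
                          avg_loc, comment_ratio):
    """comment_ratio is the fraction of comment lines, in [0, 1]."""
    return (171
            - 5.2 * math.log(avg_halstead_volume)
            - 0.23 * avg_cyclomatic_complexity
            - 16.2 * math.log(avg_loc)
            + 50 * math.sin(math.sqrt(2.4 * comment_ratio)))

print(round(maintainability_index(1000, 5, 80, 0.25), 1))
```

Note that several variants of the formula exist (with and without the comment term, and with rescaled ranges), which is one reason the MI is hard to compare across tools.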



Anyone writing code knows that famous sentence: »I’ll clean that up later«.

We have all heard it. We have all uttered it.

And, as my colleague Daniela already remarked: No, you won’t!


Most likely, you’ll forget about that long method you wanted to shorten later. Or that clone you think can be removed. Or that null pointer finding you’re sure to run into once your code goes into production. But there’s no time right now! So you leave it as is and turn to more pressing matters. And the quality problems stay—forgotten.


Keeping track of these issues is a challenge. You want them to be visible to your entire team—they’ll just gather dust in your own personal to-do list. But you don’t want to clutter your team’s issue tracker with every…


Have you ever run into a »new« or »trending« programming language (Rust, nim, Kotlin, etc.), promoting all sorts of great »new« abstraction mechanisms (traits, mixins, multiple-inheritance, monads, lambdas, macros, you name it), thinking: »I could finally get rid of all this copy-pasted or boilerplate code we have in our codebase!«?


I discussed whether newer/trendier languages will lead to less copy-pasted code (or even better code quality in general) with a few colleagues over a few beers recently. Based on our experience as consultants for software quality and Teamscale developers, we quickly came to the conclusion: No, they will not.


Let me explain and give a few examples.

