Rainer Niedermayr

Teamscale computes a number of metrics for each analyzed project (lines of code, clone coverage, comment completeness, …) and supports uploading further metrics from external systems (e.g. from the build server). The computed metrics are updated with every commit and provide an overview of the state of a software project.

As it is sometimes hard to tell what is good and what is bad, the next Teamscale version will be equipped with a powerful new feature that provides an assessment of metric values based on (built-in or custom) threshold configurations. It will help the user interpret the values.
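The core idea can be sketched in a few lines. This is a hypothetical illustration, not Teamscale's actual API: the function name, rating labels, and thresholds are made up, and it assumes a metric where higher values are worse (such as clone coverage in percent).

```python
def assess(value, yellow_threshold, red_threshold):
    """Map a metric value to a rating based on two thresholds.

    Assumes higher values are worse (e.g. clone coverage in percent),
    so the rating degrades as the value crosses each threshold.
    """
    if value >= red_threshold:
        return "RED"
    if value >= yellow_threshold:
        return "YELLOW"
    return "GREEN"


# Example: a clone coverage of 12 % against thresholds of 5 % / 10 %.
print(assess(12.0, yellow_threshold=5.0, red_threshold=10.0))  # RED
```

A custom threshold configuration would then simply supply different `yellow_threshold`/`red_threshold` pairs per metric.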


Thomas Kinnen

A year ago we released the first version of our web-based architecture editor. It lacked one major feature: saving architectures directly into Teamscale from the browser. This will change with the upcoming release 2.1.




In this post I will outline how to create a new architecture for an existing system, demonstrating the usage and usefulness of real-time architecture analysis.



During the last years, we have deeply integrated analyses for ABAP into Teamscale (and formerly into ConQAT). These analyses are used by SAP customers to keep the quality of their custom code high. However, since SAP is introducing many new programming technologies, more and more SAP users are confronted with new languages besides ABAP. One of these is SAP HANA SQLScript, which is used to develop high-performance stored procedures for the SAP HANA in-memory database. Unfortunately, SAP does not provide any static code analysis for SQLScript (in contrast to SAP Code Inspector for ABAP). Moreover, there are so far no precise guidelines on how to develop good SQLScript code. In this post I’ll present our initial thoughts on assessing the code quality of SQLScript.



Dr. Benjamin Hummel

Teamscale comes equipped with its own task manager. As most development teams already use an issue tracker (such as Jira or Redmine), a common question is how Teamscale tasks relate to tickets in the issue tracker and whether one should replace the other. Not surprisingly, there is no simple yes/no answer.


Dr. Lars Heinemann

A Growing User Base


In the early days of Teamscale, when the number of users was still moderate, we used a rather ad hoc way of providing support to our customers. In most cases, support requests were directly handled by the Teamscale product development team. This worked perfectly fine at that time. Customers and evaluators got the most qualified help possible, and the development team in turn received unfiltered feedback from Teamscale users.

Fortunately, the user base of Teamscale grew significantly over the past years and, as expected, we were facing an increased number of support requests with a broad variety of topics, ranging from general questions regarding installation, configuration, and usage to feature requests and bug reports.




Dr. Nils Göde

For me, one of the most appealing quality improvement actions is still the deletion of code. Not just any code, of course, but code that is of no more use. Obviously, this refers to dead code: code that is not reachable at all and will therefore never be executed. This includes unused variables and fields as well as methods that are never called and commented-out code (Teamscale can tell you where these are). However, this is only part of the truth. More importantly, it refers to all the experimental code, the one-time migration code, the code of the previous version, the code of the supplementary tool that no one uses anymore, and so on. In most systems we see, this accounts for a notable part of the total code volume. Beyond that, it might be…


Dr. Andreas Göb

When talking to customers about improving the quality of their code, one question that always comes up is where to start. And, as always in the software industry, the answer is »It depends«. We have already covered this topic on this blog in the past (e.g., here, here). This time, I would like to add another dimension to the question, namely the actual usage of code in production.



Dr. Florian Deißenböck

In our code audits we primarily focus on issues that affect the maintainability of the code, e.g., code duplication caused by copy&paste programming. When presenting the audit results, a common reaction by the development team is: »Yes, you’re right. This is ugly. However, it is not a problem for us because we will never change this code!« My reaction: »Great. Compile the code to a binary library and delete the source.« This always provokes second thoughts. Not only because source code also serves as documentation that would be lost, but also because the team suddenly doubts its own assessment of the stability of the code. So far, not a single development team has followed my suggestion.


With examples from real-world systems, this post discusses the multitude of…


Dr. Benjamin Hummel

Contacting customer support with a technical issue can feel like you are in a quiz show, with the support agent going through a list of standard questions. As you don’t know these questions, providing the information bit by bit can get very tedious.


You: Hello, customer support. The FooBar server is not running as expected. I’m getting errors when updating user information.


Customer Support: Hi, pleased to help you. Which version of the server are you running?


You: Version 17.5.33


Customer Support: Great. Maybe this is a configuration issue. Can you send me the configuration file?


Dr. Dennis Pagano

Many companies employ sophisticated testing processes, but bugs still find their way into production. Often, they hide among the subset of changes that were not tested. In fact, we found that untested changes are five times more error-prone than the rest of the system.


To prevent untested changes from going into production, we came up with Test Gap analysis: an automated way of identifying changed code that was never executed before a release. We call these areas test gaps. Knowing them allows test managers to make a conscious decision about which test gaps have to be tested before shipping and which ones are of low risk and can be left untested.


A short while ago, we introduced Test Gap analysis into our code quality software Teamscale.

