Dr. Christian Pfaller

In this blog post I’d like to give a few insights into how we do ABAP programming at CQSE, which might be quite different from how most other ABAP development teams work.


Dr. Dennis Pagano

Test Gap Analysis identifies changed code that was never tested before a release. Often, these code areas, the so-called "test gaps", are far more error-prone than the rest of the system. We have introduced Test Gap Analysis in many projects across a wide range of domains: from enterprise information systems to embedded software, from C/C++ to Java, C#, Python, and even ABAP. In this post, I want to highlight a few important factors that are good to know before starting with Test Gap Analysis.


The issue metrics are an exciting feature of Teamscale. They allow you to analyze and visualize issues (a.k.a. tickets) of an issue tracker using a query language. Benjamin presented this feature in a previous blog post.

In this post, I will show how issue metrics can be used in threshold configurations to assess the number of critical bugs.



On our mission to help development teams improve their products’ software quality, we strive to support their development process as best we can. To do this, Teamscale has always analyzed every commit on the mainline of the repository for potential problems.

However, many teams (including our own) have recently switched from a single development line to a branch-based development approach (e.g., git flow, merge requests, or manual Subversion/Team Foundation Server branches).

In this post, I will present what branch support looks like in Teamscale and how developers can work with it to create the best code possible.


Noha Khater

At CQSE, our mission is to improve the quality of software systems in all domains and across a wide spectrum of technologies. Our latest step towards that goal is the integration of Simulink in Teamscale.

Simulink is deeply integrated into Teamscale: in addition to being added to our growing list of over 20 supported programming languages, it comes with its own Simulink Viewer, advanced rendering of Simulink blocks and Stateflow models, its own set of metrics and findings, and more.

In this post, I present the new features added with the Simulink support in detail, and explain how they can be used.


Dr. Benjamin Hummel

Today, I would like to focus on an exciting new feature we added to Teamscale with the latest version: issue metrics.

While Teamscale could already connect to your issue tracker (such as Jira or Redmine) and link commits to individual issues, this new feature provides deeper insights into your existing issue data.


Dr. Lars Heinemann

In this blog post I’d like to highlight a small yet very handy feature of Teamscale’s metric trend charts that helps you effectively find the root cause of conspicuous quality changes in the history of your code base.

Teamscale’s metric trends allow you to inspect how your source code evolved with regard to different quality criteria. With its incremental analysis engine, Teamscale analyzes the effects of every single commit on the quality metrics and thereby provides detailed trend information. For instance, to analyze how the amount of copy-and-paste programming changed, you can easily bring up a trend chart for the clone coverage by clicking on the respective metric value in the metrics perspective.



Dr. Nils Göde

Many static analysis tools excel at analyzing code that is written in popular programming languages like Java and C# for which there are tons of freely available resources like code examples and documentation. There is a substantial common understanding of coding best practices, frequent pitfalls and recurring code smells. In addition, existing libraries to parse and analyze code written in these programming languages make it easy to support these in a static analysis tool.

So does Teamscale.

However, there is a large number of programming languages like Structured Text and Magik that are less popular but, nonetheless, highly relevant for certain domains. Maintenance of code written in these languages faces the same challenges as maintenance of code…


Fabian Streitel

If you are interested in improving the quality, and specifically the maintainability, of the software you produce, then you will very likely have asked yourself one of these questions:

"How well is my system doing in comparison to others?"

"Which of our projects are not doing so well?"

"Where do I have a serious problem and need to act right away?"

"How can I make sure that my system is improving over time?"

And most likely, you will also have asked yourself: "Is there not one single number that can answer all of these questions for me?"

Can we take all the complex quality measurements that are possible (clone coverage, test coverage, structure metrics, open field defects, pending reviews, …) and aggregate them into one KPI?


Dr. Andreas Göb

Throughout this blog, there have been several posts explaining how you can use Teamscale to keep your code base clean and prevent quality decay. Since we have a lot of enterprise customers, we are often confronted with the question whether our approach works only with the modern languages and infrastructure used by small software vendors and start-ups, or whether we can also offer analyses for mature technologies mainly used in the enterprise.

