Andi Scharfstein

It's the end of spring in 2021, the sun is coming out, and it's starting to look like this is the year we can finally begin beating back the COVID-19 pandemic in earnest. However, there is still a stretch of time to get through before regular life can hopefully resume, so I want to share how we are dealing with the pandemic at CQSE, the company where I work.


Since this post accompanies events held in German, the original post is written in German.

Anyone interested in the software quality of their own project can have CQSE guide them through setting up Teamscale in a Bring Your Own Code Workshop and then discuss the analysis results with CQSE in a joint meeting.

This blog post describes the structure and preparation of the workshop.

Fabian Streitel

Many companies rely on manual tests as a cornerstone of their quality-assurance strategy. Unlike automated tests, manual tests (especially exploratory tests) are very flexible but also very costly. Thus, we must make sure that the effort we put into manual testing is spent as effectively as possible, i.e., that we maximize our chance of catching defects.

This post explores how Test Gap analysis provides some much needed transparency for manual and exploratory testing.

Jakob Rott

In times when many of us work from home due to COVID-19, we thought it would be nice to have some interesting articles about software quality at hand.

This post outlines two collections: the 5 most read blog posts from 2019 and hand-picked articles tackling questions that frequently pop up in our work.

Due to COVID-19, many of us work from home and conferences cannot take place. Therefore, in this blog post we collect video recordings of the following three talks and one interview.

Jakob Rott

Since this post links to video recordings in German, the original post is written in German.

Since many people are currently working from home due to COVID-19 and conferences, among other events, cannot take place, we have compiled five videos of conference talks here.

The video for each talk can be found directly in the respective blog post.

Dr. Sven Amann

Our Test Gap analysis and Test Impact analysis automatically monitor the execution of your application to determine which code is covered by tests.

As a result, they are objective with respect to what has been tested and, more importantly, what hasn’t. No personal judgement or gut feeling involved.

However, when we first set up the analyses with our customers, we often find that the measurements differ (significantly!) from their expectations. Frequently, this is because other coverage tools report different coverage numbers.

This post explores causes for such differences.
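One common cause of such differences is that tools measure different things. The following toy sketch (a hypothetical illustration, not how Teamscale or any particular tool computes coverage) shows how line coverage and branch coverage can legitimately disagree for the very same test run:

```python
# Hypothetical sketch: the same test run scored with two common coverage
# metrics. Neither number is "wrong"; they simply measure different things.

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

# Suppose the test suite only ever calls classify(1): the False branch of
# the if is never taken, and the final return is never executed.
executed_lines = {"def", "if", "return positive"}
all_lines = {"def", "if", "return positive", "return non-positive"}
executed_branches = {("if", True)}
all_branches = {("if", True), ("if", False)}

line_coverage = len(executed_lines) / len(all_lines)          # 3/4 = 75%
branch_coverage = len(executed_branches) / len(all_branches)  # 1/2 = 50%

print(f"line coverage:   {line_coverage:.0%}")
print(f"branch coverage: {branch_coverage:.0%}")
```

A tool reporting line coverage and a tool reporting branch coverage would thus show 75% and 50% for the identical execution.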

We are happy to announce that Springer released "The Future of Software Quality Assurance", to which Elmar and I contributed a chapter on Change-Driven Testing.

The entire book, published to mark the 15th anniversary of the International Software Quality Institute (iSQI), is Open Access and may be downloaded from Springer’s website (in English).

You may also download only our chapter in English or the German translation.

Dr. Andreas Göb

As a consultant, I often talk to customers who have large amounts of custom ABAP code in their SAP systems and spend equally large efforts on testing all of it over and over again, since it is hard to know exactly what to test after changing certain parts.

Since costs matter a lot these days, many of these customers are looking for ways to spend their test budget more efficiently. One way to do so is to use tools that spot the areas where testing is more likely to find bugs. In this post, I will compare two such tools: SAP's Business Process Change Analyzer (BPCA) and CQSE's Teamscale, which offers Test Gap Analysis (TGA) and will provide features for Test Impact Analysis (TIA) in the future.

Many software projects use online tools like GitLab, GitHub, Jira, and Gerrit for collaboration between developers. There, developers discuss code, review features, and check whether the automated tests passed.

However, the impact of a merge on code maintainability is hard to judge in such tools, because it is difficult to make decisions based on a simple code diff. Some maintainability problems introduced by a change (such as new architecture violations or copy-pasted code) are impossible to spot when seeing only the changed code.

In this blog post, I illustrate how Teamscale results can easily be integrated into existing online collaboration tools. This helps make existing code-review processes more thorough and efficient.
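As a rough sketch of what such an integration can look like (not Teamscale's actual implementation; the server URL, project ID, token, and finding text below are hypothetical), an analysis result can be pushed into a review tool as a merge-request comment. GitLab, for instance, accepts such comments via its REST endpoint `POST /projects/:id/merge_requests/:iid/notes`:

```python
import json
import urllib.request

def build_note_request(base_url, project_id, mr_iid, token, body):
    """Build the HTTP request that posts `body` as a merge-request comment."""
    url = f"{base_url}/api/v4/projects/{project_id}/merge_requests/{mr_iid}/notes"
    data = json.dumps({"body": body}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical finding reported by a quality-analysis run:
finding = "New architecture violation: module `ui` now depends on `db`."
req = build_note_request("https://gitlab.example.com", 42, 7, "TOKEN", finding)
print(req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

This way, a maintainability finding appears directly in the merge-request discussion, next to the diff the reviewers are already looking at.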
