Fabian Streitel

Many companies rely on manual tests as a cornerstone of their quality assurance strategy. Unlike automated tests, manual tests, especially exploratory tests, are very flexible (you can decide to start testing anything at any time) but at the same time very costly. We must therefore make sure that the effort we put into manual testing is spent as effectively as possible, i.e., that we maximize our chance of catching defects.

This post explores how Test Gap analysis provides some much needed transparency for manual and exploratory testing.

Read more...

Jakob Rott

Since many of us work from home due to COVID-19, we thought it would be nice to have some interesting articles about software quality at hand.

This post outlines two collections: the 5 most read blog posts from 2019 and hand-picked articles tackling questions that frequently pop up in our work.

Read more...

Due to COVID-19, many of us work from home and conferences cannot take place. In this blog post, we have therefore collected video recordings of three talks and one interview.

Read more...

Jakob Rott

Since the video recordings linked in this post are in German, the original post is written in German, too.

Because many of us are currently working from home due to COVID-19 and conferences, among other events, cannot take place, we have compiled five videos of conference talks here.

The videos of the talks can be found directly in the respective post for each talk:

Read more...

Our Test Gap analysis and Test Impact analysis automatically monitor the execution of your application to determine which code is covered by tests.

As a result, they are objective with respect to what has been tested and, more importantly, what hasn’t. No personal judgement or gut feeling involved.

However, when we first set up the analyses with our customers, we often find that the measurements differ (significantly!) from their expectations. Often, this is because other coverage tools report different coverage numbers.

This post explores causes for such differences.

Read more...

We are happy to announce that Springer released "The Future of Software Quality Assurance", to which Elmar and I contributed a chapter on Change-Driven Testing.

The entire book, published to mark the 15th anniversary of the International Software Quality Institute (iSQI), is Open Access and may be downloaded from Springer’s website (in English).

You may also download only our chapter in English or the German translation.

Read more...

As a consultant, I often talk to customers who have large amounts of custom ABAP code in their SAP systems and spend equally large efforts on testing all of it over and over again, since it is hard to know what exactly to test after changing certain parts.

Since costs matter a lot these days, many of these customers are looking for ways to spend their test budget more efficiently. One way to do so is to use tools that spot the areas where testing is more likely to find bugs than elsewhere. In this post, I will compare two such tools, namely SAP’s Business Process Change Analyzer (BPCA) and CQSE’s Teamscale, which offers Test Gap Analysis (TGA) and will provide features for Test Impact Analysis (TIA) in the future.

Read more...

Many software projects use online tools like GitLab, GitHub, Jira, and Gerrit for collaboration between developers. There, they discuss code, review features, and check whether the automated tests passed.

However, the impact of a merge on code maintainability is hard to judge in such tools, because a simple code diff provides little basis for such decisions. Some newly introduced maintainability problems (such as new architecture violations or copy-pasted code) are impossible to spot when seeing only the changed code.

In this blog post, I illustrate how Teamscale results can easily be integrated into existing online collaboration tools. This makes existing code-review processes more thorough and efficient.
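As a small illustration of one possible integration path (a sketch, not the mechanism described in the full post): analysis findings could be posted as a comment on a GitLab merge request via GitLab's REST notes endpoint. The server URL, project ID, merge-request IID, token, and finding texts below are all hypothetical placeholders.

```python
# Sketch: post analysis findings as a GitLab merge-request comment.
# GitLab REST endpoint: POST /api/v4/projects/:id/merge_requests/:iid/notes
# All identifiers (server, project id, MR iid, token, findings) are placeholders.
import json
import urllib.request


def build_mr_comment_request(base_url, project_id, mr_iid, token, findings):
    """Build (but do not send) the HTTP request for a review comment."""
    body = "**Analysis findings for this merge request**\n" + "\n".join(
        f"- {finding}" for finding in findings
    )
    url = f"{base_url}/api/v4/projects/{project_id}/merge_requests/{mr_iid}/notes"
    data = json.dumps({"body": body}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )


req = build_mr_comment_request(
    "https://gitlab.example.com", 42, 7, "<token>",
    ["New code clone in Foo.java", "Architecture violation in Bar.java"],
)
print(req.full_url)  # the endpoint the comment would be sent to
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would then attach the findings directly to the review conversation, so reviewers see them next to the diff.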

Read more...

CQSE takes part in the conference »The Architecture Gathering 2018« - you are cordially invited to discuss architecture and software quality, watch a Teamscale demo or have a chat.

Read more...

As more and more software applications are operated in the cloud, stakeholders of applications originally developed for other platforms wonder how they can make their applications cloud-ready.

This article describes how we answer this question during a software audit by analyzing cloud smells at the code level and cloud requirements at the architecture and infrastructure levels.

Read more...