Jakob Rott

Since this post links to video recordings in German, it is written in German, too.

Because many people are currently working from home due to COVID-19 and, among other things, conferences are not taking place, we have compiled five videos of conference talks here.

The videos of the talks can be found directly in the respective post for each talk:

Read more...

Dr. Sven Amann

Our Test Gap analysis and Test Impact analysis automatically monitor the execution of your application to determine which code is covered by tests.

As a result, they are objective with respect to what has been tested and, more importantly, what hasn’t. No personal judgement or gut feeling involved.

However, when we first set up the analyses with our customers, we often find that the measurements differ (significantly!) from their expectations. Often, this is because other coverage tools report different coverage numbers.

This post explores causes for such differences.
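To give a flavor of one frequent cause: tools may measure coverage at different granularities, so the very same test run yields different numbers. The following toy sketch (all method names and figures are invented for illustration, not taken from the post) contrasts statement coverage with method coverage:

```python
# Hypothetical illustration: the same test run produces different "coverage"
# numbers depending on the granularity a tool measures.

# For each method: (total statements, statements executed by the test run)
execution_profile = {
    "parse_input": (10, 10),   # fully executed
    "validate": (20, 1),       # barely touched
    "render_report": (15, 0),  # never executed
}

total_stmts = sum(total for total, _ in execution_profile.values())
hit_stmts = sum(hit for _, hit in execution_profile.values())
statement_coverage = hit_stmts / total_stmts

# Method coverage counts a method as covered if any statement in it ran.
covered_methods = sum(1 for _, hit in execution_profile.values() if hit > 0)
method_coverage = covered_methods / len(execution_profile)

print(f"statement coverage: {statement_coverage:.0%}")  # 24%
print(f"method coverage:    {method_coverage:.0%}")     # 67%
```

Both tools would be "right" here; they simply answer different questions about the same execution.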

Read more...

We are happy to announce that Springer released "The Future of Software Quality Assurance", to which Elmar and I contributed a chapter on Change-Driven Testing.

The entire book, published to mark the 15th anniversary of the International Software Quality Institute (iSQI), is Open Access and may be downloaded from Springer’s website (in English).

You may also download only our chapter in English or the German translation.

Read more...

Dr. Andreas Göb

As a consultant, I often talk to customers who have large amounts of custom ABAP code in their SAP systems and spend correspondingly large effort on testing all of it over and over again, because it is hard to know exactly what to test after changing certain parts.

Since costs matter a lot these days, many of these customers are looking for ways to spend their test budget more efficiently. One way to do so is to use tools that spot the areas where testing is more likely to find bugs than elsewhere. In this post, I will compare two such tools, namely SAP’s Business Process Change Analyzer (BPCA) and CQSE’s Teamscale, which offers Test Gap Analysis (TGA) and will provide features for Test Impact Analysis (TIA) in the future.

Read more...

Many software projects use online tools like GitLab, GitHub, Jira, and Gerrit for collaboration between developers. There, they discuss code, review features, and check whether the automated tests passed.

However, the impact of a merge on code maintainability is not easy to judge in such tools, because it is hard to make sound decisions based on a simple code diff. Some newly introduced maintainability problems (such as architecture violations or copy-pasted code) are impossible to spot when looking only at the changed code.

In this blog post, I illustrate how Teamscale results can easily be integrated into existing online collaboration tools. This helps make existing code-review processes more thorough and efficient.
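To give an idea of the mechanics behind such an integration (the post shows the actual Teamscale integration; this is only a generic sketch): most of these platforms expose a REST API through which analysis findings can be posted as merge-request comments. For GitLab, the notes endpoint serves this purpose. Server URL, project ID, token, and the finding text below are placeholders:

```python
# Minimal sketch: push an analysis finding as a comment on a GitLab merge
# request via the REST API's notes endpoint. This is NOT Teamscale's actual
# integration code; all concrete values are placeholders.
import requests

GITLAB = "https://gitlab.example.com/api/v4"
PROJECT_ID = 42          # hypothetical project
MR_IID = 7               # hypothetical merge request
TOKEN = "glpat-..."      # a personal access token with "api" scope

finding = "New code duplication introduced in src/report.py (lines 10-35)."

response = requests.post(
    f"{GITLAB}/projects/{PROJECT_ID}/merge_requests/{MR_IID}/notes",
    headers={"PRIVATE-TOKEN": TOKEN},
    data={"body": f"Analysis finding: {finding}"},
)
response.raise_for_status()
```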

Read more...

Dr. Tobias Roehm


CQSE takes part in the conference »The Architecture Gathering 2018«. You are cordially invited to talk shop about architecture and software quality, watch a Teamscale demo, or simply have a chat.


Read more...

As more and more software applications are operated in the cloud, stakeholders of applications originally developed for another platform wonder how they can make their application cloud-ready.

This article describes how we answer this question during a software audit by analyzing cloud smells at the code level and cloud requirements at the architecture and infrastructure level.
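As a rough illustration of what a code-level cloud smell check can look like (the patterns below are simplistic examples, not the checks we actually run in an audit): code that writes state to the local file system is a classic smell, since cloud instances are ephemeral and local state is lost when they are replaced:

```python
# Toy scanner that flags Java source lines touching local file-system state.
# Patterns and paths are illustrative placeholders.
import re
from pathlib import Path

LOCAL_STATE_PATTERNS = [
    re.compile(r"new\s+FileOutputStream"),            # writes a local file
    re.compile(r"new\s+FileWriter"),
    re.compile(r'System\.getProperty\("user\.home"\)'),  # relies on a local home dir
]

def find_cloud_smells(root: Path):
    for src in root.rglob("*.java"):
        for lineno, line in enumerate(src.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in LOCAL_STATE_PATTERNS):
                yield src, lineno, line.strip()

for path, lineno, line in find_cloud_smells(Path("my_app/src")):  # hypothetical path
    print(f"{path}:{lineno}: {line}")
```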


Read more...

Andi Scharfstein

If you are using GitLab CI as your build server, you might be familiar with the [skip ci] syntax: including this tag in your commit message skips the build entirely.

However, there are scenarios in which a build is still required but certain stages can be skipped. Let’s say you are preparing a new release, which requires a changelog entry.

In this post, I explain how to add a [skip tests] command for commit messages.
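As a taste of the approach, a `rules:` entry in `.gitlab-ci.yml` can match the commit message and suppress the test job. This is one possible sketch, not necessarily the exact solution from the post; the script name is a placeholder:

```yaml
# Possible .gitlab-ci.yml fragment: skip the test job when the commit
# message contains "[skip tests]", while the rest of the pipeline still runs.
test:
  stage: test
  script:
    - ./run_tests.sh   # placeholder for your actual test command
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip tests\]/'
      when: never      # do not create this job for such commits
    - when: on_success # otherwise run as usual
```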


Read more...

Dr. Benjamin Hummel

ConQAT was designed as a toolkit for rapid development and execution of software quality analyses. We started it as a vehicle for performing research on automated software quality analysis back in 2005 at TUM, and it was also one of the cornerstones of CQSE when we started the company in 2009.

Read more...

Dr. Daniela Steidl

In many software development projects, code metrics are used to grasp the concept of technical software quality, put it into numbers, and, hence, make it measurable and transparent. While installing a tool and receiving a set of numeric values is quite simple, deriving useful quality-improving actions from them is not.

For us at CQSE, it all starts first and foremost with defining the analysis scope—something we call the art of code discrimination. Whenever we use any sort of metric to gain insights about a software system, we devote significant resources to get this right. Only a cleanly defined analysis scope will allow you to get undistorted metric results.

It sounds like a trivial thing to know and to do. Yet, it is so often omitted in practice…
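To make the idea concrete, here is a minimal sketch of scope definition: exclude generated and third-party code before computing any metric. The exclude patterns and the crude SLOC metric are illustrative stand-ins, not our actual tooling:

```python
# Sketch of "code discrimination": restrict the analysis scope, then measure.
from pathlib import Path

# Path fragments that take a file out of scope. Typical examples
# (generated code, bundled libraries), not CQSE's actual rules.
EXCLUDED_FRAGMENTS = ("/generated/", "/third_party/", "/vendor/")

def in_scope(path: Path) -> bool:
    return not any(frag in path.as_posix() for frag in EXCLUDED_FRAGMENTS)

def sloc(path: Path) -> int:
    """Crude source-lines-of-code count: non-blank lines."""
    return sum(1 for line in path.read_text(errors="ignore").splitlines() if line.strip())

repo = Path("my_project")  # hypothetical repository checkout
in_files = [p for p in repo.rglob("*.py") if in_scope(p)]
out_files = [p for p in repo.rglob("*.py") if not in_scope(p)]
print(f"in scope: {len(in_files)} files, {sum(map(sloc, in_files))} SLOC")
print(f"excluded: {len(out_files)} files, {sum(map(sloc, out_files))} SLOC")
```

Excluded code still exists, of course; the point is that it no longer distorts the metric results you act on.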

Read more...

Interested in our blog? Subscribe!

Get a short notification when we blog about software quality, speak at conferences, or publish our CQSE Spotlight.

By submitting your data you confirm that you agree to our privacy policy.