As more and more software applications are operated in the cloud, stakeholders of applications originally developed for another platform wonder how they can make their applications cloud-ready.

This article describes how we answer this question during a software audit by analyzing cloud smells at the code level and cloud requirements at the architecture and infrastructure levels.



Andi Scharfstein

If you are using GitLab CI as your build server, you might be familiar with the [skip ci] syntax — it gives you the option to skip a build entirely by including this tag in your commit message.

However, there are scenarios when a build is still required but certain stages can be skipped. Let’s say you are preparing a new release, which requires a changelog entry.

In this post, I explain how to add a [skip tests] command for commit messages.
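The general idea can be sketched in a `.gitlab-ci.yml` fragment; note that the job name, stage, and test script below are placeholders, not necessarily what the full post uses:

```yaml
test:
  stage: test
  script:
    - ./run-tests.sh        # placeholder for your actual test command
  rules:
    # Skip this job when the commit message contains [skip tests] ...
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip tests\]/'
      when: never
    # ... otherwise run it whenever the pipeline reaches this stage.
    - when: on_success
```

With such a rule in place, the rest of the pipeline still runs normally, while the tagged commit skips only the test stage.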



Dr. Benjamin Hummel

ConQAT was designed as a toolkit for the rapid development and execution of software quality analyses. We started it back in 2005 at TUM as a vehicle for performing research on automated software quality analysis, and it was also one of the cornerstones of CQSE when we founded the company in 2009.


Dr. Daniela Steidl

In many software development projects, code metrics are used to grasp the concept of technical software quality, put it into numbers, and, hence, make it measurable and transparent. While installing a tool and receiving a set of numeric values is quite simple, deriving useful quality-improving actions from them is not.

For us at CQSE, it all starts first and foremost with defining the analysis scope—something we call the art of code discrimination. Whenever we use any sort of metric to gain insights about a software system, we devote significant resources to get this right. Only a cleanly defined analysis scope will allow you to get undistorted metric results.

It sounds like a trivial thing to know and to do. Yet it is so often omitted in practice…


Martin Pöhlmann

Over the last few years, Docker has been widely adopted to ease the deployment of applications.

One of the key benefits is bundling all required dependencies of an application in a single image that can be used right away without a huge installation and configuration overhead. Hence, we are using Docker for running our own Teamscale instances, as well as for Teamscale instances at the customer site.

Docker containers are also neat when testing and reviewing new features before they are merged back into master.

A small Docker image is especially beneficial on a local developer machine, because you don’t want to waste your SSD space on gigabytes of Docker image data.

Over the last years, the Teamscale Docker image has become quite large…


A code quality control process must not be based solely on automated tools; rather, it should combine tool-based analysis with a certain amount of human interaction.

There are endless variations of how a code quality control process can be implemented.

In this blog post, I’d like to give a few insights into how regular code assessments, referred to here as monthly assessments, can create a feedback loop that improves the code quality control process.



Dr. Christian Pfaller

Quality control is one of the major services we offer at CQSE. Its aim is to keep a system (almost) free of quality deficits or to achieve continuous improvement with regard to quality.

In most cases we focus on the quality of source code. To keep it simple, I will stick to code quality throughout this post. I will focus on two aspects of this process: Quality tasks and how these relate to quality goals.



Daniela Transiskus

CQSE GmbH is a finalist for the Deutscher Gründerpreis. The jury is impressed by the company’s rapid development.

The analysis-software vendor CQSE GmbH has been nominated for the renowned Deutscher Gründerpreis. The Garching-based company is among the top three finalists in the »Aufsteiger« (rising star) category. With this, the jury ranks CQSE GmbH among the most successful and promising start-ups founded in Germany in recent years.


Dr. Dennis Pagano

Software is made of code. Well, yes, but not exclusively. Software engineering involves working with many other artifacts, such as models, architectures, tickets, build scripts, … and tests.

The goal of Teamscale is to provide meaningful and useful information about all aspects of software engineering. This is what we call »Software Intelligence«.

Consequently, in addition to providing profound insights into code quality, Teamscale performs many other sophisticated analyses, including architecture conformance analysis, analysis of issue tracker data, team evolution analysis, code ownership analysis, and data taint security analysis.


As developers, we can all relate to that sense of accomplishment when a feature is finally done.

You’ve spent a lot of time planning the implementation, ensuring that all cases and any imaginable scenarios are handled, and that the feature functions as intended. You’ve checked your code into the shared code repository and now you’re done! Or are you?

»Is my feature done?« is actually a vague question with no definitive or unique answer. In order to answer it, each team needs to define its own Definition of Done (DoD). The Definition of Done is a checklist of activities or conditions that must be completed before a software product or a development task is considered done. Examples of these activities are: writing code, coding…

