Team Member

Fabian Streitel


… is a consultant for continuous quality control at CQSE GmbH. He studied computer science and obtained a Master of Science degree at the Technische Universität München.

  • +49 159 04046270
  • streitel@cqse.eu
  • @karottenreibe

Blog Posts


If you are interested in improving the quality, and specifically the maintainability, of the software you produce, then you have very likely asked yourself one of these questions:

  • How well is my system doing in comparison to others?
  • Which of our projects are not doing so well?
  • Where do I have a serious problem and need to act right away?
  • How can I make sure that my system is improving over time?

And most likely, you will also have asked yourself: Is there not one single number that can answer all of these questions for me? That is, can we take all the complex quality measurements that are possible (clone coverage, test coverage, structure metrics, open field defects, pending reviews, …) and aggregate them into one KPI?
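To make the question concrete, here is a minimal sketch of the naive approach such a KPI implies: map every metric onto a common 0–1 scale and take a weighted average. All metric names, values, thresholds, and weights below are invented for illustration; nothing here is an established standard.

    # Hypothetical KPI aggregation: every metric is normalized onto a
    # common 0..1 scale (0 = worst, 1 = best) so that a weighted average
    # of the scores is meaningful. All numbers are made up.
    METRICS = {
        # name: (value, worst acceptable, best possible, weight)
        "clone_coverage":  (0.12, 0.30, 0.00, 2.0),  # lower is better
        "test_coverage":   (0.65, 0.00, 1.00, 3.0),  # higher is better
        "open_defects":    (42,   100,  0,    1.0),  # lower is better
        "pending_reviews": (7,    50,   0,    1.0),  # lower is better
    }

    def normalize(value, worst, best):
        """Map a raw metric value linearly onto 0 (worst) .. 1 (best)."""
        score = (value - worst) / (best - worst)
        return max(0.0, min(1.0, score))

    def quality_kpi(metrics):
        """Weighted average of the normalized metric scores."""
        total_weight = sum(w for (_, _, _, w) in metrics.values())
        return sum(
            normalize(v, worst, best) * w
            for (v, worst, best, w) in metrics.values()
        ) / total_weight

    print(f"Quality KPI: {quality_kpi(METRICS):.2f}")  # e.g. 0.66

Whether one such number can really carry that much meaning is exactly the question the post explores.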

Read more...


This piece contains some lessons learned from optimizing the performance of our Git hooks. The information here is certainly not new, but I haven’t found it aggregated and explained in a single place yet.

We recently switched our main code repository from SVN to Git, which brought many challenges and improvements to our software development process. One feature that Git offers is so-called hooks. These are small programs or scripts that run before or after a commit, when pushing to a repository, and at other times. They may write to the console during these Git operations (making the output look like it came from Git) and may even abort them, e.g. if the user is trying to use Git in a way you don’t want them to.
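As a concrete illustration of that mechanism, here is a minimal pre-commit hook written in Python. It is a hypothetical example, not one of the hooks discussed in the post: saved as .git/hooks/pre-commit and made executable, it aborts any commit whose staged changes add a »DO NOT COMMIT« marker.

    #!/usr/bin/env python3
    # Hypothetical pre-commit hook: reject commits that add lines
    # containing the marker "DO NOT COMMIT".
    import subprocess
    import sys

    # Ask Git for the staged changes; added lines start with a single "+".
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in diff.splitlines():
        if (line.startswith("+") and not line.startswith("+++")
                and "DO NOT COMMIT" in line):
            # Anything printed here appears in the committer's console,
            # as if it came from Git itself.
            print("error: staged changes contain a 'DO NOT COMMIT' marker")
            sys.exit(1)  # a non-zero exit status aborts the commit

    sys.exit(0)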

Directly after the migration to Git, we had several recurring problems that we addressed with these commit hooks. In this post, I’ll summarize a few of the lessons we learned along the way.

Read more...


Anyone writing code knows that famous sentence: »I’ll clean that up later«. We have all heard it. We have all uttered it. And, as my colleague Daniela already remarked: No, you won’t!

Most likely, you’ll forget about that long method you wanted to shorten later. Or that clone you think can be removed. Or that null pointer finding you’re sure to run into once your code goes into production. But there’s no time right now! So you leave it as is and turn to more pressing matters. And the quality problems stay—forgotten.

Keeping track of these issues is a challenge. You want them to be visible to your entire team—they’ll just gather dust in your own personal ToDo list. But you don’t want to clutter your team’s issue tracker with every single finding either—some might just be too small to even be worth mentioning in your sprint planning session. Your quality engineer might want to have an overview of all open findings.

Read more...


Who doesn’t love metrics? They measure your progress in achieving a goal. They can help you objectively answer important questions. They help you understand complex situations or problems.

However, there are some caveats: it’s easy to pile on a ton of metrics and drown in information. Selecting the relevant ones is important. Some metrics are also hard to interpret when you see only the raw data. Only with the right visualization will you reap their full benefit and gain an intuitive understanding of your problem.

Read more...


When you say »software test«, most people will immediately have a mental picture of automated unit tests, continuous integration, fancy mutation testing tools, etc. But in reality, a large portion of testing activities is still manual, i.e., someone clicking through the UI, entering values, hitting buttons and comparing on-screen results with an Excel sheet. Such tests are sometimes exploratory (i.e., the tester performs unscripted actions in the UI and reports any bugs they encounter along the way) and sometimes structured (i.e., someone writes a natural-language document with step-by-step instructions for the tester).

In our long-running research cooperation with TU München and our partner HEJF GbR, we have encountered large regression test suites (i.e., hundreds of test cases) in many different companies that were built this way. Usually, some test management tool is used to store these test descriptions and monitor their execution.

Read more...


Testing is an integral part of a software product’s life-cycle. Some people prefer executing their tests continuously while they develop; others have a separate testing phase before each release. The goal, however, is always the same: finding bugs. The more, the better.

Unfortunately, the time available for testing and for writing new tests is limited. At some point, you have to get on with development, ship your software and be confident it contains no serious flaws. Often, more tests exist than can be run in the available time span. This is especially true if your tests are executed manually, as is the case for many of our customers. Then the question becomes: which tests do I select to find the most bugs before the release?

Research has shown that untested changes are 5 times more likely to contain bugs than tested ones (“Did We Test Our Changes?”, Eder et al., 2013).
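A minimal sketch of what acting on that finding could look like, with invented test and method names: rank the test suite by how many changed methods each test is known to cover, so the most promising tests run first. This illustrates the general idea only, not the selection technique discussed in the post.

    # Hypothetical change-based test prioritization. We assume we know,
    # e.g. from an earlier instrumented run, which methods each test
    # executes, and which methods changed since the last release.
    changed_methods = {"Order.total", "Cart.add", "Invoice.render"}

    coverage = {  # test name -> methods it is known to execute
        "test_checkout":  {"Order.total", "Cart.add", "Payment.charge"},
        "test_cart":      {"Cart.add"},
        "test_reporting": {"Report.build"},
        "test_invoicing": {"Invoice.render", "Order.total"},
    }

    def prioritize(coverage, changed):
        """Order tests by the number of changed methods they cover,
        descending; tests covering no changes come last."""
        return sorted(
            coverage,
            key=lambda test: len(coverage[test] & changed),
            reverse=True,
        )

    for test in prioritize(coverage, changed_methods):
        hits = coverage[test] & changed_methods
        print(f"{test}: covers {len(hits)} changed method(s)")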

Read more...


One of the services we offer is called TestGap Analysis. It helps you identify changes in your code that have not been covered by your tests since the last release. I have been working on TestGap for the last few months and never had to worry about performance problems. Analyzing a code base of over 30,000 classes took about 15 minutes, which is what you’d expect for a complex analysis. Last week, though, we tried executing it on a new project (> 60,000 classes) and after the analysis had run for 2 hours with no end in sight, we had to face the facts: we had a performance problem.
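Conceptually, the core of such an analysis is a set difference between changed and tested code, as in the toy sketch below (all method names invented); the real implementation is of course far more involved per class, which is where the running time goes.

    # Toy illustration of the idea behind a test-gap analysis: a "test
    # gap" is code that changed since the last release but was never
    # executed by any test. All names are made up.
    changed_since_release = {"Order.total", "Cart.add", "Invoice.render"}
    covered_by_tests = {"Cart.add", "Payment.charge"}

    test_gaps = changed_since_release - covered_by_tests
    print(sorted(test_gaps))  # ['Invoice.render', 'Order.total']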

Read more...


Talks


Fabian Streitel: Dead Code Detection. Talk at the 16th Workshop Software Reengineering and Evolution (WSRE), 2014.