Audits & Assessments

We provide a comprehensive and neutral assessment of the most important quality criteria of your software.

Quality Improvement


Teamscale helps your developers reach your code quality goals by revealing new quality deficits immediately.

  • Quality analysis in real time
  • Individual dashboards
  • Integration into the development environment

Learn more about Teamscale...


ConQAT is our customizable analysis engine for solving your specific problem.

  • Freely configurable
  • Easy to extend
  • Available open source

Learn more about ConQAT...

It Wasn't Me: Baselining in Teamscale

Posted on 05/13/2015 by Dr. Nils Göde

Almost every long-lived software system has accumulated an abundance of quality deficits over time. Not only is it impossible to remove all findings; I also do not recommend doing so. You may very well argue against removing a legacy finding by saying »It has worked all the time« or »It wasn't me who introduced the problem«. But then, on the other hand, you should make sure that you don't introduce any new problems. To check this, you need a tool that can reliably differentiate between legacy findings and findings that have been introduced recently. This has to work even if directories or files are renamed, code is moved between files, and findings change their appearance in the code. Making the distinction between legacy and recent findings is one of the many strengths of Teamscale.
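The core of baselining can be illustrated as a set difference: only findings absent at the baseline are reported as new. This is a minimal sketch, not Teamscale's actual implementation; the `Finding` type and the content-based keys are hypothetical, and real baselining additionally has to track renames and moved code.

```python
# Simplified illustration of baselining quality findings.
# A content-based key (rather than file/line) stands in for the tracking
# that keeps a legacy finding matched even when its code moves.
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    rule: str       # e.g. "long-method" (hypothetical rule name)
    key: str        # content-based identity of the affected code


def new_findings(baseline: set, current: set) -> set:
    """Findings present now but not at the baseline -- these are reported."""
    return current - baseline


baseline = {Finding("long-method", "hash:a1"), Finding("clone", "hash:b2")}
current = {Finding("long-method", "hash:a1"), Finding("unused-var", "hash:c3")}

# Only the newly introduced finding is flagged; the legacy one is tolerated,
# and the fixed clone simply disappears.
assert new_findings(baseline, current) == {Finding("unused-var", "hash:c3")}
```

The interesting engineering is entirely in making the identity key stable under refactoring; with naive file/line positions, every rename would turn legacy findings into "new" ones.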


Testing Changes in SAP BW Applications

Posted on 04/29/2015 by Dr. Andreas Göb

As my colleague Fabian explained a few weeks ago, combining change detection with execution logging can substantially increase transparency about which recent changes of a software system have actually been covered by the testing process. I will not repeat all the details of the Test Gap Analysis approach here, but instead just summarize the core idea: untested new or changed code is much more likely to contain bugs than other parts of a software system. Therefore, it makes sense to use information about code changes and code execution during testing to identify those changed but untested areas.
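The core idea reduces to a set operation: changed code minus executed code yields the test gaps. The following is a hedged sketch under simplifying assumptions; the method names are invented, and a real analysis works on versioned method-level data rather than flat name sets.

```python
# Sketch of the Test Gap Analysis idea: methods changed since the last
# release that were never executed during testing are the "test gaps".
changed = {"Order.create", "Order.cancel", "Invoice.render"}    # from change detection
executed = {"Order.create", "Customer.load"}                    # from execution logging

# Changed but untested code carries the highest bug risk.
test_gaps = changed - executed
assert test_gaps == {"Order.cancel", "Invoice.render"}
```

Everything beyond this one-liner is data collection: reliably attributing changes to methods across refactorings, and logging execution with acceptable overhead.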

We have heard several times from customers that they like the idea but are unsure about its applicability to their specific project. In the majority of these cases, the argument was that the project mainly deals with generated artifacts rather than

