Dr. Corneliu Popeea

… is a software quality consultant at CQSE GmbH. He studied computer science at the Politehnica University of Bucharest and earned his PhD on static analysis at the National University of Singapore.

  • +49 1590 4122943

Blog Posts

A code quality control process should not be based solely on automated tools; rather, it should combine tool-based analysis with a certain amount of human interaction. There are endless variations of how a code quality control process can be implemented. In this blog post I’d like to give a few insights on how regular code assessments, called here monthly assessments, can create a feedback loop that improves the code quality control process.


In this blog post I’d like to give a few insights on how to inspect the maintainability of code written in Fortran, using a popular open-source library as an example.


At CQSE, we use Teamscale for code quality control, both for our customers’ code and our own. The spectrum of programming languages that we support is large, including C#, ABAP, Java, JavaScript and Matlab.

In this post, I give an overview of establishing a code quality control process for a Matlab codebase, as we did for one of our customers. Some of the code examples are taken from three popular open-source applications listed on the Matlab Central webpage: export_fig, T-MATS and CNN-for-Image-Retrieval.


At CQSE, we use Teamscale and static analysis to assess technology suitability:

  • Get insights about the relevant third-party libraries used in a codebase.
  • Inspect code patterns corresponding to various third-party libraries.
  • Assess ease of migration away from third-party libraries that are not suitable.
  • Assist with refactoring code and avoid uses of deprecated third-party libraries.

In this post, I show the steps needed to configure Teamscale for such an assessment. This is illustrated using three open-source projects: FindBugs, Google Error Prone and Microsoft StyleCop. First, I use the Teamscale architecture editor to specify which third-party library dependencies should be monitored. Then, the architecture perspective shows the static analysis results and allows quick inspection of dependencies on third-party libraries.
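The first of the steps above, getting an inventory of which third-party libraries a codebase actually uses, can be sketched in a few lines of Java. This is not how Teamscale works internally; it is a minimal illustration that maps import statements to library names via package prefixes (the prefixes and sample sources below are illustrative):

```java
import java.util.*;
import java.util.regex.*;

// Minimal sketch of a third-party dependency inventory: count import
// statements per known library prefix. Not Teamscale's implementation.
public class DependencyInventory {

    static final Pattern IMPORT =
            Pattern.compile("^import\\s+([\\w.]+);", Pattern.MULTILINE);

    // Map an imported type to a known third-party library, if any.
    // The prefix table is illustrative.
    static Optional<String> libraryOf(String imported) {
        Map<String, String> prefixes = Map.of(
                "edu.umd.cs.findbugs", "FindBugs",
                "com.google.errorprone", "Error Prone");
        return prefixes.entrySet().stream()
                .filter(e -> imported.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
    }

    public static Map<String, Integer> inventory(List<String> sources) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String source : sources) {
            Matcher m = IMPORT.matcher(source);
            while (m.find()) {
                libraryOf(m.group(1))
                        .ifPresent(lib -> counts.merge(lib, 1, Integer::sum));
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> sources = List.of(
                "import com.google.errorprone.annotations.Immutable;\nclass A {}",
                "import edu.umd.cs.findbugs.annotations.NonNull;\nclass B {}",
                "import com.google.errorprone.BugPattern;\nclass C {}");
        System.out.println(inventory(sources)); // {Error Prone=2, FindBugs=1}
    }
}
```

A real analysis would of course parse the code rather than scan it with regular expressions, and would also resolve fully qualified type references that bypass imports.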


While there is no precise, commonly agreed-on definition of what constitutes a software architecture, it is understood that a software system’s architecture is defined by its decomposition into building blocks and their inter-dependencies. For each pair of components, the architecture defines if and in what way the two components may interact with each other. An architecture conformance analysis evaluates how well the implemented architecture matches the specified architecture. Identifying architecture violations using the conformance analysis is an essential step toward keeping the code base maintainable. Release 1.5 of our tool Teamscale adds features that allow the conformance analysis to be better integrated into the development cycle of a project. This article describes the basic concepts needed to understand architecture editing and conformance analysis as performed by Teamscale.
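The core of a conformance analysis is a comparison between two sets of dependencies: those the architecture specification allows and those observed in the code. The following Java sketch makes that comparison concrete; the component names and the layering rule are illustrative, not a real Teamscale architecture model:

```java
import java.util.*;

// Minimal sketch of architecture conformance checking: every observed
// dependency that the specification does not allow is a violation.
public class ConformanceCheck {

    record Dependency(String from, String to) {
        @Override public String toString() { return from + " -> " + to; }
    }

    static List<Dependency> violations(Set<Dependency> specified,
                                       Set<Dependency> observed) {
        return observed.stream()
                .filter(d -> !specified.contains(d))
                .sorted(Comparator.comparing(Dependency::toString))
                .toList();
    }

    public static void main(String[] args) {
        // Specified architecture: a strict three-tier layering.
        Set<Dependency> specified = Set.of(
                new Dependency("ui", "business"),
                new Dependency("business", "persistence"));
        // Dependencies extracted from the implementation.
        Set<Dependency> observed = Set.of(
                new Dependency("ui", "business"),
                new Dependency("ui", "persistence"), // skips the business layer
                new Dependency("business", "persistence"));
        System.out.println("Violations: " + violations(specified, observed));
        // Violations: [ui -> persistence]
    }
}
```

In practice, the specification works at the level of components (groups of packages or files) rather than individual dependencies, and tolerated violations can be recorded so that only new ones are reported.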


Code quality audits aim to assess the quality of a system’s source code and identify weak points in it. Two areas of such audits that have been discussed in previous posts by my colleagues are the redundancy caused by copy/paste and the anomalies that go undetected unless static analysis tools like FindBugs are used periodically to check the source code for defects. In the following, I will outline a small experiment to determine whether the findings of the static analysis tool FindBugs reside in code blocks that have been copied to other parts of a system’s source code. To illustrate this experiment, I will use a »Big Data« open-source project, namely Apache Hadoop. It is worth mentioning that, with regard to its code quality, Apache Hadoop was in the spotlight of the 2014 report on open-source software quality from our colleagues at Coverity.
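To make the interplay of the two analyses concrete, here is an illustrative (not taken from Hadoop) example of a defect of the kind FindBugs reports, a discarded return value, sitting in a block that has been copy/pasted, so the clone carries the finding with it:

```java
// Illustrative example: String.trim() returns a new string, so discarding
// its result is a defect FindBugs flags (return value ignored). When the
// buggy block is copied elsewhere, the finding is duplicated with the clone.
public class ClonedFinding {

    // Original buggy helper: the trim result is discarded,
    // so the method returns the untrimmed input.
    static String normalizeUser(String name) {
        name.trim();           // FindBugs: return value ignored
        return name;
    }

    // Copy/pasted variant of the same block: the finding travels with it.
    static String normalizeGroup(String name) {
        name.trim();           // same finding, duplicated by the clone
        return name;
    }

    public static void main(String[] args) {
        System.out.println("[" + normalizeUser("  alice  ") + "]");  // [  alice  ]
        System.out.println("[" + normalizeGroup("  admins  ") + "]"); // [  admins  ]
    }
}
```

Fixing the defect in only one copy leaves the other clone broken, which is exactly why cross-referencing clone detection with static analysis findings is interesting.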



Sergey Grebenshchikov, Nuno P. Lopes, Corneliu Popeea, Andrey Rybalchenko:

Synthesizing software verifiers from proof rules.

Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI’12), 2012.

Ashutosh Gupta, Corneliu Popeea, Andrey Rybalchenko:

Threader: A Constraint-Based Verifier for Multi-threaded Programs.

Proceedings of the 23rd International Conference on Computer Aided Verification (CAV’11), 2011.

Wei-Ngan Chin, Florin Craciun, Siau-Cheng Khoo, Corneliu Popeea:

A flow-based approach for variant parametric types.

Proceedings of the 21st Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA’06), 2006.

Wei-Ngan Chin, Siau-Cheng Khoo, Shengchao Qin, Corneliu Popeea, Huu Hai Nguyen:

Verifying safety policies with size properties and alias controls.

Proceedings of the 27th International Conference on Software Engineering (ICSE’05), St. Louis, Missouri, USA, 2005.