… is a consultant for software quality at CQSE GmbH. He studied computer science at the Technische Universität Darmstadt and received a PhD in software engineering from the Technische Universität München.
Posted on 01/04/2018 by Dr. Andreas Göb
In recent years, version control systems have started to consolidate. While there were many proprietary systems in the past, the world now seems to agree that distributed systems such as Git are the way to go in most cases. At CQSE, we switched our own development infrastructure from SVN to Git in 2016. In this post, I will outline some important lessons I learned from migrating several code bases from proprietary solutions to Git over the last years.
Posted on 03/15/2017 by Dr. Andreas Göb
Throughout this blog, there have been several posts explaining how you can use Teamscale to keep your code base clean and prevent quality decay. Since we have a lot of enterprise customers, we are often confronted with the question of whether our approach works only with modern languages and infrastructure used by small software vendors and start-ups, or whether we can also offer analyses for mature technologies mainly used in the enterprise.
Posted on 06/22/2016 by Dr. Andreas Göb
When talking to customers about improving the quality of their code, one question that always comes up is where to start. And, as always in the software industry, the answer is »It depends«. We have already covered this topic on this blog in the past (e.g., here, here). This time, I would like to add another dimension to the question, namely the actual usage of code in production.
Since this post accompanies a talk in German, it is written in German, too.
Posted on 10/15/2015 by Dr. Andreas Göb
As we all know, programmers spend a lot of their time reading code. The paper Concise and Consistent Naming shows that approximately 70% of a system’s source code consists of identifiers, i.e., names of procedures, methods, variables, constants, and so on. The paper concludes that identifiers should be chosen with care. In this post, I approach the topic of identifiers from a more technical perspective and illustrate some basic things both programmers and tool vendors can easily stumble upon.
Posted on 04/29/2015 by Dr. Andreas Göb
As my colleague Fabian explained a few weeks ago, a combination of change detection and execution logging can substantially increase transparency regarding which recent changes of a software system have actually been covered by the testing process. I will not repeat all the details of the Test Gap Analysis approach here, but instead just summarize the core idea: Untested new or changed code is much more likely to contain bugs than other parts of a software system. Therefore, it makes sense to use information about code changes and code execution during testing to identify those changed but untested areas.
We have heard several times from customers that they like the idea, but are not sure about its applicability in their specific project. In the majority of these cases, the argument was that the project mainly deals with generated artifacts rather
Test-Gap-Analyse - Erfahrungen aus drei Jahren Praxiseinsatz.
Talk at German Testing Day 2016, 2016.
Test Gap Analysis: Risk-based Testing of ABAP Applications.
Talk at SAP Inside Track Munich 2015, 2015.
Test Accompanying Calculation of Test Gaps for Java Applications.
Guided Research. Technische Universität München, 2017.
Obtaining Coverage per Test Case.
Master’s Thesis. Technische Universität München, 2017.
Operationalised product quality models and assessment: The Quamoco approach.
Information and Software Technology, Vol. 62, 2015.
Bachelor’s Thesis. Technische Universität München, 2015.
A Meta Model for Software Architecture Conformance and Quality Assessment.
Electronic Communications of the ECEASST, Vol. 60, 2013.
Early Validation of Software Quality Models with respect to Minimality and Completeness: An Empirical Analysis.
DASMA Metrik Kongress (Metrikon’13), 2013.
A model for the design of interactive systems based on activity theory.
Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (CSCW’12), 2012.
Software Quality Models in Practice.
Report TUM-I129. Technische Universität München, 2012.
The Quamoco Product Quality Modelling and Assessment Approach.
Proceedings of the 34th ACM/IEEE International Conference on Software Engineering (ICSE’12), 2012.
The Quamoco Quality Meta-Model.
Report TUM-I128. Technische Universität München, 2012.
A software quality model for SOA.
8th International Workshop on Software Quality (WoSQ’11), 2011.
A Unifying Model for Software Quality.
8th Int. Workshop on Software Quality (WoSQ’11), 2011.
Reducing User Perceived Latency with a Proactive Prefetching Middleware for Mobile SOA Access.
International Journal of Web Services Research, Vol. 8, 2011.
Categorization of Software Quality Patterns.
3. Workshop zur Software-Qualitätsmodellierung und -bewertung (SQMB’10), 2010.
Reducing User Perceived Latency in Mobile Processes.
2010 IEEE International Conference on Web Services, 2010.
What is Different in Quality Management for SOA?
14th IEEE International Enterprise Distributed Object Computing Conference, 2010.
Quality models in practice: A preliminary analysis.
3rd International Symposium on Empirical Software Engineering and Measurement (ESEM’09), Vol. 0, 2009.
Reducing User Perceived Latency with a Middleware for Mobile SOA Access.
2009 IEEE International Conference on Web Services, 2009.