… is a software quality consultant at CQSE GmbH. She studied computer science at the Technische Universität München, the Georgia Institute of Technology (USA), and the University of Illinois (USA). As the first graduate of CQSE's doctoral program, she received her doctorate from the Technische Universität München.
In many software development projects, code metrics are used to grasp the concept of technical software quality, put it into numbers, and, hence, make it measurable and transparent. While installing a tool and receiving a set of numeric values is quite simple, deriving useful quality-improving actions from it is not. For us at CQSE, it all starts first and foremost with defining the analysis scope—something we call the art of code discrimination. Whenever we use any sort of metric to gain insights about a software system, we devote significant resources to getting this right. Only a cleanly defined analysis scope will give you undistorted metric results. This sounds trivial to know and to do, yet it is omitted in practice surprisingly often.
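As a rough illustration of what scope definition means in practice, scoping often boils down to include/exclude filters that drop generated code and test code before any metric is computed. The paths and patterns below are hypothetical examples, not CQSE's actual tooling or configuration:

```python
import fnmatch

# Hypothetical scope definition: only hand-written production code counts.
INCLUDE = ["src/**/*.java"]
EXCLUDE = ["**/generated/**", "**/test/**", "**/*Test.java"]

def in_scope(path):
    """Return True if the file should be part of the analysis scope."""
    included = any(fnmatch.fnmatch(path, pat) for pat in INCLUDE)
    excluded = any(fnmatch.fnmatch(path, pat) for pat in EXCLUDE)
    return included and not excluded

files = [
    "src/main/Billing.java",       # production code: in scope
    "src/generated/Parser.java",   # generated: excluded
    "src/test/BillingTest.java",   # test code: excluded
]
print([f for f in files if in_scope(f)])  # → ['src/main/Billing.java']
```

Without the exclude list, a clone or size metric would happily count the generated parser and the test suite, distorting every number the tool reports.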
It’s not that normal pressure wouldn’t be enough. Your next release is already delayed. There are numerous bugs that still need to be fixed. Your customers are getting impatient. And here comes your manager, telling you that CQSE will perform a code and architecture audit on your system. You might very well think »What the hell…«. Yes, we know. And believe us, you are not alone. No development team on earth has free time to spend, especially not on an audit that seems scarier than useful. But the audit sounds worse than it actually is. That’s why this blog post aims to take away some of your fears and clarify what is, in fact, ahead of you (and what is not). And believe it or not, you and your team can actually benefit from the audit, too.
Often, time pressure forces you to quickly write dirty code. You do not choose the most elegant solution. But at least the change is done and it works. You can always clean it up next time, right? Let me tell you: No, you won’t.
Why is that? Because the probability that you will change the code again is actually rather small. (Needless to say, you will not have much more time at hand next time, either.) With our tool »Teamscale«, we studied how software systems evolve and how developers change their code. In particular, we examined how often a Java method is changed over its history. It turns out that most methods are changed only about two to three times on average. Two or three times? In a history of three, four, five, even up to 15 years? You might wonder how this can be true.
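The kind of measurement behind this finding can be sketched in miniature. The toy commit data below is made up for illustration; the actual study tracks individual Java methods through their history (including renames and refactorings), which is considerably more involved than counting file touches:

```python
from collections import Counter

# Hypothetical commit history: each commit lists the files it touched.
commits = [
    ["Billing.java", "Invoice.java"],
    ["Billing.java"],
    ["Report.java"],
    ["Invoice.java", "Report.java"],
    ["Billing.java"],
]

# Count how often each file was changed across the whole history.
changes = Counter(f for files in commits for f in files)
avg = sum(changes.values()) / len(changes)

print(dict(changes))      # change count per file
print(round(avg, 2))      # average number of changes: 2.33
```

Even in this tiny example the average lands between two and three changes per entity, which is the order of magnitude the study observed for real methods over years of history.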
Posted on 27.08.2014 by Dr. Daniela Steidl
The recent blog post ‘Improving Software Quality’ by my colleague Martin showed how we can improve software quality beyond just installing tools: We believe that a continuous quality control process is necessary to have a long-term, measurable impact. However, does this quality control actually have a long-term impact in practice? Or do feature and change requests still receive higher priority than cleaning up code that was written in a hurry and under time pressure? With our long-term partner Munich Re, we gathered enough data to show in a large-scale industry case study that quality control can improve the code quality even while the system is still under active development. Our results were accepted for publication as a conference paper at the International Conference on Software Maintenance and Evolution (ICSME).
Teamscale is our tool for continuous software quality control. It provides feedback about quality problems in real-time, allowing you to keep your software free of technical debt. To give you a better idea of how certain core features of Teamscale work in practice, we created a couple of short video clips that demonstrate Teamscale in action.
How many of you know the feeling when an incoming change request forces you to dig into code you never wanted to dig into? And how many of you have drawn the conclusion while reading the code: »I don’t get what’s going on.« With the immediate follow-up question: »Who the hell has written this code?«
As we all have probably experienced, software systems evolve over time, and without effective countermeasures their quality gradually decays, making the system hard to understand and to maintain. With this blog post, we provide a useful way to start preventing further decay of a grown software system by cleaning up the code.
As quality consultants, we mainly work together with our customers, but we are also actively involved in current research. In this post, we summarize our paper »Incremental Origin Analysis of Source Code Files« that was recently accepted for publication at MSR, the Working Conference on Mining Software Repositories (May 31 to June 1, 2014, in Hyderabad, India).
I guess most of you have heard about it—and many of you use it on a daily basis: The version control system.
Feature-based Detection of Bugs in Clones.
Talk at the 7th ICSE International Workshop on Software Clones (IWSC’13), 2013.
Quality Analysis of Source Code Comments.
Talk at the 21st IEEE International Conference on Program Comprehension (ICPC’13), 2013.
Using Network Analysis for Recommendation of Central Software Classes.
Talk at the 19th Working Conference on Reverse Engineering (WCRE’12), 2012.
Proceedings of the 15th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM’15), 2015.
2014 IEEE International Conference on Software Maintenance and Evolution (ICSME’14), 2014.
Softwaretechnik-Trends, Vol. 34, 2014.
Proceedings of the 11th Working Conference on Mining Software Repositories (MSR’14), 2014.
Proceedings of the 22nd International Conference on Program Comprehension (ICPC’14), 2014.
Softwaretechnik-Trends, Vol. 34, 2014.
Proceedings of the 36th ACM/IEEE International Conference on Software Engineering (ICSE’14), 2014.
Proceedings of the 7th ICSE International Workshop on Software Clones (IWSC’13), 2013.
Proceedings of the 21st IEEE International Conference on Program Comprehension (ICPC’13), 2013.
Master’s Thesis. Technische Universität München, 2012.
Proceedings of the 19th Working Conference on Reverse Engineering (WCRE’12), 2012.
Proceedings of the 5th ICSE International Workshop on Software Clones (IWSC’11), 2011.