If you are monitoring software quality continuously, the absolute values of the quality measures at a specific point in time are often not the most important thing. What matters more is the direction in which you are going. While there may be short-term decreases in quality, e.g. due to larger refactorings, the general trend should be towards improvement. An effective way to determine this trend is a Delta Analysis.
A Delta Analysis compares two snapshots of the code base and determines how the changes made in that time frame affected the software quality. To do this correctly, a tool has to be able to differentiate between old and new quality deficits (we call them findings). Many existing tools have major limitations in their tracking of findings. For instance, renaming a class or moving a method from one class to another will usually result in all affected findings being reported as new, even though nothing about the code itself has changed.
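The core of such a comparison can be sketched as simple set arithmetic over two snapshots of findings. The `Finding` type and its `fingerprint` field below are illustrative assumptions, not Teamscale's actual data model; the point is that the classification into added, removed, and persisting findings only works if each finding has a stable identity across snapshots.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """A quality deficit, identified by a stable fingerprint (hypothetical model)."""
    fingerprint: str  # identity that survives renames and moves
    message: str


def delta(old: set[Finding], new: set[Finding]) -> dict[str, set[Finding]]:
    """Classify findings by comparing two snapshots of the code base."""
    return {
        "added": new - old,      # introduced in the time frame
        "removed": old - new,    # fixed in the time frame
        "legacy": old & new,     # old findings that persist
    }


before = {Finding("f1", "method too long"), Finding("f2", "code clone")}
after = {Finding("f2", "code clone"), Finding("f3", "nesting too deep")}

result = delta(before, after)
# "f3" was introduced, "f1" was fixed, "f2" is a persisting legacy finding
```

If a tool cannot keep fingerprints stable across a rename, every finding in the renamed file would land in both the "removed" and the "added" set, and the delta would be meaningless.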
Almost every long-living software system has accumulated an abundance of quality deficits over time. Not only is it impossible to remove all findings, I also do not recommend doing so. You may very well argue against removing any legacy finding by saying »It has worked all the time« or »It wasn’t me who introduced the problem«. But then—on the other hand—you should make sure that you don’t introduce any new problems. To check this, you need a tool that can reliably differentiate between legacy findings and findings that have been introduced recently. This has to work even if files or directories are renamed, code is moved between files, or findings change their appearance in the code. Making this distinction between legacy and recent findings is one of the many strengths of Teamscale.
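One way such tracking can survive renames and cosmetic changes is to derive a finding's identity from the code it refers to rather than from its file path and line number. The sketch below illustrates that idea with a hash over whitespace-normalized code; this is a deliberately simplistic stand-in for real origin analysis, and the function and finding-type names are assumptions for illustration.

```python
import hashlib


def fingerprint(finding_type: str, code_snippet: str) -> str:
    """Location-independent identity for a finding (illustrative sketch).

    Hashing the normalized code instead of (path, line) means a file rename,
    a move to another directory, or a re-indentation does not change the
    fingerprint, so a legacy finding is not mistaken for a new one.
    """
    normalized = " ".join(code_snippet.split())  # ignore formatting changes
    return hashlib.sha256(f"{finding_type}:{normalized}".encode()).hexdigest()


# The same overly long method, before and after a rename plus re-indentation:
before = fingerprint("LongMethod", "void process() {\n    doA();\n    doB();\n}")
after = fingerprint("LongMethod", "void process() {\n  doA();\n  doB();\n}")
assert before == after  # recognized as the same legacy finding
```

A production-grade tracker additionally has to handle edits inside the finding's code itself, which is where more sophisticated techniques than plain hashing come in.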