Why We Eat Our Own Dogfood
Dr. Lars Heinemann
When giving presentations or demos about Teamscale, we often get asked whether we use Teamscale for developing Teamscale. The answer is a clear »Yes!«. In this blog post, I’d like to shed some light on why and how we use Teamscale at CQSE.
Regardless of whether you call it eating your own dogfood or drinking your own champagne, we are convinced that using your own product is essential for making it great. Consequently, dogfooding is an integral part of the Teamscale development process. Not only do we believe that Teamscale helps us write better code, but we also find it very insightful to get an unfiltered impression of the end-user experience of our tooling on a daily basis.
In particular, we see three benefits. First, Teamscale allows us to monitor and control our code quality, just the way it does for our customers. Second, it completes our testing strategy: by using a recent development snapshot of Teamscale, we are able to detect bugs that were caught by neither the automated test suite nor our code reviews. Finally, it is part of our requirements engineering. By using Teamscale ourselves, we see which features work well, which additional functionality could be useful, and how the user experience could be improved. This form of early feedback allows us to detect problems in time, giving us the opportunity to improve before a new version of Teamscale is shipped to customers.
Technically, we run a recent development snapshot of Teamscale on an internal server, which analyzes the entire Teamscale code base (currently about 800 kLOC). We regularly deploy a new build of our main development branch (approximately every week), which allows us to use the most recent features and see how they feel in a real-world usage scenario. During the stabilization phase, we deploy our weekly release candidates from the integration branch and focus on detecting critical bugs. We also use the Eclipse and WebStorm plug-ins to make the findings available during code editing, right within the IDE.
Besides the daily usage by each developer, we also hold regular code quality retrospectives with the team. For every six-week release cycle, one of our team members takes on the role of quality engineer and prepares a session in which we look at our current code quality status and its recent evolution. As a team, we thereby get an idea of where we are going, and by examining findings that slipped through, we develop a shared understanding of which code quality aspects are important to us. This also gives us the opportunity to identify areas that need improvement. Typical outcomes are maintenance tasks to address individual findings or refinements to our own Teamscale analysis profile.
Besides the introductory question, we are sometimes also asked whether the code of Teamscale is perfect and free from findings. Here, the answer is a clear »No!«. Like any other software project out there, we have to live with the trade-off between shipping features on time and writing perfect code. Still, we think we have struck a good balance between these two goals. To put this into numbers, let's have a look at the Teamscale dashboard for the Teamscale code base: