How can I test how good my tests are?
We want our tests to discover new bugs quickly. But how likely are my tests to actually discover new bugs in my code base? And which code is pseudo-tested, in the sense that it is executed by tests, yet novel, severe bugs in it would most likely go undetected?
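To make the notion of pseudo-tested code concrete, here is a minimal, hypothetical sketch (not taken from the talk): a function whose discount logic is fully executed by a test, yet a mutant that deletes that logic still passes, so the logic is effectively untested.

```python
def apply_discount(price, rate):
    # Original implementation: apply a relative discount.
    return price * (1 - rate)

def apply_discount_mutant(price, rate):
    # Mutant: the discount computation is deleted entirely.
    return price

def weak_test(fn):
    # Executes fn (so the line is covered), but only with rate=0.0,
    # where the discount logic cannot change the result.
    return fn(100.0, 0.0) == 100.0

print(weak_test(apply_discount))         # True
print(weak_test(apply_discount_mutant))  # True: the mutant survives
```

Coverage reports 100% for `apply_discount`, but since the mutant survives, mutation testing reveals that the multiplication is pseudo-tested.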
In this talk, I present different approaches to answering these questions: from code coverage to mutation testing to novel approaches in between, drawn from recent research (partly from our group). I demonstrate all approaches on the same real-world project and point out the strengths and limitations of each.
Finally, I show various analyses (including test gap analysis, test impact analysis, and Pareto testing) that use data on code coverage and pseudo-testedness for test case selection, test suite minimization, and other test optimizations.