Analyzing Changes in SAP
As a consultant, I often talk to customers who have large amounts of custom ABAP code in their SAP systems and spend equally large efforts on testing all of it over and over again, since it is hard to know what exactly to test after changing certain parts. Since costs matter a lot these days, many of these customers are looking for ways to spend their test budget more efficiently. One way to do so is using tools that identify the areas where testing is most likely to find bugs. In this post, I will compare two such tools, namely SAP’s Business Process Change Analyzer (BPCA) and CQSE’s Teamscale, which offers Test Gap Analysis (TGA) and will provide features for Test Impact Analysis (TIA) in the future.
Allocating Test Efforts Efficiently
Whenever large software systems are maintained over a long time, testing is both important and expensive, because legacy systems often lack comprehensive test suites. In fact, one of the definitions of legacy code is »code without tests«. Therefore, changes to business-critical systems are often accompanied by phases of structured, manual testing. Unsurprisingly, companies are looking for ways to reduce test efforts while keeping their tests effective. One way of doing so is to steer test activities towards areas of the system that are likely to contain bugs. Research has shown that change is a good predictor for bugs, i.e. areas that recently changed have a much higher probability of containing bugs.
To help achieve this goal, the central questions are: »Given a set of changes, which tests will provide the most value?« and »After running these tests, did they actually execute all of the code changes in the system?«
SAP Business Process Change Analyzer
As the owner of the ABAP technology stack, SAP offers the Business Process Change Analyzer (BPCA) as part of Solution Manager. This tool integrates into Solution Manager’s test management capabilities by leveraging test cases from the solution documentation and creating test plans out of them.
The 10,000-foot view is roughly as follows:
1. You already have a complete »Solution Documentation«, i.e. you modeled your business processes within Solution Manager.
2. For each business process, you created a »Technical Bill of Materials (TBOM)« by executing the business process and monitoring which parts of the SAP system it touches. In order to do so, SAP provides »Usage Procedure Logging (UPL)« as well as »SCMON«. A TBOM consists of code objects, UIs, and database tables, among others.
3. The set of changes is given as a transport. This may be a custom development, but can also be a support package provided by SAP or any third-party vendor.
4. The data from steps 1–3 is combined, so that you can see which steps of which business processes are affected by the changes in the transport.
5. If you also manage your test cases in your solution documentation (or a test management system by an approved partner) and have linked them to your business processes, you can have the relevant test cases selected and compiled into a test plan.
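The matching idea behind these steps can be sketched in a few lines. The following is a minimal illustration with made-up data structures, not SAP's actual API: each business process carries a TBOM (the set of technical objects it touches), a transport carries a set of changed objects, and the intersection of the two determines the affected processes and, via the solution documentation links, the test plan.

```python
# TBOMs recorded per business process (hypothetical example data)
tboms = {
    "Create Sales Order": {"ZCL_ORDER", "VBAK", "ZORDER_UI"},
    "Post Goods Issue":   {"ZCL_DELIVERY", "LIKP"},
    "Run Billing":        {"ZCL_BILLING", "VBRK"},
}

# Objects changed by the transport under analysis
transport_objects = {"ZCL_ORDER", "VBAK"}

# Test cases linked to business processes in the solution documentation
tests_for_process = {
    "Create Sales Order": ["TC-101", "TC-102"],
    "Post Goods Issue":   ["TC-201"],
}

# A process is affected if its TBOM intersects the transport's object list
affected = [p for p, objs in tboms.items() if objs & transport_objects]

# Compile the resulting test plan from the linked test cases
test_plan = sorted(tc for p in affected for tc in tests_for_process.get(p, []))

print(affected)   # ['Create Sales Order']
print(test_plan)  # ['TC-101', 'TC-102']
```

The real analysis of course operates on recorded TBOM data and transport object lists inside Solution Manager, but the core is this kind of set intersection.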
To my understanding, the functional scope of BPCA ends with selecting relevant test cases, or, in case these are not available to Solution Manager, with determining the affected business processes. It does not provide any transparency on whether executing these tests actually touched the changed objects, or whether parts of them remained untested.
Hence, BPCA relies heavily on your using SAP standard tools or approved vendors for modeling business processes, managing test cases, transports, and so on. It further assumes that customers keep all their business process models and TBOMs current. So in order to get accurate results, you always have to make sure your solution documentation as well as the TBOM data are in sync with the code. From our experience, keeping various development artifacts (e.g. architecture models, documentation) in sync with the code is something only very few teams actually achieve.
If all these requirements are met, BPCA can provide an answer to our first central question based on the combination of manually maintained models and semi-automatically gathered execution information.
Test Gap Analysis
The Test Gap Analysis (TGA) story is quite the opposite. It does not rely on any manually maintained information. The only things you need are Teamscale, your test system, and the Teamscale Connector for SAP NetWeaver. This custom Add-On watches for changes to the custom code and tracks them over time. In addition, it leverages the SAP Coverage Analyzer (SCOV) to track code execution. This tool is available on every SAP AS ABAP system. In order to keep the performance impact minimal, we also support using SCOV in »Lite« mode, just like it is used by UPL. Once these are activated, you have a continuously updated view on which parts of your custom code changed recently, and whether you have missed any of them during testing so far. This answers our second central question.
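The core computation is simple to state. The following sketch uses illustrative data only (it is not the Teamscale implementation): a test gap is a code unit that changed since the baseline but was never executed during testing.

```python
# Methods changed since the last release (as tracked by change monitoring)
changed = {"ZCL_ORDER=>CREATE", "ZCL_ORDER=>CANCEL", "ZCL_BILLING=>POST"}

# Methods executed during the test phase (as recorded by coverage tracking)
executed = {"ZCL_ORDER=>CREATE", "ZCL_BILLING=>POST", "ZCL_DELIVERY=>SHIP"}

# Test gaps: changed but never executed
test_gaps = sorted(changed - executed)

print(test_gaps)  # ['ZCL_ORDER=>CANCEL']
```

Because both inputs are gathered automatically and continuously, the resulting view stays current without any manual maintenance.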
In case your system documentation is as complete as required by BPCA and you already use it to select relevant tests, this mechanism can provide assurance that the tests suggested by BPCA actually covered all relevant changes to the custom code. However, it is worth mentioning that Teamscale only analyzes custom ABAP code and it is therefore not possible to see changes to UIs, database tables, or SAP-provided code.
Test Impact Analysis
In order to also provide an answer to the first question, we developed Test Impact Analysis (TIA). We currently use this technique only internally while making it ready for production use. Building upon the data we gather for Test Gap Analysis, i.e. test execution information on the code level, we observed several things:
- When recording not only whether a piece of code was executed, but also by which test case it was executed, we can automatically link test cases to code regions they execute.
- Since Test Gap Analysis already runs continuously, these links are updated automatically with each test run and never become outdated.
- When Teamscale knows about code changes as well as links from code regions to test cases, it can provide fast, accurate test case selection.
- By also recording the execution time of each test case, we can provide additional information to help prioritize test cases.
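The observations above translate into a straightforward selection scheme. The following is a sketch of the idea with hypothetical data, not the Teamscale API: tests are linked to the code regions they executed in past runs; given a change set, only the impacted tests are selected, and the recorded durations order them so that faster feedback comes first.

```python
# Coverage links from previous runs: test case -> code regions it executed
coverage = {
    "TC-101": {"ZCL_ORDER=>CREATE", "ZCL_ORDER=>CHECK"},
    "TC-102": {"ZCL_ORDER=>CANCEL"},
    "TC-201": {"ZCL_DELIVERY=>SHIP"},
}

# Recorded execution time per test case, in seconds
duration = {"TC-101": 120, "TC-102": 30, "TC-201": 45}

# Code regions touched by the current change set
changed = {"ZCL_ORDER=>CANCEL", "ZCL_ORDER=>CHECK"}

# Select tests whose covered regions intersect the changes,
# then run cheaper tests first to shorten the feedback loop
impacted = [tc for tc, regions in coverage.items() if regions & changed]
impacted.sort(key=lambda tc: duration[tc])

print(impacted)  # ['TC-102', 'TC-101']
```

Since the coverage links are refreshed with every test run, the selection never operates on stale data.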
Notice that this approach does not rely on any manually maintained models or test case descriptions, but gathers all required information from the source code and from actual test executions. Once integrated into your testing framework, all data is automatically kept current.
In our studies so far, we observed that TIA can reduce test execution time by 90% while still finding 90% of newly introduced bugs. While this is not intended to replace the execution of your whole test suite, it can dramatically shorten feedback cycles for developers and hence reduce the cost of finding and fixing bugs.
While the overall goal is the same as for BPCA (hence both use the term Test Impact Analysis), the ways of achieving this to answer our first central question are quite different.
SAP and CQSE take very different approaches to address the same questions. While BPCA (understandably) leverages a variety of existing tools and best practices, Teamscale aims for the highest possible degree of automation and the lowest possible dependency on other tools and on manually maintaining data quality.
This includes data gathering as well as data analysis and presentation: The view on untested changes is automatically updated whenever custom ABAP code is changed or tests are executed. As soon as TIA is available, executing tests will also automatically update the information on which test case executes which custom code. This can be an enabler for Continuous Integration (CI) in ABAP projects, which is a topic that we observe getting a lot of traction in the ABAP community recently.
Since Teamscale can see ABAP code, but currently no DDIC objects or UIs, neither changes to these, nor usage during execution can be tracked using Teamscale. Information on these objects is only available in BPCA.
As with all questions in Software Engineering, the answer to which tool is the best choice is »it depends«. In case you find some of Teamscale’s features appealing and would like to have more information, don’t hesitate to contact me.
- Michael Feathers »Working Effectively with Legacy Code« (ISBN 0-13-117705-2)
- N. Nagappan and T. Ball »Use of relative code churn measures to predict system defect density« (ICSE, 2005)
- T. Graves, A. Karr, J. Marron, and H. Siy »Predicting fault incidence using software change history« (IEEE Trans. Softw. Eng., vol. 26, no. 7, 2000)
- Elmar Juergens, Dennis Pagano, Andreas Goeb »Test Impact Analysis: Detecting Errors Early Despite Large, Long-Running Test Suites«