On Practical Adequate Test Suites for Integrated Test Case Prioritization and Fault Localization

Author(s):  
Bo Jiang ◽  
W.K. Chan ◽  
T.H. Tse
2012 ◽  
Vol 54 (7) ◽  
pp. 739-758 ◽  
Author(s):  
Bo Jiang ◽  
Zhenyu Zhang ◽  
W.K. Chan ◽  
T.H. Tse ◽  
Tsong Yueh Chen

2021 ◽  
Vol 26 (6) ◽  
Author(s):  
Christoph Laaber ◽  
Harald C. Gall ◽  
Philipp Leitner

Abstract
Regression testing comprises techniques which are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner in new versions. This may be especially beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault-detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
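The total and additional greedy strategies mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the benchmark names and coverage sets are hypothetical, and coverage is represented simply as sets of covered code elements per benchmark.

```python
def total_greedy(coverage):
    """Total strategy: order benchmarks by their overall coverage count,
    largest first, ignoring overlap between benchmarks."""
    return sorted(coverage, key=lambda b: len(coverage[b]), reverse=True)

def additional_greedy(coverage):
    """Additional strategy: repeatedly pick the benchmark that covers the
    most elements not yet covered; reset coverage when nothing new is added."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda b: len(remaining[b] - covered))
        if not remaining[best] - covered:
            covered = set()  # all remaining benchmarks are redundant: reset
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical coverage data: benchmark name -> set of covered elements.
cov = {
    "bench_a": {1, 2, 3},
    "bench_b": {3, 4},
    "bench_c": {5},
}
print(total_greedy(cov))       # bench_a first: it covers the most elements
print(additional_greedy(cov))  # bench_a, then whichever adds the most new coverage
```

The two strategies can produce different orders when coverage overlaps heavily: the total strategy may schedule two near-identical benchmarks back to back, while the additional strategy defers the redundant one.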


Author(s):  
Ziyuan Wang ◽  
Chunrong Fang ◽  
Lin Chen ◽  
Zhiyi Zhang

For test case prioritization problems, the average percentage of faults detected (APFD) and its variants are widely used as metrics to evaluate a prioritized test suite’s efficiency of fault detection. By revisiting metrics for test case prioritization, we observe that APFD is only applicable in scenarios where all test suites under evaluation contain the same number of test cases. This limitation is often overlooked and leads to incorrect results when comparing the fault detection efficiency of test suites of different sizes. Moreover, APFD cannot precisely illustrate the process of fault detection in the real world. Besides APFD, most of its variants, including NAPFD and APFD[Formula: see text], have similar problems. This paper points out these limitations in detail by formally analyzing the physical interpretation of the APFD series of metrics. In order to eliminate these limitations, we propose a series of improved metrics, including the relative average percentage of faults detected (RAPFD) and the relative cost-cognizant weighted average percentage of faults detected (RAPFD[Formula: see text]), to evaluate the efficiency of a test suite. Furthermore, for the scenario of parallel testing, a series of metrics including the relative average percentage of faults detected in parallel testing ([Formula: see text]-RAPFD) and the relative cost-cognizant weighted average percentage of faults detected in parallel testing ([Formula: see text]-RAPFD[Formula: see text]) are also proposed. All the proposed metrics account for both the speed of fault detection and the constraints on testing resources. A formal analysis and several examples show that the proposed metrics provide much more precise illustrations of the fault detection process.


2020 ◽  
Vol 17 (7) ◽  
pp. 125-144
Author(s):  
Tomas Pospisil ◽  
Jan Sobotka ◽  
Jiri Novak
