A Novel Approach to Test Data Selection in Regression Testing of Software

2010 ◽  
Vol 1 (14) ◽  
pp. 61-66
Author(s):  
Soubhagya Sankar Barpanda ◽  
Durga Prasad Mohapatra ◽  
Baikuntha Narayan Biswal
2021 ◽  
Author(s):  
Octavian Dumitru ◽  
Gottfried Schwarz ◽  
Mihai Datcu ◽  
Dongyang Ao ◽  
Zhongling Huang ◽  
...  

During the last years, much progress has been made with machine learning algorithms. Typical application fields include many technical and commercial applications as well as Earth science analyses, where indirect and distorted detector data most often have to be converted into well-calibrated scientific data, a prerequisite for a correct understanding of the desired physical quantities and their relationships.

However, the provision of sufficient calibrated data is not enough for the testing, training, and routine processing of most machine learning applications. In principle, one also needs a clear strategy for selecting necessary and useful training data and an easily understandable quality control of the finally desired parameters.

At first glance, one could guess that this problem can be solved by a careful selection of representative test data covering many typical cases as well as some counterexamples; these test data can then be used to train the internal parameters of a machine learning application. On closer inspection, however, many researchers have found that simply stacking up plain examples is not the best choice for many scientific applications.

To obtain improved machine learning results, we concentrated on the analysis of satellite images depicting the Earth's surface under various conditions such as the selected instrument type, spectral bands, and spatial resolution. In our case, such data are routinely provided by the freely accessible European Sentinel satellite products (e.g., Sentinel-1 and Sentinel-2). Our basic work then included investigations of how some additional processing steps, linked with the selected training data, can provide better machine learning results.

To this end, we analysed and compared three different approaches for the joint selection and processing of training data for our Earth observation images:

- One can optimize the training data selection by adapting it to the specific instrument, target, and application characteristics [1].
- As an alternative, one can dynamically generate new training samples with Generative Adversarial Networks, comparable to the role of a sparring partner in boxing [2].
- One can also use a hybrid semi-supervised approach for Synthetic Aperture Radar images with limited labelled data. The method comprises polarimetric scattering classification, topic modelling for scattering labels, unsupervised constraint learning, and supervised label prediction with constraints [3].

We applied these strategies in the ExtremeEarth sea-ice monitoring project (http://earthanalytics.eu/). As a result, we can demonstrate for which application cases these three strategies provide a promising alternative to a simple conventional selection of available training data.

[1] C.O. Dumitru et al., "Understanding Satellite Images: A Data Mining Module for Sentinel Images", Big Earth Data, 2020, 4(4), pp. 367-408.
[2] D. Ao et al., "Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X", Remote Sensing, 2018, 10(10), pp. 1-23.
[3] Z. Huang et al., "HDEC-TFA: An Unsupervised Learning Approach for Discovering Physical Scattering Properties of Single-Polarized SAR Images", IEEE Transactions on Geoscience and Remote Sensing, 2020, pp. 1-18.
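The first of the three strategies, adapting the training data selection to instrument and target characteristics, can be illustrated with a minimal stratified-sampling sketch in Python. The patch catalogue, its `instrument` and `label` fields, and the class names are hypothetical stand-ins for illustration, not the actual ExtremeEarth tooling:

```python
import random
from collections import defaultdict

def select_training_patches(patches, per_class, seed=0):
    """Stratified selection: draw up to `per_class` candidate patches
    from every (instrument, label) stratum, so that rare surface
    types are not drowned out by abundant ones."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in patches:
        strata[(p["instrument"], p["label"])].append(p)
    selected = []
    for _, group in sorted(strata.items()):
        rng.shuffle(group)                 # avoid acquisition-order bias
        selected.extend(group[:per_class])
    return selected

# Toy catalogue: Sentinel-1 patches with hypothetical sea-ice labels;
# "ice" is deliberately over-represented relative to "water".
catalogue = (
    [{"instrument": "S1", "label": "ice", "id": i} for i in range(50)]
    + [{"instrument": "S1", "label": "water", "id": i} for i in range(5)]
)
picked = select_training_patches(catalogue, per_class=5)
```

The resulting set is balanced across strata (5 "ice" and 5 "water" patches) even though the catalogue itself is heavily skewed.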


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Software developers therefore place assertions within their code at positions that are considered error prone or that have the potential to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, its assertions may also have to be modified. New assertions may be introduced in the new version of the program, while some assertions can be kept unchanged. This paper presents a novel approach that uses fuzzy logic for test case prioritization during regression testing of programs with assertions. The main objective of this approach is to prioritize the test cases according to their estimated potential to violate a given program assertion. To develop the proposed approach, we utilize fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion, based on that test case's history in previous testing operations. We have conducted a case study in which the proposed approach is applied to various programs, and the results are promising compared to untreated and randomly ordered test cases.
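The fuzzy estimation step described above can be sketched as follows. The membership functions, the two-rule base, and the history format are illustrative assumptions for a Mamdani-style estimator, not the paper's actual implementation:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def effectiveness(violation_rate, recency):
    """Fuzzy estimate of a test case's potential to violate an assertion.
    Inputs in [0, 1]: historical assertion-violation rate, and how
    recently the test last violated an assertion (1 = most recent)."""
    high_rate = tri(violation_rate, 0.4, 1.0, 1.6)   # peaks at 1.0
    low_rate = tri(violation_rate, -0.6, 0.0, 0.6)   # peaks at 0.0
    recent = tri(recency, 0.4, 1.0, 1.6)
    # Rule 1: rate is high AND violation was recent -> high priority (1.0)
    # Rule 2: rate is low                           -> low priority  (0.1)
    fire_hi = min(high_rate, recent)
    fire_lo = low_rate
    if fire_hi + fire_lo == 0:
        return 0.5  # no rule fires: no evidence either way
    # Weighted-average defuzzification over the two rule consequents.
    return (fire_hi * 1.0 + fire_lo * 0.1) / (fire_hi + fire_lo)

def prioritize(history):
    """history: {test_name: (violation_rate, recency)} -> ranked names."""
    return sorted(history, key=lambda t: effectiveness(*history[t]),
                  reverse=True)

runs = {"t1": (0.9, 0.8), "t2": (0.1, 0.2), "t3": (0.5, 0.9)}
order = prioritize(runs)  # frequent recent violators run first
```

A test with a strong, recent violation history (t1) is scheduled before a borderline one (t3), which in turn precedes a test that has rarely violated any assertion (t2).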


2018 ◽  
Vol 2018 ◽  
pp. 1-14
Author(s):  
A. Y. Elruby ◽  
Sam Nakhla ◽  
A. Hussein

The eXtended Finite Element Method (XFEM) is a versatile method for solving crack propagation problems. However, XFEM predictions for crack onset and propagation rely on the stress field, which tends to converge at a slower rate than the displacements, making it challenging to capture the critical load at crack onset accurately. Furthermore, identifying the critical region(s) for XFEM nodal enrichment is user-dependent. The identification process can be straightforward for small-scale test specimens, while in other cases, such as complex structures, it can be unmanageable. In this work, a novel approach is proposed with three major objectives: (1) alleviate user dependency; (2) enhance prediction accuracy; (3) minimize computational effort. An automatic identification of the critical region(s), based on a failure criterion selected for the material, is developed. Moreover, the approach enables the selection of an optimized mesh necessary for accurate prediction of failure loads at crack initiation. The determination of the optimal enrichment zone size is also automated. The proposed approach was developed as an iterative algorithm and implemented in ABAQUS using Python scripting. The proposed algorithm was validated against our test data of unnotched specimens and relevant test data from the literature. The predicted loads/displacements at failure are in excellent agreement with measurements, and crack onset locations agree very well with observations from testing. Finally, the proposed algorithm shows a significant enhancement in overall computational efficiency compared to conventional XFEM. It can be easily implemented into user-built or commercial finite element codes.
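The iterative identification loop can be sketched in ABAQUS-independent Python. Here `solve` is a hypothetical stand-in for one finite element analysis that returns per-element stresses, and the mock numbers are invented for illustration, not the authors' implementation:

```python
def critical_elements(stresses, failure_stress, threshold=0.9):
    """Flag elements whose stress exceeds a fraction of the material
    failure criterion; these form the candidate enrichment region."""
    return {e for e, s in stresses.items() if s >= threshold * failure_stress}

def identify_enrichment_zone(solve, failure_stress, max_iters=10):
    """Iteratively refine the enrichment region: solve, flag critical
    elements, enlarge the enriched set, and repeat until the flagged
    region stabilises (no newly critical elements appear)."""
    enriched = set()
    for _ in range(max_iters):
        stresses = solve(enriched)
        flagged = critical_elements(stresses, failure_stress)
        if flagged <= enriched:
            return enriched  # converged
        enriched |= flagged
    return enriched

# Mock solver: element 3 is a stress riser; enriching it redistributes
# load so that element 4 also becomes critical on the next iteration.
def mock_solve(enriched):
    base = {1: 10.0, 2: 12.0, 3: 95.0, 4: 80.0, 5: 20.0}
    if 3 in enriched:
        base[4] = 92.0
    return base

zone = identify_enrichment_zone(mock_solve, failure_stress=100.0)
```

The loop converges once re-solving with the enlarged enrichment zone flags no new elements, which is what removes the user from the region-selection step.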


1987 ◽  
Vol 7 (2) ◽  
pp. 89-97 ◽  
Author(s):  
F.R.D. Velasco
Author(s):  
Anuranjan Misra ◽  
Raghav Mehra ◽  
Mayank Singh ◽  
Jugnesh Kumar ◽  
Shailendra Mishra

2018 ◽  
Vol 10 (2) ◽  
pp. 55-71
Author(s):  
Munish Khanna ◽  
Naresh Chauhan ◽  
Dilip Sharma ◽  
Abhishek Toofani
