Reducing the Discard of MBT Test Cases

2020 ◽  
Vol 8 ◽  
pp. 4:1 - 4:15
Author(s):  
Thomaz Diniz ◽  
Everton L G Alves ◽  
Anderson G F Silva ◽  
Wilkerson L Andrade

Model-Based Testing (MBT) is used for generating test suites from system models. However, as software evolves, its models tend to be updated, which may lead to obsolete test cases that are often discarded. Discarding test cases can be very costly, since essential data, such as execution history, are lost. In this paper, we investigate the use of distance functions and machine learning to help reduce the discard of MBT tests. First, we assess the problem of managing MBT suites in the context of agile industrial projects. Then, we propose two strategies to cope with this problem: (i) a strategy based purely on distance functions. An empirical study using industrial data and ten different distance functions showed that distance functions can be effective for identifying low-impact edits that lead to test cases that can be updated with little effort. We also found the optimal configuration for each function. Moreover, we showed that, by using this strategy, one could reduce the discard of test cases by 9.53%; (ii) a strategy that combines machine learning with distance values. This strategy can classify the impact of edits in use case documents with accuracy above 80%; it was able to reduce the discard of test cases by 10.4% and to identify test cases that should, in fact, be discarded.
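The first strategy above scores each use-case edit with a distance function and treats sufficiently similar step pairs as low-impact edits. A minimal sketch of that idea, using Python's `difflib` similarity ratio as a stand-in distance function and a hypothetical threshold (the paper evaluates ten functions and tunes each one; this is illustrative only):

```python
from difflib import SequenceMatcher

def edit_impact(old_step: str, new_step: str, threshold: float = 0.8) -> str:
    """Classify an edit to a use-case step as LOW or HIGH impact.

    A normalized similarity ratio at or above the (hypothetical) threshold
    marks a low-impact edit whose test cases can be updated rather than
    discarded; anything below it is treated as high impact.
    """
    similarity = SequenceMatcher(None, old_step, new_step).ratio()
    return "LOW" if similarity >= threshold else "HIGH"

# Minor wording change vs. a fully rewritten step
print(edit_impact("User clicks the login button",
                  "User clicks the sign-in button"))   # LOW
print(edit_impact("User clicks the login button",
                  "System rejects expired sessions"))  # HIGH
```

In practice the threshold would be calibrated per distance function, as the study does when searching for each function's optimal configuration.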

Author(s):  
ALIREZA SADEGHI ◽  
SEYED-HASSAN MIRIAN-HOSSEINABADI

Test Driven Development (TDD), as a quality promotion approach, suffers from several shortcomings that discourage its usage. One of the most challenging is its low level of granularity and abstraction, which may lead to software that is not acceptable to end users. Additionally, TDD is not readily applicable to enterprise systems development. To overcome these defects, we have merged TDD with Model-Based Testing (MBT) and propose a framework named Model-Based Test Driven Development (MBTDD). In TDD, writing test cases comes before programming; in our improved method, modeling precedes writing test cases. To validate the applicability of the proposed framework, we have implemented a use case of a Human Resource Management (HRM) system by means of MBTDD. The empirical results of using MBTDD show that our proposed method overcomes the existing deficiencies of TDD.


2021 ◽  
Author(s):  
Saptarshi Bej ◽  
Anne-Marie Galow ◽  
Robert David ◽  
Markus Wolfien ◽  
Olaf Wolkenhauer

The research landscape of single-cell and single-nuclei RNA sequencing is evolving rapidly, and one area enabled by this technology is the detection of rare cells. An automated, unbiased, and accurate annotation of rare subpopulations is challenging. Once rare cells are identified in one dataset, it is usually necessary to generate other datasets to enrich the analysis (e.g., with samples from other tissues). From a machine learning perspective, the challenge arises from the fact that rare cell subpopulations constitute an imbalanced classification problem.

We here introduce a Machine Learning (ML)-based oversampling method that uses gene expression counts of already identified rare cells as an input to generate synthetic cells and then identify similar (rare) cells in other publicly available experiments. We utilize single-cell synthetic oversampling (sc-SynO), which is based on the Localized Random Affine Shadowsampling (LoRAS) algorithm. The algorithm corrects for the overall imbalance ratio between the minority and majority classes.

We demonstrate the effectiveness of the method for two independent use cases, each consisting of two published datasets. The first use case identifies cardiac glial cells in snRNA-Seq data (17 nuclei out of 8,635). This use case was designed to take a larger imbalance ratio (∼1 to 500) into account and only uses single-nuclei data. The second use case was designed to jointly use snRNA-Seq and scRNA-Seq data at a lower imbalance ratio (∼1 to 26) for the training step, to likewise investigate the potential of the algorithm to handle both single-cell capture procedures and the impact of “less” rare cell types. For validation purposes, all datasets were also analyzed in a traditional manner using common data analysis approaches, such as the Seurat3 workflow.

Our algorithm identifies rare-cell populations with high accuracy and a low false-positive detection rate. A striking benefit of our algorithm is that it can be readily integrated into other existing workflows. The code basis is publicly available at FairdomHub (https://fairdomhub.org/assays/1368) and can easily be transferred to train other customized approaches.
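The core idea behind affine-combination oversampling can be sketched in a few lines: synthetic minority samples are drawn as random convex combinations of a handful of real rare-cell profiles, so each synthetic cell lies inside the convex hull of the real ones. This is an illustrative simplification, not the published LoRAS/sc-SynO algorithm (which works on localized neighborhoods of shadow samples); all names and data here are invented:

```python
import random

def affine_oversample(minority, n_synthetic, k=3, seed=42):
    """Generate synthetic minority-class samples as random convex
    combinations of k real samples (weights are non-negative and sum
    to 1), loosely mimicking the affine shadowsampling idea."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        chosen = rng.sample(minority, k)          # k distinct real samples
        weights = [rng.random() for _ in range(k)]
        total = sum(weights)
        weights = [w / total for w in weights]    # normalize: convex weights
        sample = [sum(w * row[i] for w, row in zip(weights, chosen))
                  for i in range(len(minority[0]))]
        synthetic.append(sample)
    return synthetic

# Three hypothetical "rare cells" with two gene-expression features each
rare = [[1.0, 5.0], [2.0, 6.0], [3.0, 7.0]]
new_cells = affine_oversample(rare, n_synthetic=5)
print(len(new_cells))  # 5 synthetic cells inside the convex hull of `rare`
```

Because every synthetic cell is a convex combination, each feature value stays within the range observed in the real rare cells, which keeps the oversampled class biologically plausible in this toy setting.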


2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Defri Kurniawan ◽  
Danang Wahyu Utomo ◽  
Novita Kurnia Ningrum

Test case generation is the stage that demands the greatest resources and influences the effectiveness and efficiency of software testing, and it has become one of the most interesting research topics. Model-based testing is proposed here to generate test cases for the Clergy Request Service System (Sistem Layanan Permohonan Rohaniwan) of the Ministry of Religious Affairs of Central Java Province. The proposed model for test case generation starts from requirements gathering, then analyzes use cases and classes, identifies states, performs behaviour modelling using a state machine diagram, and produces the list of test cases. The study shows that the state-based model supports test case generation and can detect system response behavior that does not match the input or action given by the user.
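The state-machine-based generation step described above can be sketched as a path enumeration over transitions: each event sequence from the initial state to a final state becomes one test case. The state machine below is invented for illustration and is not the system's actual model:

```python
from collections import deque

# Hypothetical state machine for a request-handling workflow:
# each transition is (source state, event, target state).
transitions = [
    ("Submitted", "verify", "Verified"),
    ("Submitted", "reject", "Rejected"),
    ("Verified", "approve", "Approved"),
    ("Verified", "reject", "Rejected"),
]

def generate_test_cases(transitions, start, finals):
    """Enumerate event sequences (test cases) from the start state
    to any final state via breadth-first traversal."""
    cases = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if state in finals:
            cases.append(path)
            continue
        for src, event, dst in transitions:
            if src == state:
                queue.append((dst, path + [event]))
    return cases

for case in generate_test_cases(transitions, "Submitted", {"Approved", "Rejected"}):
    print(" -> ".join(case))
```

For a model with cycles, the traversal would additionally need a coverage criterion (e.g., all-transitions) to keep the path set finite.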


Mekatronika ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 1-9
Author(s):  
Wang Yan ◽  
HouJun Lu ◽  
Chun Sern Choong

It is difficult to spot failures in port machinery and equipment, and maintaining such systems is even more complex. Applying maintenance fixes in a reasonable time is a tough challenge, since each change might require an endless number of test case runs, so it is critical to have a risk assessment of the impact of such maintenance fixes. In the software engineering community, there has been a considerable amount of study on failure prediction. Regrettably, there is little evidence of its application in the day-to-day software development of port machinery and equipment. In this paper, we propose an unsupervised machine learning (k-means clustering) method for categorising cranes for maintenance, and use a machine learning pipeline to solve the classification of crane failure data. The cranes' maintenance decision data demonstrates the method's effectiveness. The Linear Support Vector Machine gave a superior classification accuracy for crane maintenance prediction, with 100 percent accuracy on the training set and 94.5 percent on the test set.
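The unsupervised categorization step can be illustrated with a plain k-means implementation: cranes described by numeric features are grouped into clusters, which can then serve as maintenance categories. The crane features below are invented, and this is a generic k-means sketch rather than the paper's pipeline:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then recompute each centroid as its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k distinct points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[c])))
            clusters[idx].append(p)
        centroids = [
            [sum(col) / len(cluster) for col in zip(*cluster)]
            if cluster else centroids[i]  # keep old centroid if cluster empties
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical crane features: (operating hours, fault count)
cranes = [[100, 1], [120, 2], [110, 1], [900, 14], [950, 16], [880, 15]]
centroids, clusters = kmeans(cranes, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: two maintenance groups
```

The two well-separated groups here would correspond to, say, routine-maintenance cranes versus cranes needing priority inspection; a supervised classifier (such as the linear SVM mentioned above) can then be trained on the labeled failure data.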


Author(s):  
Xiaobing Sun ◽  
Xin Peng ◽  
Hareton Leung ◽  
Bin Li

Regression testing is essential to ensure software quality during software evolution. Two widely used regression testing techniques, test case selection and prioritization, are used to maximize the value of the continuously growing test suite. However, few works consider these two techniques together, which decreases the usefulness of the independently studied techniques in practice. In the presence of changes during program evolution, regression testing is usually conducted by selecting the test cases that cover the impact results of the changes, seldom considering false positives in the covered information; this decreases the effectiveness of such regression testing techniques. In this paper, we propose an approach, ComboRT, which combines test case selection and prioritization to directly generate a ranked list of test cases. It is based on the impact results predicted by the change impact analysis (CIA) technique FCA-CIA, which generates a ranked list of impacted methods. Test cases that cover these impacted methods are included in the new test suite. As each method predicted by FCA-CIA is assigned an impact factor value corresponding to the probability of the method being impacted, test cases are then ordered according to the impact factor values of the impacted methods. Empirical studies on four Java-based software systems demonstrate that ComboRT can be effectively used for regression testing in object-oriented Java-based software systems during their evolution.
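The combined selection-and-ordering step can be sketched as follows: keep only test cases that cover at least one impacted method, then rank them by the highest impact factor among the methods they cover. This is a simplified reading of the idea, not the published ComboRT tool, and all method and test names are hypothetical:

```python
def combo_rank(coverage, impact_factors):
    """Select test cases covering at least one impacted method, then
    order them by the maximum impact factor among their covered methods
    (highest first)."""
    selected = {
        test: max(impact_factors[m] for m in methods if m in impact_factors)
        for test, methods in coverage.items()
        if any(m in impact_factors for m in methods)
    }
    return sorted(selected, key=selected.get, reverse=True)

# Hypothetical impact factors from change impact analysis
impact_factors = {"save": 0.9, "load": 0.4}
coverage = {
    "test_save": ["save", "render"],
    "test_load": ["load"],
    "test_render": ["render"],   # covers no impacted method: excluded
}
print(combo_rank(coverage, impact_factors))  # ['test_save', 'test_load']
```

Selection and prioritization fall out of a single pass over the coverage map, which is the practical appeal of combining the two techniques.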


Author(s):  
Sebastiano Panichella ◽  
Annibale Panichella ◽  
Moritz Beller ◽  
Andy Zaidman ◽  
Harald C Gall

Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers detect more bugs, even though they reach higher structural coverage compared to manual testing. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestScribe, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement current techniques for automated unit test generation and search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which is considered particularly useful by developers.


Author(s):  
Elinda Kajo Mece ◽  
Kleona Binjaku ◽  
Hakik Paci

Regression testing is a very important but also very costly and time-consuming activity that assures developers that changes to the application will not introduce new errors. Retest-all, test case selection, and test case prioritization (TCP) approaches are used to enhance the efficiency and effectiveness of regression testing. While test case selection techniques decrease testing time and cost, they can exclude critical test cases that would detect faults. Test case prioritization, on the other hand, considers all test cases and executes them until resources are exhausted or all test cases have run, while always focusing on the most important ones first. Over the years, machine learning has found wide usage in solving different problems in software engineering. Software development and maintenance problems can be framed as learning problems, and machine learning techniques have proven very effective in solving them, including the test case prioritization problem. In this paper, we investigate the application of machine learning techniques to test case prioritization. We survey some of the most recent studies in this field and report the machine learning techniques used in the TCP process, the metrics used to measure the effectiveness of the proposed methods, the data used to define the priority of test cases, and some advantages and limitations of applying machine learning to TCP.


2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Ana Rosario Espada ◽  
María del Mar Gallardo ◽  
Alberto Salmerón ◽  
Pedro Merino

This paper presents the foundations and the real use of a tool to automatically detect anomalies in Internet traffic produced by mobile applications. In particular, our MVE tool is focused on analyzing the impact that user interactions have on the traffic produced and received by the smartphones. To make the analysis exhaustive with regard to the potential user behaviors, we follow a model-based approach to automatically generate test cases to be executed on the smartphones. In addition, we make use of a specification language to define traffic patterns to be compared with the actual traffic in the device. MVE also includes monitoring and verification support to detect executions that do not fit the patterns. In these cases, the developer will obtain detailed information on the user actions that produce the anomaly in order to improve the application. To validate the approach, the paper presents an experimental study with the well-known Spotify app for Android, in which we detected some interesting behaviors. For instance, some HTTP connections do not end successfully due to timeout errors from the remote Spotify service.
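The pattern-checking step described above can be reduced to a simple invariant over the captured traffic: for example, every request must receive a response within a timeout. The sketch below is an illustrative stand-in for the tool's pattern specification language, with an invented log format of (connection id, request time, response time):

```python
def find_timeouts(connections, timeout=5.0):
    """Return ids of connections that violate a 'response within timeout'
    traffic pattern: no response at all, or a response that arrived late."""
    anomalies = []
    for cid, sent, received in connections:
        if received is None or received - sent > timeout:
            anomalies.append(cid)
    return anomalies

# Hypothetical captured traffic: c2 responds late, c3 never responds
log = [("c1", 0.0, 0.3), ("c2", 1.0, 9.0), ("c3", 2.0, None)]
print(find_timeouts(log))  # ['c2', 'c3']
```

In the tool itself, each flagged connection would be traced back to the generated user-interaction sequence that produced it, so the developer sees which actions triggered the anomaly.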


Author(s):  
Chetna Gupta ◽  
Varun Gupta

This paper presents an approach that prioritizes program segments within the impact set computed from a function call graph, to assist test case prioritization in regression testing. The technique first categorizes the type of impact propagation and then ranks the impacted segments into higher and lower levels based on that categorization. This helps save maintenance cost and effort by allocating higher priority to the segments that are most heavily impacted within the impact set. A software engineer can thus first run the test cases that cover segments with higher impact priority, minimizing the regression test selection effort.
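One simple way to derive such priority levels is to walk the call graph backwards from a changed function: direct callers are impacted most (level 1), their callers less so (level 2), and so on. This is an illustrative sketch of impact-level assignment under that assumption, not the paper's exact categorization scheme, and the call graph is invented:

```python
from collections import deque

def prioritize_impact(call_graph, changed):
    """Assign each function impacted by `changed` a priority level:
    smaller distance from the change in the reversed call graph means
    higher priority (level 0 is the changed function itself)."""
    # Invert the call graph: map each callee to its callers.
    callers = {}
    for caller, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)
    # Breadth-first walk upward from the changed function.
    levels = {changed: 0}
    queue = deque([changed])
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, []):
            if caller not in levels:
                levels[caller] = levels[fn] + 1
                queue.append(caller)
    return levels

# Hypothetical call graph: main calls parse and report; parse calls tokenize
graph = {"main": ["parse", "report"], "parse": ["tokenize"]}
print(prioritize_impact(graph, "tokenize"))
# tokenize is the change; parse is impacted directly, main transitively
```

Test cases covering level-1 segments would then be scheduled before those covering only higher-numbered (more distant) levels.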

