Can Defect Prediction Be Useful for Coarse-Level Tasks of Software Testing?

2020 · Vol 10 (15) · pp. 5372
Author(s): Can Cui, Bin Liu, Peng Xiao, Shihai Wang

Software defect prediction (SDP) techniques have been widely used to predict bugs in software over the past 20 years. Before software testing (ST) is conducted, SDP results can assist in allocating testing resources. However, SDP usually works on fine-level tasks (white-box testing) rather than coarse-level tasks (black-box testing), and before ST, or in the absence of historical execution information, it is difficult to allocate resources properly. Therefore, an SDP-based approach named DPAHM is proposed to assist in arranging resources for coarse-level tasks. The method combines the analytic hierarchy process (AHP) with a variant incidence matrix. We apply the proposed DPAHM to a proprietary software system named MC, for which we construct a top-down structure with three layers. The performance measure of each layer is calculated from the SDP result, yielding a resource allocation strategy for coarse-level tasks based on the prediction. The experiment indicates that the proposed method is effective for allocating resources to coarse-level tasks before executing ST.
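The AHP step described above derives priority weights from pairwise comparisons. A minimal sketch of that computation, using the common normalized-column-average approximation (the comparison values and task names are illustrative, not taken from the paper):

```python
# Minimal AHP priority-weight sketch (illustrative data, not the paper's).
# matrix[i][j] says how much more important item i is than item j.

def ahp_weights(matrix):
    """Approximate the AHP priority vector by averaging normalized columns."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical pairwise comparisons among three coarse-level test tasks.
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # priorities sum to 1
```

Testing resources would then be allocated to each coarse-level task in proportion to its weight.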

2020 · Vol 15 (1) · pp. 35-42
Author(s): A.O. Balogun, A.O. Bajeh, H.A. Mojeed, A.G. Akintola

Failures of software systems remain rampant, as modern software systems are large and complex. Software testing, an integral part of the software development life cycle (SDLC), consumes both human and capital resources. As such, software defect prediction (SDP) mechanisms are deployed to strengthen the testing phase of the SDLC by predicting defect-prone modules or components in software systems. Machine learning models are used to develop SDP models, with great success. Moreover, some studies have highlighted that a combination of machine learning models, in the form of an ensemble, outperforms single SDP models in prediction accuracy. However, the relative performance of machine learning models can change with different evaluation metrics, so more studies are needed to establish the effectiveness of ensemble SDP models over single SDP models. This study proposes the deployment of Multi-Criteria Decision Method (MCDM) techniques to rank machine learning models. Two MCDM techniques, the Analytic Network Process (ANP) and the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE), are applied to 9 machine learning models with 11 performance evaluation metrics and 11 software defect datasets. The experimental results showed that ensemble SDP models are the most appropriate SDP models, as Boosted SMO and Boosted PART ranked highest under each of the MCDM techniques. The experimental results also support the position of not treating accuracy as the only performance evaluation metric for SDP models: performance metrics beyond predictive accuracy should be considered when ranking and evaluating machine learning models. Keywords: Ensemble; Multi-Criteria Decision Method; Software Defect Prediction
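PROMETHEE ranks alternatives by pairwise preference flows. A minimal PROMETHEE II sketch with the "usual" preference function (1 if strictly better, 0 otherwise); the model names, scores, and criterion weights below are placeholders, not the paper's data:

```python
# Minimal PROMETHEE II sketch (toy data; not the paper's 9 models / 11 metrics).

def promethee_net_flows(scores, weights):
    """scores[a][c]: performance of alternative a on criterion c (higher is better).
    Returns the net outranking flow of each alternative."""
    n = len(scores)
    flows = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # Weighted preference of a over b, minus b over a ("usual" criterion).
            pref_ab = sum(w for s_a, s_b, w in zip(scores[a], scores[b], weights) if s_a > s_b)
            pref_ba = sum(w for s_a, s_b, w in zip(scores[a], scores[b], weights) if s_b > s_a)
            flows[a] += (pref_ab - pref_ba) / (n - 1)
    return flows

models = ["BoostedSMO", "BoostedPART", "SingleTree"]   # hypothetical labels
scores = [[0.91, 0.88, 0.90],    # e.g. accuracy, AUC, F-measure per model
          [0.90, 0.89, 0.88],
          [0.84, 0.80, 0.82]]
weights = [0.4, 0.3, 0.3]        # criterion weights summing to 1

ranked = sorted(zip(models, promethee_net_flows(scores, weights)),
                key=lambda mv: mv[1], reverse=True)
print(ranked)  # alternatives ordered by net flow
```

Net flows always sum to zero, so the ranking expresses only relative preference among the compared models.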


2017 · Vol 1 (2) · pp. 102
Author(s): Rabiya Abbas, Zainab Sultan, Shahid Nazir Bhatti

Software testing is the process of verifying and validating the user's requirements. Testing is an ongoing process throughout software development. Software testing is characterized into three main types. In black-box testing, the tester has no knowledge of the internal logic and design of the system. In white-box testing, the tester knows the internal logic of the code. In grey-box testing, the tester has partial knowledge of the internal structure and working of the system; it is commonly used in integration testing. Load testing helps us analyze the performance of a system under heavy load or under zero load, and is carried out with the help of a load testing tool. The intention of this research is to compare four load testing tools, i.e., Apache JMeter, LoadRunner, Microsoft Visual Studio (TFS), and Siege, based on certain criteria: test script generation, result reports, application support, plug-in support, and cost. The main focus is to study these load testing tools and identify which tool is better and more efficient. We expect this comparison to help in selecting the most appropriate tool and to motivate the use of open source load testing tools.


Author(s): Nikolai Kosmatov

In this chapter, the authors discuss some innovative applications of artificial intelligence techniques to software engineering, in particular, automatic test generation. Automatic testing tools translate the program under test (or its model) and the test criterion (or the test objective) into constraints. Constraint solving then finds a solution to the resulting constraint problem and yields test data. The authors focus on two particular applications: model-based testing, as an example of black-box testing, and all-paths test generation for C programs, as a white-box testing strategy. Each application is illustrated by a running example showing how constraint-based methods can automatically generate test data for each strategy. The authors also give an overview of the main difficulties of constraint-based software testing and outline some directions for future research.
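The translation of path conditions into constraints can be illustrated on a toy scale (this example and its naive solver are assumptions for illustration, not the chapter's tooling, which targets C programs with real constraint solvers):

```python
# Toy illustration of constraint-based test generation: each program path
# is described by its branch constraints, and a "solver" searches for an
# input satisfying them. A real tool would use a dedicated constraint solver.

def solve(constraint, domain=range(-50, 51)):
    """A naive constraint 'solver': scan a small integer domain."""
    for x in domain:
        if constraint(x):
            return x
    return None  # constraint unsatisfiable over the domain (infeasible path)

# Program under test:  def classify(x): return "big" if x > 10 else "small"
# The all-paths criterion yields one constraint per execution path.
path_constraints = {
    "then-branch": lambda x: x > 10,
    "else-branch": lambda x: not (x > 10),
}

tests = {path: solve(c) for path, c in path_constraints.items()}
print(tests)  # one concrete test input per feasible path
```

An infeasible path (e.g. `x > 10 and x < 0`) simply produces no solution, which is one of the difficulties the chapter discusses.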


Author(s): Khadijah Khadijah, Priyo Sidik Sasongko

Software testing is a crucial process in the software development life cycle and affects software quality. However, testing is a tedious and resource-consuming task. It can be conducted more efficiently by focusing on software modules that are prone to defects; therefore, automated software defect prediction is needed. This research implemented the Extreme Learning Machine (ELM) as the classification algorithm because of its simple training process and good generalization performance. Aside from the classification algorithm, the most important problem to be addressed is the data imbalance between samples of the positive class (defect-prone) and the negative class, which can bias the performance of the classifier. Therefore, this research compared two approaches to handling the imbalance problem: SMOTE (a resampling method) and weighted-ELM (an algorithm-level method). The results of experiments using 10-fold cross validation on the NASA MDP dataset show that including imbalance handling when building a software defect prediction model increases the specificity and g-mean of the model. When the imbalance ratio is not very small, SMOTE is better than weighted-ELM; otherwise, weighted-ELM is better than SMOTE in terms of sensitivity and g-mean, but worse in terms of specificity and accuracy.
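SMOTE oversamples the minority class by interpolating between a minority sample and one of its nearest minority neighbours. A minimal pure-Python sketch of that idea (the metric values below are invented placeholders, and a production implementation would use an established library rather than this simplification):

```python
import random

# Minimal SMOTE sketch: synthesize new minority (defect-prone) samples by
# interpolating between a minority point and one of its k nearest minority
# neighbours. Simplified for illustration; not a full SMOTE implementation.

def smote(minority, n_synthetic, k=3, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        # k nearest minority neighbours by squared Euclidean distance.
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        neighbour = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, neighbour)))
    return synthetic

# Toy defect-prone module metrics (e.g. LOC, cyclomatic complexity).
minority = [(120.0, 14.0), (90.0, 9.0), (150.0, 17.0), (110.0, 12.0)]
new_samples = smote(minority, n_synthetic=4)
print(new_samples)
```

Because each synthetic point is a convex combination of two existing minority points, it always lies between them in feature space; weighted-ELM, by contrast, leaves the data untouched and instead penalizes misclassified minority samples more heavily during training.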


2011 · Vol 34 (6) · pp. 1148-1154
Author(s): Hui-Yan JIANG, Mao ZONG, Xiang-Ying LIU

2019 · Vol 28 (5) · pp. 925-932
Author(s): Hua WEI, Chun SHAN, Changzhen HU, Yu ZHANG, Xiao YU
