Multi-objective test case prioritization in highly configurable systems: A case study

2016 ◽ Vol 122 ◽ pp. 287-310 ◽ Author(s): José A. Parejo, Ana B. Sánchez, Sergio Segura, Antonio Ruiz-Cortés, Roberto E. Lopez-Herrejon, ...

2014 ◽ Vol 2014 ◽ pp. 1-9 ◽ Author(s): Ali M. Alakeel

Program assertions are a recognized supporting tool during software development, testing, and maintenance. Software developers therefore place assertions within their code at positions considered error prone or likely to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, its assertions may also have to be modified; new assertions may be introduced in the new version of the program, while others can be kept unchanged. This paper presents a novel fuzzy-logic-based approach to test case prioritization during regression testing of programs with assertions. The main objective of this approach is to prioritize test cases according to their estimated potential to violate a given program assertion. The approach uses fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion, based on the history of the test cases in previous testing operations. We conducted a case study applying the proposed approach to various programs, and the results are promising compared with untreated and randomly ordered test cases.
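The history-based fuzzy estimation described in the abstract can be illustrated roughly as follows. This is only a minimal sketch: the triangular membership functions, the rule base, and the two input features (historical violation rate and recency of the last violation) are assumptions chosen for the example, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def violation_potential(history_rate, recency):
    """Fuzzy estimate in [0, 1] of a test case's potential to violate an assertion.

    history_rate: fraction of past runs in which this test violated the assertion.
    recency: 1.0 if the last violation was recent, 0.0 if long ago (or never).
    """
    low_rate  = tri(history_rate, -0.5, 0.0, 0.5)
    high_rate = tri(history_rate,  0.5, 1.0, 1.5)
    old       = tri(recency, -0.5, 0.0, 0.5)
    recent    = tri(recency,  0.5, 1.0, 1.5)

    # Mamdani-style rules (firing strength, crisp output), defuzzified by a
    # weighted average of the rule outputs.
    rules = [
        (min(high_rate, recent), 0.9),  # frequent, recent violations: high priority
        (min(high_rate, old),    0.6),
        (min(low_rate,  recent), 0.4),
        (min(low_rate,  old),    0.1),  # rare, stale violations: low priority
    ]
    total = sum(strength for strength, _ in rules)
    return sum(s * out for s, out in rules) / total if total else 0.0

def prioritize(tests):
    """Sort (name, history_rate, recency) tuples by descending violation potential."""
    return sorted(tests, key=lambda t: violation_potential(t[1], t[2]), reverse=True)
```

A test that has frequently and recently violated the assertion is scheduled first; in a real regression-testing setting the features would be extracted from the logs of previous testing operations.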


2019 ◽ Vol 153 ◽ pp. 86-104 ◽ Author(s): Dipesh Pradhan, Shuai Wang, Shaukat Ali, Tao Yue, Marius Liaaen

2013 ◽ Vol 15 (2) ◽ pp. 258-270 ◽ Author(s): F. Pianosi, A. Castelletti, M. Restelli

Multi-objective Markov decision processes (MOMDPs) provide an effective modeling framework for decision-making problems involving water systems. The traditional approach is to define many single-objective problems (resulting from different combinations of the objectives), each solvable by standard optimization. This paper presents an approach based on reinforcement learning (RL) that can learn the operating policies for all combinations of objectives in a single training process. The key idea is to enlarge the approximation of the action-value function, which single-objective RL performs over the state-action space, to include the space of the objectives' weights. The batch-mode nature of the algorithm allows the training dataset to be enriched without further interaction with the controlled system. The approach is demonstrated on a numerical test case and evaluated on a real-world application, the Hoa Binh reservoir in Vietnam. Experimental results on the test case show that the proposed approach (multi-objective fitted Q-iteration; MOFQI) becomes computationally preferable over the repeated application of its single-objective version (fitted Q-iteration; FQI) when evaluating more than five weight combinations. In the Hoa Binh case study, the operating policies computed with MOFQI and FQI have comparable efficiency, while MOFQI provides a continuous approximation of the Pareto frontier with no additional computing costs.
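The key idea, learning one action-value function over both actions and objective weights so that any weight combination can be evaluated without retraining, can be illustrated with a heavily simplified one-step analogue. The two actions, their two-objective reward values, and the per-action linear fit below are assumptions invented for the sketch; the actual MOFQI algorithm performs batch-mode fitted Q-iteration on a multi-step MDP.

```python
import random

REWARDS = {  # assumed expected (flood-safety, hydropower) reward per action
    "release_water": (1.0, 0.2),
    "store_water":   (0.3, 0.9),
}

def collect_batch(n=200, noise=0.05, seed=0):
    """Sample one batch of (weight, action, scalarized reward) tuples."""
    rng = random.Random(seed)
    batch = []
    for _ in range(n):
        w = rng.random()                    # weight on objective 1; objective 2 gets 1 - w
        a = rng.choice(sorted(REWARDS))
        r1, r2 = REWARDS[a]
        r = w * r1 + (1 - w) * r2 + rng.gauss(0, noise)
        batch.append((w, a, r))
    return batch

def fit_q(batch):
    """Per-action least-squares fit of Q(w, a) = alpha + beta * w over weight space."""
    q = {}
    for a in REWARDS:
        pts = [(w, r) for w, act, r in batch if act == a]
        n = len(pts)
        sw  = sum(w for w, _ in pts)
        sr  = sum(r for _, r in pts)
        sww = sum(w * w for w, _ in pts)
        swr = sum(w * r for w, r in pts)
        beta = (n * swr - sw * sr) / (n * sww - sw * sw)
        alpha = (sr - beta * sw) / n
        q[a] = (alpha, beta)
    return q

def best_action(q, w):
    """Greedy policy for any weight combination, with no retraining."""
    return max(q, key=lambda a: q[a][0] + q[a][1] * w)
```

A single batch suffices to answer queries for every weight: `best_action(q, 0.9)` favors flood safety while `best_action(q, 0.1)` favors hydropower, mirroring how MOFQI sweeps the Pareto frontier from one training process.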

