test oracle
Recently Published Documents


TOTAL DOCUMENTS: 74 (FIVE YEARS: 7)

H-INDEX: 11 (FIVE YEARS: 0)

Author(s):  
Chunyan Ma ◽  
Shaoying Liu ◽  
Jinglan Fu ◽  
Tao Zhang

Automatic test oracle generation is a bottleneck in realizing full automation of the entire software testing process. This study proposes a new method that automatically generates the test oracle for a new test input from several historical test cases by using a backpropagation neural network (BPNN) model. Unlike existing test oracle techniques, our method has two steps. First, the values of variables are collected as training data when several historical test inputs are used to execute the program at different breakpoints; the known test oracles (pass or fail) of these test cases are used to classify and label the training data. Second, a new test input is used to execute the program at the same breakpoints, and the trained BPNN prediction model automatically generates its test oracle from the collected values of the variables involved. We conducted an experiment to validate our method, using 113 faulty versions of seven types of programs as experimental objects. Results show that the average prediction accuracy over 74,651 test oracles is 95.8%. Although failed test cases account for less than 5% of the training data, the overall average recall (prediction accuracy on failing test case executions) across all programs is 78.9%. Furthermore, the trained BPNN can reveal not only the impact of the values of individual variables but also the impact of the logical correspondence between variables in test oracle generation.
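The two-step procedure described above can be sketched with a minimal pure-Python backpropagation network. Everything below is an illustrative assumption: the monitored variable snapshots, the pass/fail labels, and the network shape are invented toy data, not the paper's experimental subjects.

```python
import math
import random

random.seed(0)

# Step 1 (training data): each row holds the values of monitored variables
# captured at program breakpoints for one historical test case; the label
# is that test case's known oracle (0 = pass, 1 = fail).
X = [[0.1, 0.2], [0.2, 0.9], [0.4, 0.5], [1.3, 0.4], [1.8, 0.7], [1.1, 0.9]]
y = [0, 0, 0, 1, 1, 1]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    o = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, o

# Plain backpropagation with sigmoid activations and squared-error loss.
lr = 1.0
for _ in range(5000):
    for x, t in zip(X, y):
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                   # output-layer delta
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])     # hidden-layer delta
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

def predict_oracle(x):
    """Step 2: predicted test oracle for a new test input's variable values."""
    return 1 if forward(x)[1] >= 0.5 else 0
```

After training, `predict_oracle` plays the role of the generated test oracle: a fresh execution's breakpoint values go in, a predicted pass/fail verdict comes out.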


2021 ◽  
Author(s):  
Ke Chen ◽  
Yufei Li ◽  
Yingfeng Chen ◽  
Changjie Fan ◽  
Zhipeng Hu ◽  
...  
Keyword(s):  


Graphically rich applications such as games are ubiquitous, and the attractive visual effects of their Graphical User Interfaces (GUIs) offer a bridge between software applications and end-users. However, various types of graphical glitches can arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study of bug reports from game development teams at NetEase Inc. indicates that graphical glitches frequently occur during GUI rendering and severely degrade the quality of graphically rich applications such as video games. Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and checking whether those sequences cause crashes. These techniques require constant human attention to capture non-crashing bugs such as those causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically rich applications. Specifically, we propose GLIB, which builds on a code-based data augmentation technique to detect game GUI glitches. We evaluate GLIB on 20 real-world game apps (with bug reports available), and the results show that GLIB achieves 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB to another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of the 53 bugs reported by GLIB confirmed and fixed so far.
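The core idea behind code-based data augmentation is that labelled glitch screenshots are rare in practice, so glitch samples are synthesized by deliberately corrupting clean frames. The sketch below simulates that idea on a toy pixel grid; the corruption model (a saturated block, as if a texture failed to load) and all sizes are illustrative assumptions, not GLIB's actual fault-injection code.

```python
import random

random.seed(1)

# A "screenshot" is a 2-D grid of grey-scale pixel values in [0, 255].
def make_clean_frame(w=8, h=8):
    return [[120 for _ in range(w)] for _ in range(h)]

def inject_glitch(frame):
    """Synthesize a glitch sample by corrupting a copy of a clean frame."""
    glitched = [row[:] for row in frame]
    h, w = len(glitched), len(glitched[0])
    x0, y0 = random.randrange(w // 2), random.randrange(h // 2)
    for yy in range(y0, y0 + h // 2):
        for xx in range(x0, x0 + w // 2):
            glitched[yy][xx] = 255  # saturated block: "missing texture"
    return glitched

def build_dataset(n=10):
    """Balanced labelled data: 0 = normal frame, 1 = injected glitch."""
    data = []
    for _ in range(n):
        clean = make_clean_frame()
        data.append((clean, 0))
        data.append((inject_glitch(clean), 1))
    return data
```

A classifier trained on such a dataset can then serve as an automated oracle that flags glitched frames during GUI rendering without human inspection.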


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 110
Author(s):  
Mingzhe Zhang ◽  
Yunzhan Gong ◽  
Yawen Wang ◽  
Dahai Jin

A test oracle is a procedure used during testing to determine whether software behaves correctly. One of the most important tasks for a test oracle is choosing which oracle data (the set of variables monitored during testing) to observe. However, most literature on test oracles has focused either on formal specification generation or on automated test oracle construction, and little work exists on supporting oracle data selection. In this paper, we present a path-sensitive approach, PSODS (path-sensitive oracle data selection), to automatically select oracle data for use by expected-value oracles. PSODS ranks paths according to the possibility that potential faults exist in them, and the ranked paths help testers decide which oracle data to consider first. To select oracle data for each path, we introduce quantity and quality analysis of oracle data, which use static analysis to estimate the substitution capability and fault-detection capability of each candidate. Quantity analysis reduces the number of oracle data, and quality analysis ranks oracle data by fault-detection capability. Together, they allow PSODS to reduce the cost of oracle construction and improve fault-detection efficiency and effectiveness. We have implemented our approach and applied it to a real-world project. The experimental results show that PSODS is efficient in helping testers construct test oracles. Moreover, the oracle data sets produced by our approach are more effective and efficient at detecting faults than output-only oracles.
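The quantity/quality pipeline can be sketched as a two-stage filter over one path's candidate variables. The candidate names, the "substitutes" relation (quantity analysis: a variable whose value is derivable from a kept one adds no checking power), and the numeric fault-detection scores (quality analysis) are all made-up inputs here; the paper derives them by static analysis.

```python
def select_oracle_data(candidates, substitutes, scores):
    """Toy PSODS-style selection for the oracle data of a single path."""
    # Quantity analysis: drop any variable substitutable by one already kept.
    kept = []
    for v in candidates:
        if not any(v in substitutes.get(k, ()) for k in kept):
            kept.append(v)
    # Quality analysis: rank survivors by fault-detection capability.
    return sorted(kept, key=lambda v: scores[v], reverse=True)

# Hypothetical candidates for one path of a statistics routine.
candidates = ["sum", "avg", "count", "max_val"]
substitutes = {"sum": {"avg"}}   # avg is recomputable from sum and count
scores = {"sum": 0.9, "avg": 0.8, "count": 0.4, "max_val": 0.6}
```

Here `avg` is pruned in the quantity stage, and the remaining variables are handed to the tester in decreasing order of estimated fault-detection capability.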


Author(s):  
Richard Schumi ◽  
Jun Sun

Compilers are error-prone due to their high complexity. They are relevant not only for general-purpose programming languages but also for many domain-specific languages, and bugs in compilers can potentially put all compiled programs at risk. It is thus crucial that compilers are systematically tested, if not verified. Recently, a number of efforts have been made to formalise and standardise programming language semantics, which can be applied to verify the correctness of the respective compilers. In this work, we present a novel specification-based testing method named SpecTest that better utilises these semantics for testing. By applying an executable semantics as the test oracle, SpecTest can discover deep semantic errors in compilers. Compared to existing approaches, SpecTest is built upon a novel test coverage criterion called semantic coverage, which brings together mutation testing and fuzzing to specifically target less-tested language features. We apply SpecTest to systematically test two compilers, namely the Java compiler and the Solidity compiler. SpecTest improves the semantic coverage of both compilers considerably and reveals multiple previously unknown bugs.
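The "executable semantics as test oracle" idea can be illustrated on a toy language of integer arithmetic: a reference interpreter defines the expected meaning of every program, and a compiler under test is checked against it on fuzzed programs. The AST encoding, the stack-machine compiler, and the fuzzer below are illustrative assumptions, not SpecTest's actual infrastructure for Java or Solidity.

```python
import random

random.seed(2)

def interpret(ast):
    """Executable semantics: the reference meaning of a program."""
    if isinstance(ast, int):
        return ast
    op, lhs, rhs = ast
    l, r = interpret(lhs), interpret(rhs)
    return l + r if op == "+" else l * r

def compile_to_stack(ast, code=None):
    """Compiler under test: emits post-order stack-machine code."""
    if code is None:
        code = []
    if isinstance(ast, int):
        code.append(("push", ast))
    else:
        op, lhs, rhs = ast
        compile_to_stack(lhs, code)
        compile_to_stack(rhs, code)
        code.append((op, None))
    return code

def run_stack(code):
    stack = []
    for op, arg in code:
        if op == "push":
            stack.append(arg)
        else:
            r, l = stack.pop(), stack.pop()
            stack.append(l + r if op == "+" else l * r)
    return stack[0]

def fuzz_ast(depth=3):
    """Random program generator standing in for SpecTest's fuzzing."""
    if depth == 0 or random.random() < 0.3:
        return random.randrange(10)
    return (random.choice("+*"), fuzz_ast(depth - 1), fuzz_ast(depth - 1))

def spec_test(trials=100):
    """Oracle check: compiled result must match the executable semantics."""
    for _ in range(trials):
        prog = fuzz_ast()
        assert run_stack(compile_to_stack(prog)) == interpret(prog)
    return True
```

Any fuzzed program on which the compiled code and the interpreter disagree is a semantic compiler bug; mutating the compiler (or biasing the fuzzer toward rarely exercised constructs) is what semantic coverage then measures.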


2020 ◽  
Vol 69 (3) ◽  
pp. 1050-1063
Author(s):  
Ahmet Esat Genc ◽  
Hasan Sozer ◽  
M. Furkan Kirac ◽  
Baris Aktemur
Keyword(s):  
