Towards a Framework for Differential Unit Testing of Object-Oriented Programs

Author(s):  
Tao Xie,
Kunal Taneja,
Shreyas Kale,
Darko Marinov
Author(s):  
Jinfu Chen,
Patrick Kwaku Kudjo,
Zufa Zhang,
Chenfei Su,
Yuchi Guo,
...  

Finding an effective method for testing object-oriented software (OOS) has proven elusive in the software community due to the rapid development of object-oriented programming (OOP) technology. Although significant progress has been made by previous studies, challenges remain in measuring object distance for OOS using Adaptive Random Testing (ART). This is partly due to the unique features of OOS, such as encapsulation, inheritance, and polymorphism. In previous work, we proposed a new similarity metric, the Object and Method Invocation Sequence Similarity (OMISS) metric, to facilitate multi-class-level testing with ART. In this paper, we extend OMISS by considering method parameters and adding weights, yielding a new distance metric that improves unit testing of OOS. We use the new distance metric to calculate the distance between sets of objects and the distance between the method sequences of test cases. Additionally, we integrate the new metric into ART-based unit testing and apply it to six open-source subject programs. The experimental results show that the proposed method, which takes method parameters into account, outperforms previous methods that do not in the single-method case. Our findings further show that the proposed unit testing approach is a promising direction for software engineers seeking to improve the failure-detection effectiveness of OOS testing.
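The core ART selection rule the abstract builds on can be sketched as follows. This is a hypothetical, simplified illustration: the toy `sequence_distance` below compares method invocation sequences position by position, whereas the paper's OMISS-based metric also measures object distance and weights method parameters. All names here are illustrative, not the paper's implementation.

```python
# Hedged sketch of Adaptive Random Testing (ART) over test cases modeled as
# method invocation sequences: pick the candidate whose minimum distance to
# all already-executed tests is largest, spreading tests across the input space.

def sequence_distance(seq_a, seq_b):
    """Toy distance: fraction of positions where the invoked methods
    (paired with their parameter values) differ, padded to the longer length."""
    longest = max(len(seq_a), len(seq_b))
    if longest == 0:
        return 0.0
    diffs = sum(1 for i in range(longest)
                if i >= len(seq_a) or i >= len(seq_b) or seq_a[i] != seq_b[i])
    return diffs / longest

def art_select(executed, candidates):
    """ART selection rule: choose the candidate test that maximizes the
    minimum distance to every already-executed test."""
    return max(candidates,
               key=lambda c: min(sequence_distance(c, e) for e in executed))

# Each test case is a sequence of (method name, parameter) invocations.
executed = [[("push", 1), ("pop", None)]]
candidates = [[("push", 1), ("pop", None)],                   # identical to executed
              [("push", 2), ("peek", None), ("pop", None)]]   # more distant
print(art_select(executed, candidates))  # selects the second, more distant candidate
```

The design point is that the distance function is pluggable: substituting a richer metric (such as one weighting parameters, as the paper proposes) changes which candidate ART prefers without changing the selection loop.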


Author(s):  
Shadi Banitaan ◽  
Kendall E. Nygard ◽  
Kenneth Magel

Object-oriented software systems contain a large number of modules, which makes unit testing, integration testing, and system testing difficult and challenging. While unit testing aims to show that individual modules work properly and system testing determines whether the whole system meets its specifications, integration testing aims to uncover errors in the interactions between system modules. However, it is generally impossible to test all connections between modules because of time and budget constraints, so it is important to focus testing on the connections presumed to be most error-prone. The goal of this work is to guide software testers to where in a software system to focus when performing integration testing, saving time and resources. This paper proposes a new approach to predict and rank error-prone connections. We use method-level metrics that capture both the dependencies and the internal complexity of methods. We performed experiments on several Java applications and used error-seeding techniques for evaluation. The experimental results show that our approach is effective for selecting the test focus in integration testing.
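The ranking idea described above can be sketched as a score that combines a dependency measure with the internal complexity of the methods on each side of a connection. The scoring formula, metric names, and weights below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of ranking inter-module connections by presumed
# error-proneness: a connection's score grows with how often it is
# exercised (call count) and with the internal complexity of the
# caller and callee methods involved.

def connection_score(calls, caller_complexity, callee_complexity):
    """Higher score => presumed more error-prone => higher test priority.
    A simple product of usage and combined complexity (illustrative)."""
    return calls * (caller_complexity + callee_complexity)

def rank_connections(connections):
    """connections: list of (name, calls, caller_complexity, callee_complexity).
    Returns connection names sorted from most to least error-prone."""
    return [name for name, *metrics in
            sorted(connections,
                   key=lambda c: connection_score(*c[1:]),
                   reverse=True)]

conns = [("Parser->Lexer", 12, 8, 5),   # 12 * (8 + 5) = 156
         ("UI->Logger",     3, 2, 1),   #  3 * (2 + 1) =   9
         ("Core->Parser",   7, 9, 8)]   #  7 * (9 + 8) = 119
print(rank_connections(conns))  # ['Parser->Lexer', 'Core->Parser', 'UI->Logger']
```

Under a testing budget, a tester would then design integration tests for the top-ranked connections first, which is the "test focus" selection the abstract describes.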

