Empirical Study of Test Case and Test Framework Presence in Public Projects on GitHub

2021 ◽  
Vol 11 (16) ◽  
pp. 7250
Author(s):  
Matej Madeja ◽  
Jaroslav Porubän ◽  
Sergej Chodarev ◽  
Matúš Sulír ◽  
Filip Gurbáľ

Automated tests are often considered an indicator of project quality. In this paper, we performed a large-scale analysis of 6.3 M public GitHub projects using Java as the primary programming language. We created an overview of test occurrence in publicly available GitHub projects and of the test frameworks used in them. The results showed that 52% of the projects contain at least one test case. However, a large number of these are example tests that do not represent relevant testing of production code. We also found only a weak correlation between the number of occurrences of the word “test” in different parts of a project (e.g., file paths, file names, file content) and the number of test cases, creation date, date of the last commit, number of commits, or number of watchers. The testing framework analysis confirmed that JUnit is the most used testing framework, with a 48% share. TestNG, considered the second most popular Java unit testing framework, occurred in only 3% of the projects.
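The study's detection of test presence can be illustrated with a minimal sketch. The heuristic below (matching "test" in Java file paths) is our assumption for illustration, not the paper's exact detection rule:

```python
def looks_like_test_file(path: str) -> bool:
    """Heuristic (our assumption, not the paper's exact rule): a file is a
    candidate test file if it is a Java source file with 'test' in its path."""
    return path.endswith(".java") and "test" in path.lower()

def project_has_tests(file_paths) -> bool:
    """A project 'contains tests' if at least one file matches the heuristic."""
    return any(looks_like_test_file(p) for p in file_paths)

paths = [
    "src/main/java/com/example/App.java",
    "src/test/java/com/example/AppTest.java",
    "README.md",
]
print(project_has_tests(paths))                      # True
print(sum(looks_like_test_file(p) for p in paths))   # 1
```

As the abstract notes, such keyword-based counts correlate only weakly with the actual number of test cases, which is why the paper inspects file content as well as paths.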

2014 ◽  
Vol 556-562 ◽  
pp. 6149-6153
Author(s):  
Min Gang Chen ◽  
Wen Bin Zhong ◽  
Wen Jie Chen ◽  
Yun Hu ◽  
Li Zhi Cai

With increasingly fast-paced software releases and updates, research into efficient cloud-based automated testing frameworks has become particularly important. In this paper, we propose an automated testing framework over the cloud. We also describe key technologies for the design of hierarchical test cases and the automatic distribution of test cases in a cloud computing environment. Testing experiments show that our framework can take advantage of on-demand testing resources in the cloud to improve the efficiency of automated testing.
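The automatic distribution of test cases to on-demand resources can be sketched with a local thread pool standing in for cloud nodes. This is a simplification of ours; the paper's actual distribution mechanism is not described in the abstract:

```python
from concurrent.futures import ThreadPoolExecutor

def run_test_case(case_id: int) -> tuple[int, str]:
    # Placeholder for dispatching one test case to a cloud node;
    # here a trivial local check stands in for remote execution.
    return case_id, "pass" if case_id % 5 != 0 else "fail"

def distribute(test_cases, workers=4):
    """Distribute test cases across a pool of workers, standing in for
    on-demand cloud testing resources (our simplification)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test_case, test_cases))

results = distribute(range(10))
print(sum(1 for v in results.values() if v == "pass"))  # 8
```

Scaling `workers` up or down models the elasticity that makes cloud-based execution attractive for large suites.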


Author(s):  
ALLEN PARRISH ◽  
DAVID CORDES

Abstract data types (ADTs) represent the fundamental building blocks of object-oriented software development. There have been a variety of techniques in the literature for testing ADT modules. Virtually all of the proposed techniques have involved testing sequences of ADT operations (e.g., for a stack ADT, test the sequence PUSH; PUSH; POP) to discover defects in their interactions. However, the operations inside an ADT module are really nothing more than conventional procedures and functions. Consequently, it is conceivable that conventional subprogram unit testing techniques can be adapted to test ADT operations. To support such testing techniques, test cases are best designed and expressed in terms of data values. When test cases are integers, for example, expressing a test case is trivial (e.g., ‘253’). However, when test cases are data abstractions (such as stacks), this problem is much more difficult due to the variety of different formats in which a single data abstraction can be legitimately viewed. In this paper, we provide a conceptual framework for applying classical white-box and black-box unit testing techniques to ADT operations. We then use this framework to develop a collection of guidelines for determining the best format for test case design, given different module characteristics and testing techniques.
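The operation-sequence style of ADT testing described above can be sketched with a stack module and a unit test. The `Stack` class and the test are ours, written for illustration only:

```python
import unittest

class Stack:
    """Minimal stack ADT standing in for the modules under test."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

class StackSequenceTest(unittest.TestCase):
    def test_push_push_pop(self):
        # Operation-sequence test (PUSH; PUSH; POP): defects in the
        # interaction of operations surface in the observed values.
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)   # LIFO order
        self.assertFalse(s.is_empty()) # one element remains

unittest.main(argv=["stack_test"], exit=False)
```

Note that the expected result here is expressed as concrete data values (the popped integer, the emptiness flag), which is exactly the test-case-format question the paper's framework addresses for richer abstractions.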


2022 ◽  
pp. 671-686
Author(s):  
Manoj Kumar Pachariya

This article presents an empirical study of multi-criteria test case prioritization. A test case prioritization problem with time constraints is solved using the ant colony optimization (ACO) approach. ACO is a meta-heuristic, nature-inspired approach, applied here to a statement-coverage-based test case prioritization problem. The proposed approach ranks test cases using statement coverage as the fitness criterion and execution time as a constraint. The approach is implemented in MATLAB and validated on a widely used benchmark dataset, freely available from the Software-artifact Infrastructure Repository (SIR). The results of the experimental study show that the proposed ACO-based approach provides a near-optimal solution to the test case prioritization problem.
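The problem the ACO search tackles can be illustrated with a much simpler greedy additional-coverage baseline under a time budget. This is not the paper's ACO algorithm, only a sketch of the objective it optimizes (maximize statement coverage, constrain execution time):

```python
def prioritize(tests, time_budget):
    """Greedy additional-statement-coverage baseline (a simple stand-in for
    the paper's ACO search): repeatedly pick the test that covers the most
    not-yet-covered statements and still fits the remaining time budget."""
    covered, order, remaining = set(), [], dict(tests)
    while remaining:
        # Rank by new statements gained, breaking ties by lower cost.
        best = max(remaining,
                   key=lambda t: (len(remaining[t][0] - covered), -remaining[t][1]))
        stmts, cost = remaining.pop(best)
        if cost <= time_budget and stmts - covered:
            order.append(best)
            covered |= stmts
            time_budget -= cost
    return order, covered

tests = {
    "t1": ({1, 2, 3}, 2),  # (statements covered, execution time)
    "t2": ({3, 4}, 1),
    "t3": ({5}, 4),
}
order, covered = prioritize(tests, time_budget=4)
print(order, sorted(covered))  # ['t1', 't2'] [1, 2, 3, 4]
```

ACO improves on such greedy orderings by letting artificial ants explore many candidate orderings and reinforce good ones via pheromone trails, which is where the near-optimal results reported above come from.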


2020 ◽  
Vol 8 (2) ◽  
pp. 23-37
Author(s):  
Manoj Kumar Pachariya

This article presents an empirical study of multi-criteria test case prioritization. A test case prioritization problem with time constraints is solved using the ant colony optimization (ACO) approach. ACO is a meta-heuristic, nature-inspired approach, applied here to a statement-coverage-based test case prioritization problem. The proposed approach ranks test cases using statement coverage as the fitness criterion and execution time as a constraint. The approach is implemented in MATLAB and validated on a widely used benchmark dataset, freely available from the Software-artifact Infrastructure Repository (SIR). The results of the experimental study show that the proposed ACO-based approach provides a near-optimal solution to the test case prioritization problem.


2014 ◽  
Vol 13 (4) ◽  
pp. 4405-4415
Author(s):  
Deepali Diwase ◽  
Pujashree Vidap

Web services are a popular solution for implementing software in every business domain. Composite web services can be created by combining basic web services. Many unreliable web services are deployed on the Internet; hence, testing is required to ensure reliability. Software testers face great challenges in testing web services, since the source code of a web service is unavailable. A testing framework is therefore needed that can test web services without knowledge of their internal structure. In this paper, we propose a Testing Framework for Composite Web Services (TFCWS). It generates a report showing the total number of test cases executed for each web service, with the pass or fail status of each test case. It also calculates the throughput of each web service and the response time of each test case. We used web service response times to compare TFCWS with SoapUI and Storm.
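The per-test-case response-time and throughput measurements TFCWS reports can be sketched as a small black-box harness. The `invoke` callable stands in for a real web-service call, since TFCWS's internals are not public:

```python
import time

def run_suite(invoke, test_cases):
    """Black-box harness: record pass/fail and response time per test case,
    plus throughput (test cases per second) for one service. `invoke` is a
    stand-in for a real web-service call (our assumption)."""
    report, start = [], time.perf_counter()
    for name, args, expected in test_cases:
        t0 = time.perf_counter()
        actual = invoke(*args)
        elapsed = time.perf_counter() - t0
        report.append((name, "pass" if actual == expected else "fail", elapsed))
    total = time.perf_counter() - start
    throughput = len(test_cases) / total if total else float("inf")
    return report, throughput

# A local function stands in for a live service endpoint.
report, tput = run_suite(lambda a, b: a + b, [
    ("add_ok", (2, 3), 5),
    ("add_bad", (2, 2), 5),
])
print([(n, s) for n, s, _ in report])  # [('add_ok', 'pass'), ('add_bad', 'fail')]
```

Because only inputs, outputs, and timings are observed, the harness needs no knowledge of the service's internal structure, which is the constraint the paper emphasizes.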


2014 ◽  
Vol 490-491 ◽  
pp. 1617-1623
Author(s):  
R. Deeptha ◽  
Rajeswari Mukesh

As web services connect modules within and across enterprises, dynamically and rigorously testing web services has become crucial. Comprehensive functional, performance, interoperability, and vulnerability testing form the pillars of web services testing. Only by adopting a comprehensive testing strategy can enterprises ensure that their web services are robust, scalable, interoperable, and secure. The overall functionality of a web service is easy to test, but only if we methodically test the application's components (services) before combining them into the complete application. Current web service technology includes various testing tools for manipulating and generating test cases, but these tools and approaches compromise security and execution time and consume excessive resources. Existing methodologies generate test cases only for low-end web services and a limited number of requests; because of these constraints, we built a new testing framework. In this paper, we introduce a new basis for testing the actions, scripts, and links of web services through test cases. Our approach uses SOAP web services with SOA. The test case generation and testing reports give accurate testing results and test cases. These test cases are generated using the Java JUnit testing tool. We implemented our approach on a Java-based platform for efficiency and security.
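The abstract does not describe the generation rules, but one classic way to derive test cases for a service operation is boundary-value analysis over its parameter ranges. The sketch below is a generic illustration of that technique, not the paper's actual method:

```python
def boundary_cases(lo, hi):
    """Classic boundary-value inputs for an integer parameter in [lo, hi]:
    just outside, on, and just inside each boundary, plus a nominal value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def generate_tests(params):
    """Vary one parameter at a time over its boundary values while holding
    the others at nominal values (a generic strategy; parameter names and
    ranges here are hypothetical)."""
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in params.items()}
    cases = []
    for name, (lo, hi) in params.items():
        for v in boundary_cases(lo, hi):
            case = dict(nominal)
            case[name] = v
            cases.append(case)
    return cases

cases = generate_tests({"quantity": (1, 100)})
print(len(cases))  # 7
```

Each generated case would then be emitted as a JUnit test method invoking the SOAP operation with those inputs.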


2019 ◽  
Vol 6 (6) ◽  
pp. 645
Author(s):  
Arlinta Christy Barus ◽  
Leo Siburian

Testing is a mandatory phase of the software development process. It is performed to avoid bugs that may exist in the software. There are many test cases to be executed in the testing process to make sure the software runs according to its specification and without bugs. Testing done manually takes a long time and extra work; therefore, automated testing is important to consider as a replacement for manual testing. Automated testing is the use of testing tools or testing frameworks to test software, significantly reducing the time required for testing. There are many tools that can be used to automate the testing of Android mobile applications, including Selendroid, Calabash, and UI Automator. This paper presents a comparative study of automated testing tools on Android applications using Selendroid, Calabash, and UI Automator. Experiments were conducted to determine the strengths and weaknesses of each tool. Based on this study, we recommend UI Automator as the handiest tool in terms of installation and the execution of test cases when testing Android mobile applications.


Teknologi ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 1
Author(s):  
Moh Arsyad Mubarak Setyawan ◽  
Fajar Pradana ◽  
Bayu Priyambadha

Software testing is one of the most important parts of building software, and it includes unit testing. Unit testing is the process of verifying components, focusing on the smallest units of the software design. The unit testing phase includes a test case generation process. Until now, generating test cases from program code has been done manually, which takes a long time because of the many possible paths through the source code under test. In this study, we built an automated system to generate test cases. The system's workflow starts with analyzing the source code using the Spoon library, then constructs a CFG (Control Flow Graph) and a DDG (Dynamic Directed Graph). From the DDG, feasible paths are generated; a genetic algorithm is used to optimize the selection of independent paths, each of which is distinguished by its degree of uniqueness relative to the other paths. From each independent path, a test case is generated using a test case generation method. Evaluating the system's accuracy with population sizes of 5, 10, and 15 and maximum generation counts of 50, 100, 200, and 250 showed that the most optimal population size is 10 and the most optimal maximum generation count is 200, with an accuracy of 93.33%. Increasing the population size and maximum generation count improves the system's accuracy, but beyond a population of 10 and a maximum generation of 200 there is no significant further increase.
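The role of the genetic algorithm in this pipeline can be sketched with a toy version: individuals are small sets of test inputs, and fitness is the number of distinct branches they exercise in a tiny program standing in for the CFG/DDG machinery. All names and the program under test are ours, for illustration only:

```python
import random

random.seed(7)

def branches_covered(x):
    """Toy program under test: record which branches input x exercises
    (a stand-in for the CFG/DDG path analysis described above)."""
    covered = set()
    covered.add("b1_true" if x > 0 else "b1_false")
    covered.add("b2_true" if x % 2 == 0 else "b2_false")
    covered.add("b3_true" if x > 50 else "b3_false")
    return covered

def genetic_search(pop_size=10, generations=200):
    """Minimal GA: an individual is a 3-input test suite; fitness is the
    number of distinct branches the suite covers; mutation perturbs one input."""
    pop = [[random.randint(-100, 100) for _ in range(3)] for _ in range(pop_size)]
    fitness = lambda ind: len(set().union(*(branches_covered(x) for x in ind)))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        mutant = list(pop[0])
        mutant[random.randrange(3)] = random.randint(-100, 100)
        pop[-1] = mutant  # replace the worst individual with a mutant of the best
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = genetic_search()
print(score)  # best fitness found, between 3 and 6 branches
```

The study's tuning question (population size vs. maximum generations) maps directly onto the `pop_size` and `generations` parameters here: past a certain point, extra generations stop buying additional coverage.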


Author(s):  
KAI H. CHANG ◽  
JAMES H. CROSS II ◽  
W. HOMER CARLISLE ◽  
SHIH-SUNG LIAO

Software testing is an important step in the development of complex systems. The construction of test cases using traditional methods usually requires considerable manual effort. QUEST/Ada—Query Utility Environment for Software Testing of Ada, is a prototype test case generation system that uses various heuristics-based approaches to generate test cases. The system, which is designed for unit testing, generates test cases by monitoring the branch coverage progress and intelligently modifying existing test cases to achieve additional coverage. Three heuristics-based approaches along with a random test case generation method were studied to compare their branch coverage performance. Although some constraints are imposed by the prototype, the framework provides a useful foundation for further heuristics-based test case generation research. The design of the system, the heuristic rules used in the system, and an evaluation of each rule’s performance are presented.
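QUEST/Ada's core loop (generate test cases, monitor branch coverage, modify existing cases to reach uncovered branches) can be sketched roughly as follows. The program under test and the alternation between fresh and mutated cases are our simplifications, not the tool's actual heuristic rules:

```python
import random

random.seed(1)

def program(a, b):
    """Unit under test; returns which branch it executed."""
    if a > b:
        return "gt"
    elif a == b:
        return "eq"
    return "lt"

def generate(goal={"gt", "eq", "lt"}, attempts=500):
    """Random generation plus a QUEST-style step: alternate between fresh
    random test cases and mutations of already-kept cases, keeping any case
    that reaches a new branch (a rough sketch of coverage monitoring)."""
    covered, suite = set(), []
    for i in range(attempts):
        if suite and i % 2:  # mutate a kept test case
            a, b = random.choice(suite)
            a += random.randint(-2, 2)
        else:                # draw a fresh random test case
            a, b = random.randint(-10, 10), random.randint(-10, 10)
        branch = program(a, b)
        if branch not in covered:
            covered.add(branch)
            suite.append((a, b))
        if covered == goal:
            break
    return suite, covered

suite, covered = generate()
print(sorted(covered))  # typically all of ['eq', 'gt', 'lt']
```

Mutation earns its keep on the hard-to-hit `eq` branch: nudging a kept case's input toward the boundary is far more likely to satisfy `a == b` than fresh random sampling, which mirrors the coverage advantage the heuristic approaches showed over pure random generation.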


2021 ◽  
Vol 26 (4) ◽  
Author(s):  
Man Zhang ◽  
Bogdan Marculescu ◽  
Andrea Arcuri

Nowadays, RESTful web services are widely used for building enterprise applications. REST is not a protocol, but rather it defines a set of guidelines on how to design APIs to access and manipulate resources using HTTP over a network. In this paper, we propose an enhanced search-based method for automated system test generation for RESTful web services, by exploiting domain knowledge on the handling of HTTP resources. The proposed techniques use domain knowledge specific to RESTful web services and a set of effective templates to structure test actions (i.e., ordered sequences of HTTP calls) within an individual in the evolutionary search. The action templates are developed based on the semantics of HTTP methods and are used to manipulate the web services’ resources. In addition, we propose five novel sampling strategies with four sampling methods (i.e., resource-based sampling) for the test cases that can use one or more of these templates. The strategies are further supported with a set of new, specialized mutation operators (i.e., resource-based mutation) in the evolutionary search that take into account the use of these resources in the generated test cases. Moreover, we propose a novel dependency handling to detect possible dependencies among the resources in the tested applications. The resource-based sampling and mutations are then enhanced by exploiting the information of these detected dependencies. To evaluate our approach, we implemented it as an extension to the EvoMaster tool, and conducted an empirical study with two selected baselines on 7 open-source and 12 synthetic RESTful web services. Results show that our novel resource-based approach with dependency handling obtains a significant improvement in performance over the baselines, e.g., up to +130.7% relative improvement (growing from +27.9% to +64.3%) on line coverage.
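The idea of structuring test actions with templates based on HTTP-method semantics can be sketched as follows. The template names, resource path, and the mutation operator here are hypothetical illustrations of the concept, not EvoMaster's actual API:

```python
import random

random.seed(42)

# Hypothetical action templates over one REST resource, in the spirit of
# "structure test actions as ordered sequences of HTTP calls": create the
# resource first, then act on it, then observe the result.
TEMPLATES = {
    "POST-GET":        ["POST", "GET"],
    "POST-PUT-GET":    ["POST", "PUT", "GET"],
    "POST-DELETE-GET": ["POST", "DELETE", "GET"],
}

def instantiate(template_name, resource):
    """Turn a template into an ordered sequence of HTTP calls on a resource."""
    return [(verb, resource) for verb in TEMPLATES[template_name]]

def mutate(actions, resource):
    """A resource-based mutation sketch: swap the sequence for a different
    template on the same resource (one flavor of the specialized operators
    described above)."""
    current = [verb for verb, _ in actions]
    others = [name for name, verbs in TEMPLATES.items() if verbs != current]
    return instantiate(random.choice(others), resource)

test = instantiate("POST-GET", "/api/items/{id}")
print(test)  # [('POST', '/api/items/{id}'), ('GET', '/api/items/{id}')]
```

Ordering calls this way keeps generated tests semantically sensible (no GET before the resource exists), which is the domain knowledge the search exploits; dependency handling extends the same idea across multiple related resources.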

