Optimal Release Policy for Multi-Release Software System

Author(s):  
Anu G. Aggarwal ◽  
Chandra K. Jaggi ◽  
Nidhi Nijhawan

In the software industry, multi-release development is a recent phenomenon that brings the benefits of newer technologies while retaining quality. In this paper, it is assumed that development of the next version or release starts immediately after the launch of the previous one, and that field testing of each version continues after its release, so that undetected faults from the previous version, along with faults introduced in the latest version, are detected during testing of the new software code. Today's dynamic customers demand timely upgrades; therefore, to sustain user growth and satisfaction, it is imperative for developers to know the appropriate time to launch upgraded software into the market. In this paper, an optimal release policy for multi-release software systems is proposed that takes both the testing and the operational phase into consideration. A numerical example illustrates the optimal release policy, parameters are estimated using a real-life fault data set, and goodness-of-fit curves are drawn.
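The abstract does not give the paper's model, but the cost trade-off it describes can be sketched with a classic Goel-Okumoto SRGM and a standard cost structure (fault fixes are cheaper during testing than in the field, while testing itself costs money per unit time). All parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative sketch (not the paper's model): optimal release time under the
# classic Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t)), with
# cost c1 per fault fixed during testing, c2 (> c1) per fault fixed in the
# field, and c3 per unit of testing time.
a, b = 100.0, 0.05          # assumed total fault content and detection rate
c1, c2, c3 = 1.0, 5.0, 0.2  # assumed cost coefficients

def m(t):
    """Expected number of faults detected by testing time t."""
    return a * (1.0 - np.exp(-b * t))

def total_cost(t):
    """Testing-phase fixes + field-phase fixes + testing effort cost."""
    return c1 * m(t) + c2 * (a - m(t)) + c3 * t

# Grid search over a planning horizon; setting dC/dt = 0 also gives the
# closed-form interior optimum t* = ln(a*b*(c2 - c1)/c3) / b.
grid = np.arange(0.0, 365.0, 0.01)
t_star = grid[np.argmin(total_cost(grid))]
t_closed = np.log(a * b * (c2 - c1) / c3) / b
```

With these numbers both approaches agree on a release time of roughly 92 days; raising the field-fix cost c2 pushes the optimal release later, as expected.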

Author(s):  
Abhishek Tandon ◽  
Anu G. Aggarwal ◽  
Nidhi Nijhawan

In an environment of intense competition, software upgrades have become a necessity for survival in the software industry. In this paper, the authors propose a discrete software reliability growth model (SRGM) for software with successive releases, under the realistic assumption that the fault removal rate (FRR) may not remain constant during testing: it changes with the severity of the faults detected and with the strategies adopted by the testing team, and the time point at which the FRR changes is called the change point. Many researchers have developed SRGMs incorporating the change-point concept for single-release software; the proposed model extends change-point reliability modeling to multi-release software. A discrete logistic distribution function is used to model the relationship between feature enhancement and fault removal, which helps in developing a flexible SRGM that is S-shaped in nature. To evaluate the proposed SRGM, parameters are estimated using a real-life data set for software with four releases, and the goodness-of-fit of the model is analyzed.
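The change-point idea can be sketched with a simple discrete exponential-type SRGM (not the authors' exact model, which uses a discrete logistic removal rate): the expected cumulative fault count moves toward the total fault content, with the removal rate switching value at the change point. All numbers are assumed for illustration.

```python
# Illustrative sketch: discrete SRGM recursion m[n+1] = m[n] + b_n * (a - m[n]),
# where the fault removal rate b_n changes at test run n_cp (the change point).
a = 120.0                        # assumed total fault content
b_before, b_after = 0.03, 0.08   # assumed FRR before/after the change point
n_cp, runs = 25, 200             # assumed change point and number of test runs

m = [0.0]                        # expected cumulative faults removed
for n in range(runs):
    b_n = b_before if n < n_cp else b_after
    m.append(m[-1] + b_n * (a - m[-1]))
```

The sequence is non-decreasing and saturates just below a; replacing the constant rates with a discrete logistic function of the run index would produce the S-shaped growth the abstract describes.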


2021 ◽  
Vol 71 (5) ◽  
pp. 1291-1308
Author(s):  
Joseph Thomas Eghwerido ◽  
Friday Ikechukwu Agu

Abstract This article proposes a class of generators for classical statistical distributions, called the shifted Gompertz-G (SHIGO-G) family, for generating new continuous distributions. Special models of the proposed family are examined together with some of its statistical properties, which are available in closed form, making it tractable for censored data. Its major properties include heavy tails and approximately symmetric, left-skewed, and right-skewed shapes, arising from the combination of an exponential and a reverted Gumbel distribution known as the Gompertz. The bivariate SHIGO-G is introduced. The parameters of the proposed model are estimated by the maximum likelihood method. A Monte Carlo simulation study investigates the performance of the estimators in terms of mean, variance, bias, and mean square error. Two real-life illustrations examine the empirical goodness-of-fit of the proposed model; the results show that the SHIGO-G model provides a better fit for the data sets used.
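The full SHIGO-G construction is not given in the abstract, but its building block, the shifted Gompertz distribution, has the closed-form CDF F(x) = (1 - e^(-bx)) e^(-eta e^(-bx)). A minimal sketch, with assumed parameter values, samples from it by numerical inversion and checks the probability integral transform:

```python
import numpy as np

# Sketch of the baseline shifted Gompertz distribution only (the SHIGO-G
# generator itself is not reproduced here). Parameters b, eta are assumed.
b, eta = 1.5, 2.0

def cdf(x):
    return (1.0 - np.exp(-b * x)) * np.exp(-eta * np.exp(-b * x))

def ppf(u, iters=60):
    """Quantile function via bisection; the CDF is strictly increasing on (0, inf)."""
    lo = np.zeros_like(u)
    hi = np.full_like(u, 50.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        below = cdf(mid) < u
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
sample = ppf(rng.uniform(size=20_000))
# Probability integral transform: cdf(sample) should be ~Uniform(0, 1),
# the kind of sanity check a Monte Carlo study of the estimators relies on.
```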


Author(s):  
P. K. Kapur ◽  
Nitin Sachdeva

For the past several decades, reliability has been regarded as the most important characteristic of any complex software system. A developer's assurance of high-quality, reliable software rests on efficient reliability assessment, and since the 1970s unprecedented growth has been observed in the area of software reliability growth modeling. Software reliability growth models (SRGMs) are linked to the testing stage of software development and provide insights into ways to improve system reliability and into the optimal time to release the software. Several SRGMs have been proposed in the literature to model the fault identification/removal phenomenon over time; the search for more efficient and accurate models that can fit a greater number of reliability growth curves is endless. Categorization of faults lying in the software has been widely studied, and efforts have also been made to understand reliability issues in modular software systems. In this paper we examine three different types of faults lying in complex software and study their behavior in an N-module software system. To attain the twofold objective of maximizing the reliability of such a system while minimizing overall debugging cost, we propose two related optimization models. The built-in model flexibility accommodates different environments. The model is validated on a real-life software failure data set to show its goodness of fit and applicability.


2018 ◽  
Vol 15 (02) ◽  
pp. 1850011 ◽  
Author(s):  
Nidhi Nijhawan ◽  
Anu G. Aggarwal ◽  
Vikas Dhaka

A number of software reliability growth models have been reported in the literature for open source software (OSS) systems, but the effect of upgrades on the reliability growth of multiple releases of such systems has been discussed by only a few. In this paper, a discrete modeling framework is proposed to study the reliability growth process of OSS systems with multiple releases. The proposed model is based on the assumption that during an upgrade some new faults are introduced into the code, in addition to the leftover fault content of the previous version. To validate the model, multi-release failure datasets from two successful open source projects, Mozilla and Apache, were chosen. Graphs representing the goodness of fit of the proposed model are drawn; the parameter estimates and goodness-of-fit criteria suggest that the proposed software reliability growth model for multi-release OSS fits the actual datasets very well. An optimal release policy is formulated that accounts for the cost of fault removal during the testing and operational phases and for reliability targets pre-specified by the decision makers. In addition, a numerical example along with a sensitivity analysis illustrates the optimal release policy.


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust SPFs in the HSM for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs to a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess overall quality of calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended to comprehensively assess the quality of the calibrated intersection SPFs.
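Two of the ingredients named above, the HSM calibration factor and the GOF metrics fed into the index, are simple to compute. A minimal sketch on made-up site data (the index weighting itself is the paper's contribution and is not reproduced):

```python
import numpy as np

# Sketch of the HSM calibration step plus two GOF metrics discussed in the
# paper: the calibration factor C = sum(observed) / sum(predicted), the mean
# absolute deviation of calibrated predictions, and the cumulative residuals
# that a CURE plot graphs. All data values are synthetic.
observed = np.array([3.0, 1.0, 4.0, 0.0, 2.0, 5.0])   # crashes per site (made up)
predicted = np.array([2.0, 1.5, 3.0, 0.5, 2.5, 4.0])  # base SPF predictions (made up)

C = observed.sum() / predicted.sum()        # HSM calibration factor
calibrated = C * predicted
mad = np.abs(observed - calibrated).mean()  # mean absolute deviation
cure = np.cumsum(observed - calibrated)     # CURE plot y-values
```

By construction the last cumulative residual is zero (the calibration factor forces totals to match), which is why CURE assessment focuses on excursions of the curve outside a confidence band rather than on its endpoint.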


2021 ◽  
Vol 503 (2) ◽  
pp. 2688-2705
Author(s):  
C Doux ◽  
E Baxter ◽  
P Lemos ◽  
C Chang ◽  
A Alarcon ◽  
...  

ABSTRACT Beyond ΛCDM, physics or systematic errors may cause subsets of a cosmological data set to appear inconsistent when analysed assuming ΛCDM. We present an application of internal consistency tests to measurements from the Dark Energy Survey Year 1 (DES Y1) joint probes analysis. Our analysis relies on computing the posterior predictive distribution (PPD) for these data under the assumption of ΛCDM. We find that the DES Y1 data have an acceptable goodness of fit to ΛCDM, with a probability of finding a worse fit by random chance of p = 0.046. Using numerical PPD tests, supplemented by graphical checks, we show that most of the data vector appears completely consistent with expectations, although we observe a small tension between large- and small-scale measurements. A small part (roughly 1.5 per cent) of the data vector shows an unusually large departure from expectations; excluding this part of the data has negligible impact on cosmological constraints, but does significantly improve the p-value to 0.10. The methodology developed here will be applied to test the consistency of DES Year 3 joint probes data sets.
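The PPD machinery can be illustrated on a toy model (a deliberately simplified analogue, not the DES pipeline): draw parameters from the posterior, simulate replicated data sets, and report the fraction of replications whose discrepancy statistic exceeds the observed one.

```python
import numpy as np

# Toy posterior predictive check: data modeled as N(mu, 1) with a flat prior,
# so the posterior is mu | x ~ N(mean(x), 1/n). The discrepancy is a
# chi-square-like statistic T(data, mu) = sum((data - mu)^2).
rng = np.random.default_rng(42)
n, reps = 100, 2000

def ppd_pvalue(x):
    """Fraction of PPD replications with discrepancy >= the observed one."""
    mu_draws = rng.normal(x.mean(), 1.0 / np.sqrt(n), size=reps)
    t_obs = ((x[None, :] - mu_draws[:, None]) ** 2).sum(axis=1)
    x_rep = rng.normal(mu_draws[:, None], 1.0, size=(reps, n))
    t_rep = ((x_rep - mu_draws[:, None]) ** 2).sum(axis=1)
    return float((t_rep >= t_obs).mean())

p_good = ppd_pvalue(rng.normal(0.0, 1.0, size=n))  # data consistent with model
p_bad = ppd_pvalue(rng.normal(0.0, 3.0, size=n))   # inflated scatter -> tiny p
```

A p-value near zero (as for the inflated-scatter data) flags internal inconsistency, mirroring how the p = 0.046 and p = 0.10 figures above are interpreted.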


2019 ◽  
Vol 12 (4) ◽  
pp. 171
Author(s):  
Ashis SenGupta ◽  
Moumita Roy

The aim of this article is to obtain a simple and efficient estimator of the index parameter of the symmetric stable distribution that holds universally, i.e., over the entire range of the parameter. We appeal to the classical result of directional statistics on wrapping a distribution to obtain the wrapped stable family of distributions. The estimator obtained performs better than existing estimators in the literature in terms of both consistency and efficiency. The estimator is applied to model some real-life financial datasets. A mixture of normal and Cauchy distributions is compared with the stable family of distributions when the estimate of the parameter α lies between 1 and 2. A similar approach can be adopted when α (or its estimate) belongs to (0.5, 1); in this case, one may compare with a mixture of Laplace and Cauchy distributions. A new measure of goodness of fit is proposed for the above family of distributions.
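For a wrapped symmetric stable distribution, the p-th trigonometric moment is rho_p = exp(-(sigma*p)^alpha), so two sample moments give a closed-form estimate of the index: alpha = log(log rho_2 / log rho_1) / log 2. The sketch below uses this simple moment estimator (not necessarily the authors' proposal) and checks it on the alpha = 2 boundary case, where the stable law is Gaussian:

```python
import numpy as np

# Trigonometric-moment estimator for the index of a wrapped symmetric stable
# distribution. rho_p = |E[exp(i p theta)]| = exp(-(sigma p)^alpha), hence
# alpha = log(log(rho_2) / log(rho_1)) / log(2), independent of the scale.
rng = np.random.default_rng(7)
# alpha = 2 stable is Gaussian; wrap N(0, 0.7^2) onto the circle.
theta = np.mod(rng.normal(0.0, 0.7, size=100_000), 2.0 * np.pi)

def alpha_hat(theta):
    rho1 = np.abs(np.mean(np.exp(1j * theta)))
    rho2 = np.abs(np.mean(np.exp(2j * theta)))
    return float(np.log(np.log(rho2) / np.log(rho1)) / np.log(2.0))

a_hat = alpha_hat(theta)  # should be close to 2 for Gaussian data
```

Both log rho_1 and log rho_2 are negative, so their ratio is positive and the estimator is well defined whenever the sample moments are not too close to 0 or 1.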

