Separating Predicted Randomness from Residual Behavior

Author(s):  
Jose Apesteguia ◽  
Miguel A. Ballester

Abstract We propose a novel measure of goodness of fit for stochastic choice models, that is, the maximal fraction of data that can be reconciled with the model. The procedure is to separate the data into two parts: one generated by the best specification of the model and another representing residual behavior. We claim that the three elements involved in a separation are instrumental in understanding the data. We show how to apply our approach to any stochastic choice model and then study the case of four well-known models, each capturing a different notion of randomness. We illustrate our results with an experimental data set.
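The separation idea in this abstract can be illustrated numerically. If the observed choice distribution p is written as λ·q + (1 − λ)·r, where q is the model's best specification and r is a residual distribution, then requiring r ≥ 0 pins down the largest feasible λ as min over alternatives of p(x)/q(x). A minimal sketch, with invented choice frequencies (this is an illustration of the decomposition, not the paper's actual estimation procedure):

```python
import numpy as np

def maximal_fraction(p_data, p_model):
    """Largest lambda such that p_data = lambda * p_model + (1 - lambda) * r
    for some probability distribution r >= 0 (requires p_model > 0)."""
    p_data = np.asarray(p_data, dtype=float)
    p_model = np.asarray(p_model, dtype=float)
    return float(np.min(p_data / p_model))

# Hypothetical choice frequencies over three alternatives.
p_data = np.array([0.5, 0.3, 0.2])   # observed choice frequencies
p_model = np.array([0.6, 0.3, 0.1])  # best model specification

lam = maximal_fraction(p_data, p_model)           # fraction reconciled with the model
residual = (p_data - lam * p_model) / (1 - lam)   # residual behavior (sums to 1)
print(lam)       # -> 0.8333...
print(residual)  # -> [0.  0.3 0.7]
```

Here the three elements of a separation are visible at once: the fraction λ, the model part q, and the residual distribution r.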

2016 ◽  
Vol 72 (6) ◽  
pp. 696-703 ◽  
Author(s):  
Julian Henn

An alternative measure to the goodness of fit (GoF) is developed and applied to experimental data. The alternative goodness of fit squared (aGoFs) demonstrates that the GoF regularly fails to provide evidence for the presence of systematic errors, because certain requirements are not met. These requirements are briefly discussed. It is shown that in many experimental data sets a correlation exists between the squared residuals and the variances of the observed intensities. These correlations corrupt the GoF and lead to artificially reduced values of the GoF and of the numerical value of wR(F2). Remaining systematic errors in the data sets are veiled by this mechanism. In data sets where these correlations do not appear for the entire data set, they often appear for the decile of largest variances of observed intensities. Additionally, statistical errors for the squared goodness of fit, GoFs, and for the aGoFs are developed and applied to experimental data. These measures show how significantly the GoFs and aGoFs deviate from the ideal value of one.
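The diagnostic described here can be sketched with the conventional crystallographic goodness of fit, GoF = sqrt(Σ w (Fo² − Fc²)² / (n − p)) with weights w = 1/σ², alongside the correlation between squared residuals and the variances of the observed intensities. The data below are simulated, not from the paper, and the aGoFs itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reflection data: calculated intensities, variances of the
# observed intensities, and observations with noise proportional to sigma.
n, p = 500, 20                        # reflections, refined parameters
sigma2 = rng.uniform(0.5, 5.0, n)     # variances of observed intensities
f_calc = rng.uniform(10.0, 100.0, n)
f_obs = f_calc + rng.normal(0.0, np.sqrt(sigma2))

w = 1.0 / sigma2                      # conventional weights
resid2 = (f_obs - f_calc) ** 2        # squared residuals

# Conventional goodness of fit (ideal value: 1).
gof = np.sqrt(np.sum(w * resid2) / (n - p))

# Diagnostic from the abstract: correlation between squared residuals
# and the variances of the observed intensities.
corr = np.corrcoef(resid2, sigma2)[0, 1]
print(gof, corr)
```

With well-behaved simulated errors the GoF sits near one; the abstract's point is that in real data such residual-variance correlations can push the GoF below its honest value and hide systematic errors.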


1985 ◽  
Vol 22 (4) ◽  
pp. 462-467 ◽  
Author(s):  
Dennis H. Gensch

All disaggregate multiattribute choice models contain the assumption that the population is reasonably homogeneous with respect to the aggregate parameters estimated by the model. The author points out that one particular choice model, logit, has a structure that makes it particularly suited to test a data set for possible segments. A real-world data set is used to illustrate a simple procedure for testing the homogeneity assumption. The analysis provides a warning that managers may easily derive suboptimal or counterproductive strategies if they fail to test this assumption.
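A common way to operationalize such a homogeneity test, in the spirit of this abstract, is a likelihood-ratio comparison of a pooled logit against separate logits per candidate segment. The sketch below uses simulated binary-choice data with deliberately different segment coefficients; the segment split, sample sizes, and coefficients are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def fit_logit(X, y):
    """Fit a binary logit by maximum likelihood; return (beta, log-likelihood)."""
    def nll(beta):
        z = X @ beta
        # Negative log-likelihood of the logit model, numerically stable form.
        return np.sum(np.logaddexp(0.0, z) - y * z)
    res = minimize(nll, np.zeros(X.shape[1]), method="BFGS")
    return res.x, -res.fun

rng = np.random.default_rng(1)

# Two hypothetical segments with different true coefficients.
n = 400
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
y1 = (rng.random(n) < 1 / (1 + np.exp(-(X1 @ np.array([0.5, 1.5]))))).astype(float)
y2 = (rng.random(n) < 1 / (1 + np.exp(-(X2 @ np.array([0.5, -1.5]))))).astype(float)

_, ll_pooled = fit_logit(np.vstack([X1, X2]), np.concatenate([y1, y2]))
_, ll_1 = fit_logit(X1, y1)
_, ll_2 = fit_logit(X2, y2)

# Likelihood-ratio statistic for parameter homogeneity across segments.
lr = 2.0 * (ll_1 + ll_2 - ll_pooled)
p_value = chi2.sf(lr, df=X1.shape[1])
print(lr, p_value)  # a small p-value rejects the homogeneity assumption
```

A pooled model estimated on data like these would report aggregate coefficients that describe neither segment, which is exactly the strategic hazard the abstract warns managers about.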


Materials ◽  
2018 ◽  
Vol 11 (9) ◽  
pp. 1585 ◽  
Author(s):  
Vito Cedro ◽  
Christian Garcia ◽  
Mark Render

Advanced power plant alloys must endure high temperatures and pressures for durations at which creep data are often not available, necessitating the extrapolation of creep life. Many methods have been proposed to extrapolate creep life, and one of recent significance is a set of equations known as the Wilshire equations. With this method, multiple approaches can be used to determine creep activation energy, increase the goodness of fit of available experimental data, and improve the confidence level of calculating long-term creep strength at times well beyond the available experimental data. In this article, the Wilshire equation is used to extrapolate the creep life of HR6W and Sanicro 25, and different methods to determine creep activation energy, region splitting, the use of short-duration test data, and the omission of very-short-term data are investigated to determine their effect on correlation and calculations. It was found that using a known value of the activation energy of lattice self-diffusion, rather than calculating Qc* from each data set, is both the simplest and most viable method to determine Qc*. Region splitting improved rupture time calculations for both alloys. Extrapolating creep life from short-term data for these alloys was found to be reasonable.
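The Wilshire rupture equation, σ/σ_TS = exp(−k1 [t_f exp(−Q/RT)]^u), linearizes to ln(−ln(σ/σ_TS)) = ln k1 + u·ln(t_f exp(−Q/RT)), so k1 and u can be fitted by least squares once Q is fixed (here, following the abstract's finding, taken as a known lattice self-diffusion value rather than refitted). The stress ratios, rupture times, temperatures, and Q value below are invented for illustration, not HR6W or Sanicro 25 data:

```python
import numpy as np

R = 8.314    # gas constant, J/(mol K)
Q = 300e3    # assumed activation energy (lattice self-diffusion), J/mol

# Hypothetical short-term creep rupture data: stress ratio sigma/sigma_TS,
# rupture time t_f (h), and absolute temperature T (K).
ratio = np.array([0.70, 0.60, 0.50, 0.40, 0.35])
t_f = np.array([1e2, 5e2, 3e3, 2e4, 8e4])
T = np.array([923.0, 923.0, 948.0, 948.0, 973.0])

# Linearized Wilshire equation: ln(-ln(ratio)) = ln(k1) + u * ln(t_f * exp(-Q/(R*T)))
x = np.log(t_f * np.exp(-Q / (R * T)))
y = np.log(-np.log(ratio))
u, ln_k1 = np.polyfit(x, y, 1)
k1 = np.exp(ln_k1)

def rupture_time(stress_ratio, temp_K):
    """Invert the fitted Wilshire equation to extrapolate rupture time (h)."""
    return ((-np.log(stress_ratio)) / k1) ** (1.0 / u) * np.exp(Q / (R * temp_K))

print(k1, u)
print(rupture_time(0.25, 898.0))  # long-term extrapolation beyond the data
```

Region splitting, in this framework, amounts to fitting separate (k1, u) pairs above and below a break in the stress-ratio range rather than one global line.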


1995 ◽  
Vol 27 (8) ◽  
pp. 1303-1315 ◽  
Author(s):  
J-C Thill

Contrary to many other types of spatial decisions, shopping destination choice behavior is highly repetitive. For the practitioner looking for good predictors of store patronage, for reliable marginal utility estimates, and for reliable market share predictions, a central concern is the type of data best suited to the research question, given the existing logistic and financial constraints. Different approaches can be recognized in the literature in which conventional discrete choice models are applied to shopping destination choice problems. In this paper, two of the most common practices are assessed and compared. First, the choice model is estimated with all choices of a relevant destination observed during a certain period of time (pooled cross-sectional data). The alternative approach consists of estimating the model with the choice of the destination where the majority of purchases takes place (cross-sectional data). In the particular data set employed here, no evidence is found to support the idea that a multinomial logit model estimated with cross-sectional data does not perform as well as a model estimated with pooled cross-sectional data. Both models are found to be similar in their ability to identify the main predictors of store choice. Models developed on either data set have marginal utility estimates that exhibit no statistically significant differences. Finally, market share predictions derived from both models are not statistically different. It appears, therefore, that there is no need to collect repeated patronage data over an extended period of time. The practitioner who wishes to use a conventional discrete choice model may avoid spending much time and money by gathering limited data on regular patronage patterns. In addition to this practical implication, the conclusions suggest that regular shopping destinations are chosen in accordance with the same behavioral motives as ancillary destinations.
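The kind of comparison reported here, that marginal utility estimates from the two data types show no statistically significant differences, can be sketched with a Wald-type z-test on a coefficient estimated from each data set. The coefficient values and standard errors below are invented, and the test assumes independent samples, which is only a rough approximation when the cross-sectional data are a subset of the pooled data:

```python
import numpy as np
from scipy.stats import norm

def coef_difference_test(b1, se1, b2, se2):
    """Two-sided z-test for equality of a coefficient estimated on two
    (approximately independent) data sets, using a Wald-type statistic."""
    z = (b1 - b2) / np.sqrt(se1 ** 2 + se2 ** 2)
    return z, 2.0 * norm.sf(abs(z))

# Hypothetical marginal utility of travel time: pooled cross-sectional
# estimate vs. cross-sectional (main-destination) estimate.
z, p = coef_difference_test(-0.042, 0.008, -0.036, 0.011)
print(z, p)  # a large p-value: no significant difference between estimates
```

A battery of such tests across all coefficients, plus a comparison of predicted market shares, is the practical substance of the paper's equivalence claim.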


1992 ◽  
Vol 6 (1-4) ◽  
pp. 257-301 ◽  
Author(s):  
Akimi Serizawa ◽  
Isao Kataoka ◽  
Itaru Michiyoshi
