statistical adequacy
Recently Published Documents

TOTAL DOCUMENTS: 20 (five years: 0)
H-INDEX: 9 (five years: 0)

2019 · Vol 55 (5) · pp. 4364-4392
Author(s): Cristina Prieto, Nataliya Le Vine, Dmitri Kavetski, Eduardo García, Raúl Medina

2015 · Vol 7 (4)
Author(s): Leandro Campos Pinto, Pedro Luiz Terra Lima, Sílvio De Castro Silveira, Joel Augusto Muniz, Zélio Resende De Souza, ...

Knowledge of the distribution and behavior of rainfall over a given area is indispensable for the design and management of irrigation systems, as well as for tracking rainfall in connection with soil conservation. The objective of this research was therefore to compare probability distribution models and assess their statistical adequacy for probable rainfall studies, to determine which model is most appropriate for distinct time scales (monthly, fortnightly, and ten-day periods), and to estimate probable rainfall at different probability levels for the Lambari region in the south of Minas Gerais state, Brazil. The three-parameter log-normal and Gamma distributions were the most adequate for the monthly and ten-day periods, while Gamma was the most adequate for the fortnightly periods; the annual average monthly, fortnightly, and ten-day rainfall at the 75% probability level was 82.4, 72.1, and 42.4 mm, respectively.
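As an illustration of the workflow this abstract describes, the minimal Python sketch below fits Gamma and three-parameter log-normal distributions to a rainfall series, checks their statistical adequacy with a Kolmogorov-Smirnov test, and reads off the rainfall depth at the 75% probability level. The synthetic 40-year series and the KS check are illustrative assumptions; the paper's data and tests are not reproduced here.

```python
# Minimal sketch: fit candidate distributions to a monthly rainfall series
# and estimate the 75%-probability rainfall (the depth equaled or exceeded
# in 75% of years). All data below are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
monthly_rain_mm = rng.gamma(shape=2.0, scale=45.0, size=40)  # hypothetical 40-year series

# Fit candidate distributions by maximum likelihood.
gamma_params = stats.gamma.fit(monthly_rain_mm, floc=0)  # Gamma, location pinned at 0
ln3_params = stats.lognorm.fit(monthly_rain_mm)          # lognorm with a free loc acts
                                                         # as a 3-parameter log-normal

# Adequacy check via Kolmogorov-Smirnov (p-values are only approximate
# when the parameters were fitted to the same data).
ks_gamma = stats.kstest(monthly_rain_mm, "gamma", args=gamma_params)
ks_ln3 = stats.kstest(monthly_rain_mm, "lognorm", args=ln3_params)
print(f"Gamma      KS p-value: {ks_gamma.pvalue:.3f}")
print(f"Log-normal KS p-value: {ks_ln3.pvalue:.3f}")

# Probable rainfall at the 75% level: P(X >= x) = 0.75, i.e. the 25th percentile.
x75_gamma = stats.gamma.ppf(0.25, *gamma_params)
x75_ln3 = stats.lognorm.ppf(0.25, *ln3_params)
print(f"75% probable rainfall (Gamma):      {x75_gamma:.1f} mm")
print(f"75% probable rainfall (Log-normal): {x75_ln3:.1f} mm")
```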


2014 · Vol 36 (1) · pp. 45-65
Author(s): Kevin D. Hoover

The significance of Haavelmo’s “The Probability Approach in Econometrics” (1944), the foundational document of modern econometrics, has been interpreted in widely different ways. Some regard it as a blueprint for a provocative (but ultimately unsuccessful) program dominated by the need for a priori theoretical identification of econometric models. Others focus more on statistical adequacy than on theoretical identification. They see its deepest insights as unduly neglected. The present article uses bibliometric techniques and a close reading of econometrics articles and textbooks to trace the way in which the economics profession received, interpreted, and transmitted Haavelmo’s ideas. A key irony is that the first group calls for a reform of econometric thinking that goes several steps beyond Haavelmo’s initial vision; the second group argues that essentially what the first group advocates was already in Haavelmo’s “Probability Approach” from the beginning.


2013 · Vol 6 (1) · pp. 1-9
Author(s): Brian J. Rothschild, Yue Jiao

Attaining maximum sustained yield (MSY) is a central goal in U.S. fisheries management. To attain MSY, fishing mortality is maintained at F_MSY and biomass at B_MSY. Replacing F_MSY and B_MSY with "proxies" is commonplace. However, these proxies are not equivalent to F_MSY and B_MSY. The lack of equivalency is an important issue with regard to whether MSY is attained or whether biomass production is wasted. In this paper we study the magnitude of this discrepancy. We compare F_MSY/B_MSY (calculated using the ASPIC toolbox) with the proxy estimates, F_40%/B_40%, published in GARM III. Our calculations confirm that, in general, the F_MSY/B_MSY calculations differ from the GARM III proxy estimates. The proxy estimates generally indicate that the stocks are overfished and at relatively low biomass, while the ASPIC estimates generally reflect the opposite: the stocks are not overfished and are at relatively high levels of abundance. In comparing the two approaches, the ASPIC estimates appear preferable to the proxy estimates because 1) the ASPIC estimates involve only a few parameters, in contrast to the many parameters estimated in the proxy approach; 2) "real variance" estimates for the proxy are not available, so it is difficult to evaluate the statistical adequacy of the proxy approach relative to the ASPIC approach; and 3) the proxy approach is based on many components (e.g., growth, stock and recruitment) that are subject to considerable uncertainty.
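For context on point 1) above: under the Schaefer surplus-production model that ASPIC fits, the entire MSY reference-point set follows from just two parameters, the intrinsic growth rate r and the carrying capacity K. A minimal sketch, with hypothetical r and K values:

```python
# Schaefer surplus-production reference points.
# Dynamics: dB/dt = r * B * (1 - B / K) - F * B
def schaefer_reference_points(r: float, K: float):
    """Return (MSY, B_MSY, F_MSY) for a Schaefer surplus-production model."""
    b_msy = K / 2.0    # biomass at which surplus production peaks
    f_msy = r / 2.0    # fishing mortality that holds biomass at B_MSY
    msy = r * K / 4.0  # the maximum sustainable yield itself
    return msy, b_msy, f_msy

# Hypothetical stock: r and K are illustrative, not GARM III values.
msy, b_msy, f_msy = schaefer_reference_points(r=0.4, K=100_000.0)
print(f"MSY = {msy:,.0f} t, B_MSY = {b_msy:,.0f} t, F_MSY = {f_msy:.2f}/yr")
```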


2011 · Vol 2011 · pp. 1-20
Author(s): Agostino Tarsitano, Marianna Falcone

We propose a new method of single imputation, reconstruction, and estimation of nonreported, incorrect, implausible, or excluded values in more than one field of the record. In particular, we are concerned with data sets involving a mixture of numeric, ordinal, binary, and categorical variables. Our technique is a variation of the popular nearest neighbor hot deck imputation (NNHDI), where "nearest" is defined in terms of a global distance obtained as a convex combination of the distance matrices computed for the various types of variables. We address the problem of properly weighting the partial distance matrices in order to reflect their significance, reliability, and statistical adequacy. The performance of several weighting schemes is compared under a variety of settings, in coordination with imputation of the least power mean of the Box-Cox transformation applied to the values of the donors. Through analysis of simulated and actual data sets, we show that this approach is appropriate. Our main contribution is to demonstrate that mixed data may be combined optimally, allowing accurate reconstruction of missing values in the target variable even when some data are absent from the other fields of the record.
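A minimal sketch of the global-distance idea: partial distance matrices computed per variable type are combined with convex weights, and the nearest complete record (the donor) supplies the missing value. The toy records, the simple-matching distances, and the weight values are illustrative assumptions; the paper's weighting schemes are more elaborate.

```python
# Nearest neighbor hot deck imputation over a convex combination of
# partial distance matrices, one per variable. Illustrative sketch only.
import numpy as np

def partial_distances(col, kind):
    """Pairwise distances for one variable, scaled to [0, 1]."""
    col = np.asarray(col, dtype=float)
    d = np.abs(col[:, None] - col[None, :])
    if kind == "numeric":
        rng = d.max()
        return d / rng if rng > 0 else d
    return (d > 0).astype(float)  # binary/categorical: simple matching

def nnhdi_impute(columns, kinds, weights, target, missing_idx):
    """Impute target[missing_idx] from the nearest donor under the
    convex combination of the partial distance matrices."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must be convex"
    D = sum(w * partial_distances(c, k)
            for w, c, k in zip(weights, columns, kinds))
    donors = [i for i in range(len(target))
              if i != missing_idx and not np.isnan(target[i])]
    donor = min(donors, key=lambda i: D[missing_idx, i])  # nearest complete record
    return target[donor]

# Hypothetical record set: one numeric, one binary, one categorical field.
age = [23, 45, 31, 40]
urban = [1, 0, 1, 0]
group = [2, 1, 2, 3]  # categorical codes
income = np.array([30.0, 52.0, np.nan, 47.0])  # target with a hole at index 2

imputed = nnhdi_impute([age, urban, group],
                       ["numeric", "binary", "categorical"],
                       weights=[0.5, 0.25, 0.25],
                       target=income, missing_idx=2)
print(f"imputed income: {imputed}")  # value taken from the most similar donor
```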

