USE OF QUANTILE REGRESSION AND RANSAC ALGORITHM IN FITTING VOLUME EQUATIONS UNDER THE INFLUENCE OF DISCREPANT DATA

FLORESTA ◽  
2021 ◽  
Vol 51 (3) ◽  
pp. 596
Author(s):  
Jadson Coelho De Abreu ◽  
Carlos Pedro Boechat Soares ◽  
Helio Garcia Leite ◽  
Daniel Henrique Breda Binoti ◽  
Gilson Fernandes Da Silva

The objective of this study was to evaluate three estimation methods for fitting volume equations in the presence of influential or leverage data. To do so, data from the forest inventory carried out by the Centro Tecnológico de Minas Gerais Foundation were used to fit the Schumacher and Hall (1933) model in its nonlinear form for Cerradão forest, considering quantile regression (QR), the RANSAC algorithm, and the nonlinear Ordinary Least Squares (OLS) method. The correlation coefficient (r) between observed and estimated volumes, the root-mean-square error (RMSE), and graphical analysis of the dispersion and distribution of the residuals were used as criteria to evaluate the performance of the methods. After the analysis, the nonlinear least squares method presented slightly better goodness-of-fit statistics; however, it altered the expected trend of the fitted curve due to the presence of influential data, which did not happen with QR and the RANSAC algorithm, as these were more robust in the presence of discrepant data.
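As a rough illustration of the comparison described above, the sketch below fits the Schumacher and Hall model V = b0·D^b1·H^b2 by nonlinear OLS and by a minimal hand-rolled RANSAC on the log-linearized form. The tree data, the injected outliers, and the inlier threshold are all assumptions for the demonstration, not the forest-inventory data used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n = 200
D = rng.uniform(5, 40, n)                      # diameter at breast height (cm), synthetic
H = rng.uniform(4, 25, n)                      # total height (m), synthetic
V = 6e-5 * D**1.8 * H**1.1 * rng.lognormal(0.0, 0.05, n)  # volume (m^3)
V[:5] *= 3.0                                   # inject discrepant observations

def schumacher_hall(X, b0, b1, b2):
    D, H = X
    return b0 * D**b1 * H**b2

# Nonlinear OLS: minimizes squared error, so it is pulled by the outliers.
ols_params, _ = curve_fit(schumacher_hall, (D, H), V, p0=[1e-4, 2.0, 1.0])

# Minimal RANSAC on the log-linear form ln V = ln b0 + b1 ln D + b2 ln H:
# repeatedly fit a minimal sample, keep the model with the largest
# consensus set, then refit on the inliers only.
X = np.column_stack([np.ones(n), np.log(D), np.log(H)])
y = np.log(V)
best = None
for _ in range(300):
    idx = rng.choice(n, 3, replace=False)
    beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    inliers = np.abs(X @ beta - y) < 0.15      # threshold chosen for illustration
    if best is None or inliers.sum() > best.sum():
        best = inliers
beta_ransac = np.linalg.lstsq(X[best], y[best], rcond=None)[0]
```

Because the RANSAC refit uses only the consensus set, the inflated volumes have little effect on `beta_ransac`, while the OLS parameters are estimated from all observations, discrepant ones included.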

2020 ◽  
Vol 50 (1) ◽  
Author(s):  
Guilherme Alves Puiatti ◽  
Paulo Roberto Cecon ◽  
Moysés Nascimento ◽  
Ana Carolina Campana Nascimento ◽  
Antônio Policarpo Souza Carneiro ◽  
...  

ABSTRACT: The objective of this study was to fit nonlinear quantile regression models for the study of dry matter accumulation in garlic plants over time and to compare them to models fitted by the ordinary least squares method. The total dry matter of nine garlic accessions belonging to the Vegetable Germplasm Bank of Universidade Federal de Viçosa (BGH/UFV) was measured at four stages (60, 90, 120 and 150 days after planting), and those values were used to fit the nonlinear regression models. For each accession, one quantile regression model (τ = 0.5) and one model based on the least squares method were fitted. The nonlinear regression model fitted was the logistic model. The Akaike Information Criterion was used to evaluate the goodness of fit of the models. Accessions were grouped using the UPGMA algorithm, with the estimates of the parameters with biological interpretation as variables. Nonlinear quantile regression is efficient for fitting models of dry matter accumulation in garlic plants over time. The estimated parameters are more uniform and robust in the presence of asymmetry in the distribution of the data, heterogeneous variances, and outliers.
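A minimal sketch of this kind of fit, on made-up dry matter values rather than the BGH/UFV accessions: a logistic curve estimated both by nonlinear least squares and at the median (τ = 0.5) by minimizing Koenker's check (pinball) loss. Parameter meanings and starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize

rng = np.random.default_rng(42)
t = np.repeat([60.0, 90.0, 120.0, 150.0], 5)   # days after planting (synthetic design)
y = 30 / (1 + np.exp(-(t - 100) / 15)) + rng.normal(0, 2, t.size)  # dry matter (g)

def logistic(t, a, b, c):
    # a: asymptotic dry matter, b: inflection time, c: growth scale
    return a / (1 + np.exp(-(t - b) / c))

def check_loss(theta, tau=0.5):
    # Koenker's check loss; tau = 0.5 gives median regression.
    r = y - logistic(t, *theta)
    return np.sum(r * (tau - (r < 0)))

theta0 = [30.0, 100.0, 15.0]
ls_params, _ = curve_fit(logistic, t, y, p0=theta0)            # least squares fit
qr_params = minimize(check_loss, x0=theta0, method="Nelder-Mead").x  # tau = 0.5 fit
```

The check loss is not differentiable at zero residuals, so a derivative-free optimizer such as Nelder-Mead is a simple choice here.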


2002 ◽  
Vol 18 (2) ◽  
pp. 505-524 ◽  
Author(s):  
Alfred A. Haug

The Wald test for linear restrictions on cointegrating vectors is compared in finite samples using the Monte Carlo method. The Wald test is calculated within the vector error-correction based estimation methods of Bewley, Orden, Yang, and Fisher (1994, Journal of Econometrics 64, 3–27) and of Johansen (1991, Econometrica 59, 1551–1580), the canonical cointegration method of Park (1992, Econometrica 60, 119–143), the dynamic ordinary least squares method of Phillips and Loretan (1991, Review of Economic Studies 58, 407–436), Saikkonen (1991, Econometric Theory 7, 1–21), and Stock and Watson (1993, Econometrica 61, 783–820), the fully modified ordinary least squares method of Phillips and Hansen (1990, Review of Economic Studies 57, 99–125), and the band spectral techniques of Phillips (1991, in W. Barnett, J. Powell, & G. E. Tauchen (eds.), Nonparametric and Semiparametric Methods in Economics and Statistics, pp. 413–435). The Wald test performance is also compared to that of the likelihood ratio test suggested by Johansen and Juselius (1990, Oxford Bulletin of Economics and Statistics 52, 169–210) and to a Bartlett correction of that test as proposed by Johansen (1998, A Small Sample Test for Tests of Hypotheses on Cointegrating Vectors, European University Institute).


Author(s):  
Ibrahim Abdullahi ◽  
Abubakar Yahaya

In this article, an alternative to ordinary least squares (OLS) regression based on the analytical solution in the Statgraphics software is considered; this alternative is none other than the quantile regression (QR) model. We also present a goodness-of-fit statistic as well as approximate distributions of the associated test statistics for the parameters. Furthermore, we suggest a goodness-of-fit statistic called the least absolute deviation (LAD) coefficient of determination. The procedure is presented, illustrated, and validated using a numerical example based on a publicly available dataset on fuel consumption in miles per gallon in highway driving.
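The LAD coefficient of determination can be sketched as follows: one minus the ratio of the sum of absolute residuals of the full median regression to that of a median-only model (in the spirit of Koenker and Machado's R1). The data, the IRLS solver, and the variable meanings are assumptions for illustration, not the fuel consumption example from the article.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(1.0, 4.0, 100)                 # hypothetical predictor
y = 40.0 - 6.0 * x + rng.standard_cauchy(100)  # heavy-tailed errors

# Median (tau = 0.5) regression via iteratively reweighted least squares:
# weights 1/|r| turn the squared-error objective into an absolute-error one.
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(50):
    sw = 1.0 / np.sqrt(np.maximum(np.abs(y - X @ beta), 1e-6))
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

# LAD coefficient of determination: 1 minus the ratio of summed absolute
# residuals of the full model to those of the median-only (null) model.
sad_full = np.sum(np.abs(y - X @ beta))
sad_null = np.sum(np.abs(y - np.median(y)))
r1_lad = 1.0 - sad_full / sad_null
```

Like the classical R², `r1_lad` lies between 0 and 1 when the fitted model improves on the null model, but it measures reduction in absolute rather than squared deviations.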


CAUCHY ◽  
2017 ◽  
Vol 5 (1) ◽  
pp. 36
Author(s):  
Ferra Yanuar

The purpose of this article is to describe the ability of the quantile regression method to overcome violations of the classical assumptions. The classical assumption violated in this study is homogeneity of the error variance, i.e., the data exhibit heteroscedasticity. To achieve this goal, simulated data were generated under a specified distributional design. This study compared the models resulting from applying ordinary least squares and quantile regression to the same simulated data, and the consistency of both methods was also compared through simulation studies. The study showed that the quantile regression method had smaller standard errors, narrower confidence intervals, and smaller mean square error (MSE) values than the ordinary least squares method. Thus it can be concluded that quantile regression is able to handle heteroscedasticity and produces a better model than ordinary least squares, whereas ordinary least squares is not able to solve the problem of heteroscedasticity.
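A toy version of such a simulation design: the error variance grows with the predictor, and the same data are fit by OLS and by median (τ = 0.5) regression via iteratively reweighted least squares. Sample size and variance function are assumptions, not the article's design.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 300
x = rng.uniform(0.0, 10.0, n)
# Heteroscedastic errors: standard deviation grows linearly with x.
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Median regression via iteratively reweighted least squares.
beta_qr = beta_ols.copy()
for _ in range(50):
    sw = 1.0 / np.sqrt(np.maximum(np.abs(y - X @ beta_qr), 1e-6))
    beta_qr = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
```

Repeating this draw many times and collecting `beta_ols` and `beta_qr` would give the sampling distributions from which standard errors, interval widths, and MSE can be compared, as done in the article.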


2013 ◽  
Vol 278-280 ◽  
pp. 1323-1326
Author(s):  
Yan Hua Yu ◽  
Li Xia Song ◽  
Kun Lun Zhang

Fuzzy linear regression has been extensively studied since its inception, symbolized by the work of Tanaka et al. in 1982. As one of the main estimation methods, the fuzzy least squares approach is appealing because it corresponds, to some extent, to well-known statistical regression analysis. In this article, a restricted least squares method is proposed to fit fuzzy linear models with crisp inputs and symmetric fuzzy output. The paper puts forward a fuzzy linear regression model based on the structured element, with precise input data and fuzzy output data; it gives a method for determining the regression coefficients and the fuzzy degree function by least squares, and studies the degree of agreement between the observed and predicted values.
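One common way to set up such a restricted least squares fit (a sketch under assumptions, not necessarily the paper's exact structured-element model): each symmetric fuzzy output is represented as a center plus a spread, centers are fit by ordinary least squares, and spreads by non-negative least squares so predicted spreads cannot go negative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 10.0, 50)                 # crisp inputs (synthetic)
X = np.column_stack([np.ones_like(x), x])
y_center = 3.0 + 1.5 * x + rng.normal(0, 0.3, 50)   # centers of fuzzy outputs
y_spread = 0.5 + 0.2 * x + rng.normal(0, 0.05, 50)  # symmetric spreads

# Unrestricted least squares for the centers.
beta_center = np.linalg.lstsq(X, y_center, rcond=None)[0]
# Restricted least squares for the spreads: coefficients constrained
# non-negative, so every predicted spread stays non-negative.
beta_spread, _ = nnls(np.abs(X), y_spread)
```

The non-negativity restriction is what distinguishes this from two independent OLS fits: it guarantees the fitted fuzzy numbers remain well-formed over the whole input range.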


Geophysics ◽  
2003 ◽  
Vol 68 (4) ◽  
pp. 1126-1131 ◽  
Author(s):  
Melissa Whitten Bryan ◽  
Kenneth W. Holladay ◽  
Clyde J. Bergeron ◽  
Juliette W. Ioup ◽  
George E. Ioup

An airborne electromagnetic survey was performed over the marsh and estuarine waters of the Barataria basin of Louisiana. Two inversion methods were applied to the measured data to calculate layer thicknesses and conductivities: the modified image method (MIM) and a nonlinear least‐squares method of inversion using two two‐layer forward models and one three‐layer forward model, with results generally in good agreement. Uniform horizontal water layers in the near‐shore Gulf of Mexico with the fresher (less saline, less conductive) water above the saltier (more saline, more conductive) water can be seen clearly. More complex near‐surface layering showing decreasing salinity/conductivity with depth can be seen in the marshes and inland areas. The first‐layer water depth is calculated to be 1–2 m, with the second‐layer water depth around 4 m. The first‐layer marsh and beach depths are computed to be 0–3 m, and the second‐layer marsh and beach depths vary from 2 to 9 m. The first‐layer water conductivity is calculated to be 2–3 S/m, with the second‐layer water conductivity around 3–4 S/m and the third‐layer water conductivity 4–5 S/m. The first‐layer marsh conductivity is computed to be mainly 1–2 S/m, and the second‐ and third‐layer marsh conductivities vary from 0.5 to 1.5 S/m, with the conductivities decreasing as depth increases except on the beach, where layer three has a much higher conductivity, ranging up to 3 S/m.


2011 ◽  
Vol 462-463 ◽  
pp. 1164-1169
Author(s):  
Jing Xiang Yang ◽  
Ya Xin Zhang ◽  
Mamtimin Gheni ◽  
Ping Ping Chang ◽  
Kai Yin Chen ◽  
...  

In this paper, strength evaluations and reliability analysis are conducted for different types of PSSS (Periodically Symmetric Struts Supports) based on FEA (Finite Element Analysis). The numerical models are established first, and a PMA (Prestressed Modal Analysis) is conducted. The nodal stress values at all of the Gauss points in the elements are extracted, and the stress distributions are evaluated for each type of PSSS. Then, using the nonlinear least squares method, curve fitting is carried out and the stress probability distribution function is obtained. The results show that, although different numbers of struts are used, the stress distribution function obeys the exponential distribution. By applying the nonlinear least squares method again to the distribution parameters a and b of the different exponential functions, the relationship between the number of struts and the distribution function is obtained, and mathematical models of the stress probability distribution functions for the different supports are established. Finally, a new stress distribution model is introduced by considering the DSSI (Damaged Stress-Strength Interference), and the reliability evaluation for the different types of periodically symmetric struts supports is carried out.
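The curve-fitting step can be sketched as follows, with synthetic stress values standing in for the FEA Gauss-point results: bin the stresses into a density histogram and fit an exponential model f(s) = a·exp(−b·s) by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
stress = rng.exponential(scale=50.0, size=5000)  # synthetic Gauss-point stresses (MPa)

# Empirical density of the stress values.
density, edges = np.histogram(stress, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def expo(s, a, b):
    # Exponential stress probability distribution model f(s) = a * exp(-b * s).
    return a * np.exp(-b * s)

(a_hat, b_hat), _ = curve_fit(expo, centers, density, p0=[0.02, 0.02])
```

For a true exponential distribution with scale 50 MPa, both parameters should come out near 1/50 = 0.02; repeating the fit for each strut count would give the (a, b) pairs whose dependence on the number of struts the paper then models.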


Geophysics ◽  
1972 ◽  
Vol 37 (2) ◽  
pp. 260-272 ◽  
Author(s):  
Leonidas C. Ocola

An iterative inversion method (Reframap) based on the kinematic properties of critically refracted waves is developed. The method is based on ray tracing and assumes homogeneous and isotropic media and ray paths confined to a vertical plane through each source‐detector pair. Unlike the earlier Profile or Time‐Term Methods, no restrictions are imposed on interface topography except that it be continuous almost everywhere (in the mathematical sense). As in the preexisting methods, more observations than unknowns are assumed. The algorithm and procedure on which the Reframap Method is based generate apparent dips for each source‐detector pair at the noncritical interfaces from the slope of a least‐squares line approximation to the interface functional in the neighborhood of each refraction point. In turn, the dip and path along the critical refractor are, at every iteration, pairwise approximated by a line through the critical refracting points. The incidence angles are computed recursively by Snell's law. The solution of the overdetermined, nonlinear multiple‐refractor time‐distance system of simultaneous equations is sought by Marquardt's algorithm for least‐squares estimation of critical refractor velocity and vertical thickness under each element.
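The final estimation step is in the spirit of the sketch below: Levenberg-Marquardt least squares applied to head-wave traveltimes. This uses a single flat-layer model, a drastic simplification of Reframap's dipping, multi-layer geometry, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

v1 = 1500.0                                    # known upper-layer velocity (m/s)
x = np.linspace(100.0, 1000.0, 30)             # source-detector offsets (m)
v2_true, h_true = 3000.0, 20.0                 # refractor velocity (m/s), thickness (m)

# Flat-layer head-wave traveltime: t = x/v2 + 2h*sqrt(1/v1^2 - 1/v2^2), plus noise.
t_obs = x / v2_true + 2 * h_true * np.sqrt(1 / v1**2 - 1 / v2_true**2)
t_obs = t_obs + np.random.default_rng(2).normal(0, 1e-4, x.size)

def residuals(p):
    v2, h = p
    return x / v2 + 2 * h * np.sqrt(1 / v1**2 - 1 / v2**2) - t_obs

# Marquardt-style least squares on the overdetermined traveltime system.
fit = least_squares(residuals, x0=[2800.0, 15.0], method="lm")
v2_hat, h_hat = fit.x
```

In the full method, each iteration would also update the apparent dips and Snell's-law incidence angles before re-solving for velocity and thickness under each element.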

