Application of Robust Methods in Evaluating the Accuracy of Interlaboratory Measurements. Part 1. Bases of Robust Statistics. Huber Method, i.e. Algorithm A

2017 ◽  
Vol 21 (2) ◽  
pp. 47-55
Author(s):  
Zygmunt Warsza ◽  
Evgeniy Volodarsky
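The Huber method of the title, standardized as "Algorithm A" (e.g., in ISO 13528), iteratively winsorizes the data about a robust location and scale estimate. A minimal sketch, assuming the standard consistency constants 1.483 and 1.134 for normally distributed data; the laboratory values below are illustrative, not taken from the paper:

```python
import numpy as np

def algorithm_a(x, tol=1e-6, max_iter=100):
    """Robust mean and standard deviation via Huber-type winsorization
    (a sketch of 'Algorithm A'; 1.483 and 1.134 are consistency
    factors that make the estimates unbiased for normal data)."""
    x = np.asarray(x, dtype=float)
    x_star = np.median(x)
    s_star = 1.483 * np.median(np.abs(x - x_star))
    for _ in range(max_iter):
        delta = 1.5 * s_star
        # Winsorize: pull values beyond x* +/- delta back to the limits
        w = np.clip(x, x_star - delta, x_star + delta)
        new_x = w.mean()
        new_s = 1.134 * np.sqrt(((w - new_x) ** 2).sum() / (len(w) - 1))
        if abs(new_x - x_star) < tol and abs(new_s - s_star) < tol:
            x_star, s_star = new_x, new_s
            break
        x_star, s_star = new_x, new_s
    return x_star, s_star

labs = [10.1, 9.9, 10.2, 10.0, 9.8, 14.5]   # one outlying laboratory
m, s = algorithm_a(labs)
```

Unlike the ordinary mean (which the single outlier would drag to about 10.75), the winsorized estimate stays near the bulk of the laboratory results.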
2013 ◽  
Vol 19 (4) ◽  
pp. 548-557 ◽  
Author(s):  
Serif Hekimoglu ◽  
Bahattin Erdogan

In geodetic measurements, outliers may occur in data sets for various reasons. There are two main approaches to detecting them: statistical tests for outliers (Baarda's and Pope's tests) and robust methods (the Danish method, the Huber method, etc.). These methods rely on least squares estimation (LSE). Outliers affect the LSE results; in particular, their effects are smeared onto the good observations, and wrong results may sometimes be obtained. To avoid these effects, a method that does not use LSE should be preferred. The median is an estimator with a high breakdown point, and if it is applied to outlier detection, reliable results can be obtained. In this study, a robust method is proposed that uses the median as a threshold value on the median residuals obtained from median equations. If the a priori variance of the observations is known, the reliability of the new approach is greater than in the case where the a priori variance is unknown.
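The median-based screening idea can be illustrated generically: standardize residuals by the median absolute deviation (MAD) and flag those exceeding a threshold. This is a generic sketch, not the authors' exact median-equation procedure; the residual values and the factor k = 3 are illustrative:

```python
import numpy as np

def flag_outliers_median(residuals, k=3.0):
    """Flag observations whose residual exceeds k robust standard
    deviations, with the scale taken from the median absolute
    deviation (MAD) rather than the outlier-sensitive sample
    standard deviation."""
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    sigma = 1.4826 * mad          # MAD -> sigma for normal data
    return np.abs(r - med) > k * sigma

res = [0.02, -0.01, 0.03, -0.02, 0.85, 0.01]
flags = flag_outliers_median(res)
```

Because both the center (median) and the scale (MAD) have a 50% breakdown point, a single gross error cannot mask itself by inflating the threshold, which is exactly the failure mode of LSE-based tests described above.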


2017 ◽  
Author(s):  
Donald Ray Williams ◽  
Stephen Ross Martin

Developing robust statistical methods is an important goal for psychological science. Whereas classical methods (i.e., sampling distributions, p-values, etc.) have been thoroughly characterized, Bayesian robust methods remain relatively uncommon in practice and in the methodological literature. Here we propose a robust Bayesian model (BHSt) that accommodates heterogeneous (H) variances, by predicting the scale parameter on the log scale, and tail heaviness, with a Student-t (St) likelihood. Through simulations with normative and contaminated (i.e., heavy-tailed) data, we demonstrate that BHSt has consistent frequentist properties in terms of type I error, power, and mean squared error compared to three classical robust methods. With a motivating example, we illustrate Bayesian inferential methods such as approximate leave-one-out cross-validation and posterior predictive checks. We end by suggesting areas of improvement for BHSt and discussing Bayesian robust methods in practice.
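The core robustness mechanism, replacing the Gaussian likelihood with a heavier-tailed Student-t, can be illustrated outside the Bayesian framework with a simple maximum-likelihood fit. A sketch using SciPy on simulated contaminated data; the seed, sample size, and outlier values are arbitrary choices for illustration, not the paper's simulation design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Normative data plus a few heavy-tailed contaminants
clean = rng.normal(loc=0.0, scale=1.0, size=200)
outliers = np.array([15.0, -12.0, 18.0, 20.0])
data = np.concatenate([clean, outliers])

# Gaussian MLE of location is the sample mean (outlier-sensitive)
mean_est = data.mean()

# Student-t MLE: the estimated low degrees of freedom give the
# likelihood heavy tails, so contaminants are downweighted
df, loc, scale = stats.t.fit(data)
```

The fitted degrees of freedom come out small, signalling heavy tails, and the location estimate `loc` sits close to the center of the uncontaminated bulk, which is the behavior the Student-t likelihood buys in the Bayesian model as well.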


1976 ◽  
Vol 1 (4) ◽  
pp. 285-312 ◽  
Author(s):  
Howard Wainer

It is noted that the usual estimators that are optimal under a Gaussian assumption are very vulnerable to the effects of outliers. A survey of robust alternatives to the mean, standard deviation, product moment correlation, t-test, and analysis of variance is offered. Robust methods of factor analysis, principal components analysis and multivariate analysis of variance are also surveyed, as are schemes for outlier detection.


2006 ◽  
Vol 1 (1) ◽  
pp. 27-41 ◽  
Author(s):  
Anna Chernobai ◽  
Svetlozar Rachev

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1250
Author(s):  
Daniel Medina ◽  
Haoqing Li ◽  
Jordi Vilà-Valls ◽  
Pau Closas

Global navigation satellite systems (GNSSs) play a key role in intelligent transportation systems such as autonomous driving or unmanned systems navigation. In such applications, it is fundamental to ensure a reliable precise positioning solution able to operate in harsh propagation conditions such as urban environments and under multipath and other disturbances. Exploiting carrier phase observations allows for precise positioning solutions at the complexity cost of resolving integer phase ambiguities, a procedure that is particularly affected by non-nominal conditions. This limits the applicability of conventional filtering techniques in challenging scenarios, and new robust solutions must be considered. This contribution deals with real-time kinematic (RTK) positioning and the design of robust filtering solutions for the associated mixed integer- and real-valued estimation problem. Families of Kalman filter (KF) approaches based on robust statistics and variational inference are explored, such as the generalized M-based KF or the variational-based KF, aiming to mitigate the impact of outliers or non-nominal measurement behaviors. The performance assessment under harsh propagation conditions is carried out using a simulated scenario and real data from a measurement campaign. The proposed robust filtering solutions are shown to offer excellent resilience against outlying observations, with the variational-based KF showcasing the overall best performance in terms of Gaussian efficiency and robustness.
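The generalized M-based idea can be sketched in its simplest form: standardize the innovation and, when it exceeds a Huber threshold, inflate the effective measurement noise before computing the gain. A scalar-state illustration; the threshold c = 1.345 and all numerical values are illustrative assumptions, and this is not the exact set of filters benchmarked in the paper:

```python
import numpy as np

def huber_weight(r, c=1.345):
    """Huber weight for a standardized residual r: 1 inside the
    threshold, decaying as c/|r| outside it."""
    a = abs(r)
    return 1.0 if a <= c else c / a

def robust_kf_update(x, P, z, H, R, c=1.345):
    """One measurement update of an M-type robust Kalman filter
    for a scalar state: an outlying innovation gets its measurement
    noise inflated, shrinking the gain toward zero."""
    y = z - H * x                      # innovation
    S = H * P * H + R                  # innovation variance
    w = huber_weight(y / np.sqrt(S), c)
    R_eff = R / w                      # downweight outlying measurement
    S_eff = H * P * H + R_eff
    K = P * H / S_eff                  # robust Kalman gain
    x_new = x + K * y
    P_new = (1.0 - K * H) * P
    return x_new, P_new

# Nominal measurement vs. a gross outlier from the same prior:
x1, _ = robust_kf_update(x=0.0, P=1.0, z=0.2, H=1.0, R=0.5)
x2, _ = robust_kf_update(x=0.0, P=1.0, z=10.0, H=1.0, R=0.5)
```

For the nominal measurement the update matches the standard KF, while the outlier pulls the state far less than the standard gain would (about 2.5 here instead of roughly 6.7), which is the qualitative behavior sought in RTK under multipath.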


Author(s):  
Jonah T Hansen ◽  
Luca Casagrande ◽  
Michael J Ireland ◽  
Jane Lin

Abstract Statistical studies of exoplanets and the properties of their host stars have been critical to informing models of planet formation. Numerous trends have arisen in particular from the rich Kepler dataset, including that exoplanets are more likely to be found around stars with high metallicity and that there is a "gap" in the distribution of planetary radii at 1.9 R⊕. Here we present a new analysis of the Kepler field, using the APOGEE spectroscopic survey to build a metallicity calibration based on Gaia, 2MASS and Strömgren photometry. This calibration, along with masses and radii derived from a Bayesian isochrone fitting algorithm, is used to test a number of these trends with unbiased, photometrically derived parameters, albeit with a smaller sample size in comparison to recent studies. We recover that planets are more frequently found around higher metallicity stars; over the entire sample, planetary frequencies are 0.88 ± 0.12 percent for [Fe/H] < 0 and 1.37 ± 0.16 percent for [Fe/H] ≥ 0, but at two sigma we find that the size of exoplanets influences the strength of this trend. We also recover the planet radius gap, along with a slight positive correlation with stellar mass. We conclude that this method shows promise for deriving robust statistics of exoplanets. We also remark that spectrophotometry from Gaia DR3 will have an effective resolution similar to narrow-band filters and will allow us to overcome the small sample size inherent in this study.


METRON ◽  
2021 ◽  
Author(s):  
Marco Riani ◽  
Mia Hubert

Abstract Starting with the 2020 volume, the journal Metron has decided to celebrate the centenary of its foundation with three special issues. This volume is dedicated to robust statistics. A striking feature of most applied statistical analyses is the use of methods that are well known to be sensitive to outliers or to other departures from the postulated model. Robust statistical methods provide useful tools for reducing this sensitivity, through the detection of outliers by first fitting the majority of the data and then flagging deviant data points. The six papers in this issue span a wide range of topics within robustness. This editorial first provides some facts about the history and current state of robust statistics and then summarizes the contents of each paper.


2002 ◽  
Vol 8 (2-3) ◽  
pp. 93-96
Author(s):  
AFZAL BALLIM ◽  
VINCENZO PALLOTTA

The automated analysis of natural language data has become a central issue in the design of intelligent information systems. Processing unconstrained natural language data is still considered an AI-hard task. However, various analysis techniques have been proposed to address specific aspects of natural language. In particular, recent interest has focused on providing approximate analysis techniques, on the assumption that when perfect analysis is not possible, partial results may still be very useful.

