Some competing nonresponse adjustment estimators

2020 ◽  
Vol 23 (2) ◽  
pp. 291-306
Author(s):  
Imbi Traat

The nonresponse adjustment estimator derived in this paper by standard regression tools has a surprising form. The weights of the new estimator, called the f-estimator, are (general) inverses of the respective weights in the classical linear calibration estimator and the propensity-adjusted estimator. In a simulation experiment on real data, the new estimator performs best for several study variables.
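As a point of reference for the weights being inverted, the following Python sketch computes classical linear calibration (GREG) weights from design weights and known auxiliary totals. It is a generic textbook construction, not the paper's f-estimator, and all variable names and numbers are illustrative assumptions.

```python
import numpy as np

def linear_calibration_weights(d, X, t_x):
    """Classical linear calibration (GREG) weights for respondents.

    d   : (n,) design weights of the respondents
    X   : (n, p) auxiliary variables observed on the respondents
    t_x : (p,) known population totals of the auxiliaries
    """
    ht_totals = X.T @ d                                   # Horvitz-Thompson totals of the auxiliaries
    lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - ht_totals)
    return d * (1.0 + X @ lam)                            # calibrated weights reproduce t_x exactly

# toy usage: estimate a study-variable total with the calibrated weights
rng = np.random.default_rng(0)
n, p = 200, 2
X = rng.normal(size=(n, p))
d = np.full(n, 5.0)                                       # illustrative design weights
t_x = 5.1 * X.sum(axis=0)                                 # assumed known auxiliary totals
y = X @ np.array([2.0, -1.0]) + rng.normal(size=n)        # study variable
w = linear_calibration_weights(d, X, t_x)
print("calibration estimate of the total of y:", float(w @ y))
```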

PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254152
Author(s):  
Alejandro Rodríguez-Collado ◽  
Cristina Rueda

The Hodgkin-Huxley model, decades after its first presentation, is still a reference model in neuroscience, as it has successfully reproduced the electrophysiological activity of many organisms. The primary signal in the model represents the membrane potential of a neuron. A simple representation of this signal is presented in this paper. The new proposal is an adapted Frequency Modulated Möbius (FMM) multicomponent model, defined as a signal-plus-error model in which the signal is decomposed as a sum of waves. The main strengths of the method are the simple parametric formulation, the interpretability and flexibility of the parameters that describe and discriminate the waveforms, the estimators' identifiability and accuracy, and the robustness against noise. The approach is validated with a broad simulation experiment of Hodgkin-Huxley signals and with real data from squid giant axons. Interesting differences between simulated and real data emerge from the comparison of the parameter configurations. Furthermore, the potential of the FMM parameters to predict Hodgkin-Huxley model parameters is shown using different machine learning methods. Finally, promising contributions of the approach to spike sorting and cell-type classification are detailed.
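The FMM wave form used by the same authors in earlier work is W(t) = A cos(β + 2 arctan(ω tan((t − α)/2))); the Python sketch below builds a multicomponent signal from such waves under that assumption, with purely illustrative parameter values rather than fitted ones.

```python
import numpy as np

def fmm_wave(t, A, alpha, beta, omega):
    """One Frequency Modulated Moebius (FMM) wave: A*cos(beta + 2*arctan(omega*tan((t - alpha)/2))).

    t     : time points rescaled to [0, 2*pi)
    A     : amplitude (> 0)
    alpha : location parameter (peak position)
    beta  : skewness / shape parameter
    omega : sharpness parameter in (0, 1]
    """
    return A * np.cos(beta + 2.0 * np.arctan(omega * np.tan((t - alpha) / 2.0)))

def fmm_signal(t, M, components):
    """Multicomponent FMM model: an intercept M plus a sum of FMM waves."""
    return M + sum(fmm_wave(t, *c) for c in components)

# toy action-potential-like trace built from two waves (parameter values are illustrative)
t = np.linspace(0.0, 2.0 * np.pi, 500, endpoint=False)
v = fmm_signal(t, M=-65.0, components=[(90.0, 2.0, 3.0, 0.05), (30.0, 2.6, -2.5, 0.15)])
```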


METRON ◽  
2021 ◽  
Author(s):  
Massimiliano Giacalone

A well-known result in statistics is that a linear combination of two point forecasts has a smaller Mean Square Error (MSE) than either of the two competing forecasts themselves (Bates and Granger in J Oper Res Soc 20(4):451–468, 1969). The only case in which no improvement is possible is when one of the single forecasts is already optimal in terms of MSE. Combination methods are varied, ranging from the simple average (SA) to more robust methods based on the median, the Trimmed Average (TA), Least Absolute Deviations, or optimization techniques (Stock and Watson in J Forecast 23(6):405–430, 2004). Standard regression-based combination approaches may fail to produce realistic results when the forecasts are highly collinear or the data distribution is not Gaussian. Therefore, we propose a forecast combination method based on Lp-norm estimators. These estimators are based on the Generalized Error Distribution, a generalization of the Gaussian distribution, and they can handle both multicollinearity and non-Gaussianity. To demonstrate the potential of Lp-norms, we conducted a simulation study and an empirical study, comparing their performance with other standard regression-based combination approaches. The simulation study was carried out with different values of the autoregressive parameter, alternating heteroskedasticity and homoskedasticity. The real-data application is based on the daily Bitfinex historical series of bitcoin (2014–2020) and the 25 historical series of companies included in the Dow Jones. We show that, by combining different GARCH and ARIMA models, under both Gaussian and non-Gaussian distributions, the Lp-norm scheme improves forecasting accuracy with respect to other regression-based combination procedures.
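A minimal sketch of the general idea, assuming the combination weights are chosen by minimizing the Lp norm of the combination residuals; the paper's estimator is derived from the Generalized Error Distribution, and the function names and data below are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def lp_combination_weights(F, y, p=1.5):
    """Combination weights chosen by minimizing the Lp norm of the combination residuals.

    F : (T, k) matrix of k competing point forecasts over T periods
    y : (T,) realized values
    p : exponent of the Lp loss (p = 2 recovers ordinary least squares)
    """
    loss = lambda w: np.sum(np.abs(y - F @ w) ** p)
    w0 = np.full(F.shape[1], 1.0 / F.shape[1])        # start from the simple average
    return minimize(loss, w0, method="Nelder-Mead").x

# toy usage: two collinear forecasts of the same series
rng = np.random.default_rng(1)
y = rng.normal(size=300).cumsum()
F = np.column_stack([y + rng.normal(scale=1.0, size=300),
                     y + rng.normal(scale=2.0, size=300)])
print("Lp combination weights:", lp_combination_weights(F, y, p=1.5))
```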


Author(s):  
Stanislav Anatolyev ◽  
Alena Skolkova

In recent decades, econometric tools have been developed for handling instrumental-variable regressions characterized by many instruments. We introduce a command, mivreg, that implements consistent estimation and testing in linear instrumental-variables regressions with many (possibly weak) instruments. mivreg covers both homoskedastic and heteroskedastic environments; estimators that are both nonrobust and robust to error nonnormality and to the projection matrix limit; and parameter tests and specification tests both with and without correction for the existence of moments. We also run a small simulation experiment using mivreg and illustrate how mivreg works with real data.
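For orientation only, the sketch below shows the textbook two-stage least squares estimator in a many-instruments design, written in Python rather than Stata; it is not the mivreg command and omits the bias corrections and robust tests the command provides. All names and numbers are illustrative.

```python
import numpy as np

def tsls(y, X, Z):
    """Textbook two-stage least squares: beta = (X'P_Z X)^{-1} X'P_Z y."""
    PzX = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]     # first stage: project X onto the instruments
    return np.linalg.solve(PzX.T @ X, PzX.T @ y)

# many-instruments design: one endogenous regressor, 50 individually weak instruments
rng = np.random.default_rng(2)
n, L = 500, 50
Z = rng.normal(size=(n, L))
u = rng.normal(size=n)                                 # structural error, correlated with x
x = Z @ np.full(L, 0.1) + u + rng.normal(size=n)       # endogenous regressor
y = 1.0 * x + u                                        # true coefficient is 1
print("2SLS estimate:", tsls(y, x[:, None], Z))        # biased toward OLS when instruments are many/weak
```

The bias visible in this plain 2SLS setup is exactly the situation that many-instrument-robust estimators are designed to address.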


2019 ◽  
Vol 11 (7) ◽  
pp. 812 ◽  
Author(s):  
Sen Du ◽  
Yunjia Wang ◽  
Meinan Zheng ◽  
Dawei Zhou ◽  
Yuanping Xia

Mining goafs can cause many hazards, such as water inrush, spontaneous combustion of coal seams, and surface collapse. In this paper, a feature-points-based method for the efficient location of mining goafs is proposed. Differential interferometric synthetic aperture radar (DInSAR) is used to monitor the subsidence basin caused by mining. Using the principles of the probability integral method (PIM), the inflection points and the boundary points of the basin monitored by DInSAR are determined and used as feature points to locate the goaf. In this paper, the necessity of locating goafs and the traditional methods used for this task are discussed first. Then, the results of verifying the proposed method by both a simulation experiment and a real-data experiment are presented. Six RADARSAT-2 images from 13 October 2015 to 5 March 2016 were used to acquire the subsidence basin caused by the 15235 working face of the Jiulong mining area. The average relative errors of the simulation experiment and the real-data experiment were about 6.43% and 12.59%, respectively. The average absolute errors of the simulation experiment and the real-data experiment were about 28 m and 38 m, respectively. In the final part of this paper, the error sources are discussed to illustrate the factors that can affect the location results.
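A rough illustration of the feature-point idea: under the PIM, the inflection point of a subsidence profile lies approximately where subsidence equals half its maximum, and the basin boundary is commonly taken where subsidence falls below a small threshold (10 mm is assumed here). The Python sketch below applies these rules to a toy one-dimensional profile; the profile shape, threshold, and all numbers are assumptions, not values from the paper.

```python
import numpy as np
from scipy.special import erf

def locate_feature_points(x, w, boundary_threshold=0.01):
    """Locate inflection and boundary points of a 1-D subsidence profile.

    x : (n,) horizontal positions along the profile (m)
    w : (n,) subsidence (m), positive downward
    boundary_threshold : subsidence level (m) taken to mark the basin edge (10 mm assumed)
    """
    # PIM rule of thumb: at the inflection point the subsidence is half the maximum
    inflection = x[np.argmin(np.abs(w - 0.5 * w.max()))]   # one of the two symmetric inflection points
    inside = x[w >= boundary_threshold]
    return inflection, (inside.min(), inside.max())

# toy symmetric profile over a panel of half-width 100 m (shape and numbers are illustrative)
w0, half_width, r = 1.5, 100.0, 150.0
x = np.linspace(-400.0, 400.0, 801)
w = 0.5 * w0 * (1.0 - erf(np.sqrt(np.pi) * (np.abs(x) - half_width) / r))
print(locate_feature_points(x, w))
```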


2015 ◽  
Vol 2015 ◽  
pp. 1-9
Author(s):  
Chao Zhang ◽  
Shaogao Lv

Kernel selection is a central issue in kernel methods of machine learning. In this paper, we investigate regularized learning schemes based on kernel design methods. Our ideal kernel is derived from a simple iterative procedure using large-scale unlabeled data in a semisupervised framework. Compared with most existing approaches, our algorithm avoids multiple optimizations in the process of learning kernels, and its computation is as efficient as standard single-kernel algorithms. Moreover, large amounts of information associated with the input space can be exploited, and thus generalization ability is improved accordingly. We provide some theoretical support for the least-squares case in our setting; these advantages are also shown by a simulation experiment and a real-data analysis.
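As a baseline for comparison, the following Python sketch implements standard single-kernel regularized least squares (kernel ridge regression) with a Gaussian kernel; it is the kind of single-kernel algorithm whose computational cost the proposed method is said to match, not the paper's kernel-design procedure itself, and all names and values are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit_predict(X, y, X_new, lam=1e-2, gamma=1.0):
    """Single-kernel regularized least squares (kernel ridge regression)."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return rbf_kernel(X_new, X, gamma) @ alpha

# toy usage on a noisy sine curve
rng = np.random.default_rng(3)
X = rng.uniform(-3.0, 3.0, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
X_new = np.linspace(-3.0, 3.0, 50)[:, None]
y_hat = kernel_ridge_fit_predict(X, y, X_new)
```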


2015 ◽  
Vol 32 (06) ◽  
pp. 1550041
Author(s):  
Xiang Chu ◽  
Qiu-Yan Zhong ◽  
Shahid G. Khokhar

This paper discusses casualty and medical-group scheduling for disaster response in order to reduce mortality. First, triage levels and their stochastic transitions, modeled as a Markov chain, are introduced to establish a function from time to treatment to probability of death. Then, a job-shop scheduling model and a genetic algorithm are designed. Finally, we performed a simulation experiment, partially using real data from the 5.12 Wenchuan earthquake for reference. The results show that the algorithm reduces the number of mortalities stably and efficiently while accounting for disturbance events.
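A minimal sketch of the triage idea: with a hypothetical Markov transition matrix over triage states (the numbers below are invented for illustration, not estimated from Wenchuan data), the probability of death can be computed as a function of how long a casualty waits untreated.

```python
import numpy as np

# hypothetical triage states: 0 = immediate, 1 = delayed, 2 = minor, 3 = dead (absorbing)
# one-step transition matrix per unit of untreated waiting time (numbers are invented)
P = np.array([
    [0.80, 0.00, 0.00, 0.20],
    [0.15, 0.80, 0.00, 0.05],
    [0.00, 0.10, 0.89, 0.01],
    [0.00, 0.00, 0.00, 1.00],
])

def death_probability(initial_state, waited_steps):
    """Probability that an untreated casualty has died after waiting the given number of steps."""
    dist = np.zeros(P.shape[0])
    dist[initial_state] = 1.0
    for _ in range(waited_steps):
        dist = dist @ P
    return dist[-1]

# death probability of a 'delayed' casualty as a function of waiting time
print([round(death_probability(1, t), 3) for t in range(6)])
```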


2019 ◽  
Vol 35 (1) ◽  
pp. 126-136 ◽  
Author(s):  
Tour Liu ◽  
Tian Lan ◽  
Tao Xin

Abstract. Random response is a very common aberrant response behavior in personality tests and may negatively affect the reliability, validity, or other analytical aspects of psychological assessment. Typically, researchers use a single person-fit index to identify random responses. This study recommends a three-step person-fit analysis procedure. Unlike typical single-index person-fit methods, the three-step procedure identifies both globally misfitting and locally misfitting individuals using different person-fit indices. This procedure was able to identify more locally misfitting individuals than the single-index method, and a graphical method was used to visualize the particular items on which random response behaviors appear. This method may be useful to researchers in that it provides more information about response behaviors, allowing better evaluation of scale administration and the development of more plausible explanations. Real data were used in this study instead of simulation data. In order to obtain real random responses, an experimental test administration was designed. Four different random-response samples were produced using this experimental system.
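For context, one widely used global person-fit index is the standardized log-likelihood statistic l_z; the Python sketch below computes it for a single response vector given model-implied response probabilities. It is a generic illustration, not the specific three-step procedure or indices recommended in the study, and the example data are made up.

```python
import numpy as np

def lz_person_fit(responses, prob):
    """Standardized log-likelihood person-fit index l_z for one person.

    responses : (J,) observed 0/1 item responses
    prob      : (J,) model-implied probabilities of a 1-response for this person
    """
    x, p = np.asarray(responses), np.asarray(prob)
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))          # observed log-likelihood
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))           # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)            # its variance
    return (l0 - e) / np.sqrt(v)                                  # large negative values signal misfit

# toy check: a model-consistent pattern versus a random-looking one under the same probabilities
p = np.linspace(0.2, 0.9, 20)
print(lz_person_fit((p > 0.5).astype(int), p))                            # roughly consistent
print(lz_person_fit(np.random.default_rng(4).integers(0, 2, 20), p))      # random responding
```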


TAPPI Journal ◽  
2019 ◽  
Vol 18 (10) ◽  
pp. 607-618
Author(s):  
Jéssica Moreira ◽  
Bruno Lacerda de Oliveira Campos ◽  
Esly Ferreira da Costa Junior ◽  
Andréa Oliveira Souza da Costa

The multiple-effect evaporator (MEE) is an energy-intensive step in the kraft pulping process. Exergetic analysis can be useful for locating irreversibilities in the process and pointing out which equipment is least efficient, and it can also be the object of optimization studies. In the present work, each evaporator of a real kraft system is individually described using mass balances and thermodynamic principles (the first and second laws). Real data from a kraft MEE were collected from a Brazilian plant and were used for the estimation of heat transfer coefficients in a nonlinear optimization problem, as well as for the validation of the model. An exergetic analysis was made for each effect individually, which showed that effects 1A and 1B are the least efficient and therefore have the greatest potential for improvement. A sensitivity analysis was also performed, showing that the steam temperature and the liquor input flow rate are sensitive parameters.
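A minimal sketch of the exergy bookkeeping involved, assuming the specific enthalpies and entropies of each stream are already known: the specific flow exergy is e = (h − h0) − T0(s − s0), and the exergy destroyed in an effect is the difference between inlet and outlet exergy flows. All numeric values below are illustrative, not plant data.

```python
T0 = 298.15  # K, assumed dead-state (reference) temperature

def flow_exergy(h, s, h0, s0):
    """Specific flow exergy (kJ/kg) relative to the dead state: e = (h - h0) - T0*(s - s0)."""
    return (h - h0) - T0 * (s - s0)

def exergy_destruction(streams_in, streams_out):
    """Exergy destroyed in one effect = inlet exergy flows - outlet exergy flows (kW).

    Each stream is a tuple (mass_flow in kg/s, specific_exergy in kJ/kg).
    """
    return sum(m * e for m, e in streams_in) - sum(m * e for m, e in streams_out)

# illustrative streams for a single effect (enthalpies and entropies are made-up round numbers)
h0, s0 = 104.9, 0.367                                   # liquid water at the dead state
steam_in   = (10.0, flow_exergy(2750.0, 6.8, h0, s0))   # live steam
liquor_in  = (40.0, flow_exergy(350.0, 1.1, h0, s0))    # weak black liquor
vapor_out  = (8.0,  flow_exergy(2650.0, 7.2, h0, s0))   # vapor to the next effect
liquor_out = (42.0, flow_exergy(420.0, 1.3, h0, s0))    # concentrated liquor
print("exergy destruction (kW):", exergy_destruction([steam_in, liquor_in], [vapor_out, liquor_out]))
```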

