Testing the hypothesis of preservation of the properties of a normal linear model if the possible change point is known

1995 ◽  
Vol 75 (2) ◽  
pp. 1563-1567
Author(s):  
P. N. Sapozhnikov
Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 1036
Author(s):  
Yoshihiro Hirose

We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood using a power function. Regularization methods are popular for the estimation in the normal linear model. However, heavy-tailed errors are also important in statistics and machine learning. We assume q-normal distributions as the errors in linear models. A q-normal distribution is heavy-tailed, which is defined using a power function, not the exponential function. We find that the proposed methods for linear models with q-normal errors coincide with the ordinary regularization methods that are applied to the normal linear model. The proposed methods can be computed using existing packages because they are penalized least squares methods. We examine the proposed methods using numerical experiments, showing that the methods perform well, even when the error is heavy-tailed. The numerical experiments also illustrate that our methods work well in model selection and generalization, especially when the error is slightly heavy-tailed.
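Since the abstract states that the proposed methods coincide with ordinary penalized least squares and "can be computed using existing packages," a minimal sketch of that idea is possible with off-the-shelf tools. This is an illustration, not the authors' code: Student-t errors stand in for heavy-tailed (q-normal-like) noise, and scikit-learn's `Lasso` plays the role of the existing penalized least squares solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sketch: fit an ordinary L1-penalized least squares model to data
# whose errors are heavy-tailed, mimicking the setting of the abstract.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # sparse true coefficients
# Student-t errors (df=3) as a heavy-tailed stand-in for q-normal noise
errors = rng.standard_t(df=3, size=n)
y = X @ beta + errors

# The "existing package" route: a standard penalized least squares fit
model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))
```

Even with heavy-tailed errors, the nonzero coefficients are recovered and the irrelevant ones are shrunk toward zero, consistent with the abstract's claim that the methods work for model selection in this regime.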


1980 ◽  
Vol 29 (3-4) ◽  
pp. 169-172
Author(s):  
Bikas Kumar Sinha ◽  
Banshi Badan Mukhopadhyay

For the usual normal linear model with an intraclass covariance structure, Ghosh and Sinha (1978) gave a complete characterization of the design matrix for the robustness of the likelihood ratio test for linear hypotheses. We indicate here an alternative proof of the result, which gives better insight into the problem.


1987 ◽  
Vol 34 (1-2) ◽  
pp. 33-61
Author(s):  
Andrew Chesher ◽  
Margaret Irish
