Curves Classification by Using a Local Likelihood Function and Its Practical Usefulness for Real Data

Author(s):  
Mustapha Rachdi ◽  
Ali Laksaci ◽  
Ali Hamié ◽  
Jacques Demongeot ◽  
Idir Ouassou

We extend the classical local likelihood approach to supervised classification to the case of functional covariates. The estimation procedure for the functional (slope) parameter in the linear model with a functional covariate is investigated. We show, on simulated as well as real data and using classification error rates estimated on test samples, that local likelihood estimation leads to better estimators than classical kernel estimation. In addition, this approach no longer assumes that the linear predictors have a specific parametric form. However, it also has two drawbacks: it is more computationally expensive and slower than kernel regression, and kernels other than the Gaussian kernel can lead to divergence of the Newton-Raphson algorithm. With a Gaussian kernel, by contrast, 4 to 6 iterations are sufficient to achieve convergence.
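As a rough illustration of the local likelihood idea (not the authors' exact functional estimator), the sketch below fits a locally constant logistic model around a new curve: training curves are weighted by a Gaussian kernel of their L2 distance to the query curve, and the weighted Bernoulli log-likelihood is maximized by Newton-Raphson. The bandwidth `h` and the L2 distance between discretized curves are illustrative choices.

```python
import numpy as np

def local_logit_classify(X, y, x0, h=0.5, n_iter=10):
    """Locally constant logistic fit around the curve x0 via Newton-Raphson.

    X : (n, p) array of discretized curves, y : binary labels in {0, 1},
    x0 : (p,) new curve. Returns the local estimate of P(Y = 1 | x0).
    """
    d = np.linalg.norm(X - x0, axis=1)        # L2 distance between curves
    w = np.exp(-0.5 * (d / h) ** 2)           # Gaussian kernel weights
    eta = 0.0                                 # local log-odds parameter
    for _ in range(n_iter):                   # Newton-Raphson updates
        p = 1.0 / (1.0 + np.exp(-eta))
        grad = np.sum(w * (y - p))            # weighted score
        hess = -np.sum(w) * p * (1.0 - p) - 1e-12  # damped weighted Hessian
        eta -= grad / hess
    return 1.0 / (1.0 + np.exp(-eta))
```

With the Gaussian kernel the updates typically stabilize within a handful of iterations, consistent with the 4 to 6 iterations reported above; sharper kernels can make the Hessian ill-conditioned and the iteration diverge.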

2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Fan Yang ◽  
Hu Ren ◽  
Zhili Hu

Maximum likelihood estimation is a widely used approach to parameter estimation. However, conventional algorithms make the estimation procedure for the three-parameter Weibull distribution difficult. Therefore, this paper proposes an evolutionary strategy to explore good solutions based on the maximum likelihood method. The maximization of the likelihood function is converted into an optimization problem, and an evolutionary algorithm is employed to obtain the optimal parameters of the likelihood function. Examples are presented to demonstrate the proposed method. The results show that it is suitable for parameter estimation of the three-parameter Weibull distribution.
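The conversion of likelihood maximization into a bounded optimization problem can be sketched with SciPy's `differential_evolution`, an evolutionary optimizer used here as a stand-in for the paper's specific evolutionary strategy; the bounds are illustrative choices, with the location parameter constrained below the sample minimum.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fit_weibull3(x, seed=0):
    """MLE of the three-parameter Weibull (shape k, scale lam, location g)
    obtained by differential evolution, an evolutionary optimizer."""
    x = np.asarray(x, dtype=float)

    def neg_loglik(theta):
        k, lam, g = theta
        z = x - g
        if k <= 0 or lam <= 0 or np.any(z <= 0):
            return 1e10                          # infeasible: need g < min(x)
        return -np.sum(np.log(k / lam) + (k - 1) * np.log(z / lam)
                       - (z / lam) ** k)

    bounds = [(0.05, 20.0),                              # shape
              (1e-3, 10.0 * x.std() + 1e-3),             # scale
              (x.min() - 5.0 * x.std(), x.min() - 1e-6)]  # location < min(x)
    res = differential_evolution(neg_loglik, bounds, seed=seed, tol=1e-8)
    return res.x
```

Because the population explores the whole bounded box, this avoids the sensitivity to starting values that makes Newton-type iterations difficult for the three-parameter case.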


Risks ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 110
Author(s):  
Qiyue He ◽  
Anatoliy Swishchuk

In this paper, we address the problem of mid-price movements arising in high-frequency and algorithmic trading using real data. Namely, we introduce new types of General Compound Hawkes Processes (GCHPDO, GCHP2SDO, GCHPnSDO) and find their diffusive limits to model the mid-price movements of six stocks: EBAY, FB, MU, PCAR, SMH, and CSCO. We also define error rates to assess the models' fitting accuracy. Maximum Likelihood Estimation (MLE) and Particle Swarm Optimization (PSO) are used to calibrate the Hawkes processes and the models' parameters.
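For a plain univariate Hawkes process with exponential kernel (a generic illustration, not the compound GCHP models themselves), the exact log-likelihood has a well-known recursive form, and MLE calibration reduces to maximizing it; the starting point `x0` is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_loglik(params, t, T):
    """Exact log-likelihood of a univariate Hawkes process on [0, T] with
    exponential kernel: lambda(s) = mu + sum_{t_j < s} a * exp(-b (s - t_j))."""
    mu, a, b = params
    if mu <= 0 or a < 0 or b <= 0 or a >= b:  # a < b keeps the process stable
        return -np.inf
    ll, A, prev = 0.0, 0.0, None
    for ti in t:
        if prev is not None:                  # recursion: A_i = e^{-b dt} (A_{i-1} + 1)
            A = np.exp(-b * (ti - prev)) * (A + 1.0)
        ll += np.log(mu + a * A)
        prev = ti
    comp = mu * T + (a / b) * np.sum(1.0 - np.exp(-b * (T - t)))  # compensator
    return ll - comp

def fit_hawkes(t, T, x0=(0.5, 0.5, 1.0)):
    """Calibrate (mu, a, b) by maximizing the log-likelihood."""
    res = minimize(lambda p: -hawkes_loglik(p, t, T), x0, method="Nelder-Mead")
    return res.x
```

The recursion makes each likelihood evaluation linear in the number of events, which is what makes MLE (or PSO over the same objective) practical on high-frequency data.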


2018 ◽  
Vol 10 (04) ◽  
pp. 1850009 ◽  
Author(s):  
Gamze Ozel ◽  
Emrah Altun ◽  
Morad Alizadeh ◽  
Mahdieh Mozafari

In this paper, a new heavy-tailed distribution is used to model data with a strong right tail, as often occurs in practical situations. The proposed distribution is derived from the log-normal distribution by using the odd log-logistic family. Statistical properties of this distribution, including the hazard function, moments, quantile function, and asymptotics, are derived. The unknown parameters are estimated by the maximum likelihood estimation procedure. For different parameter settings and sample sizes, a simulation study is performed and the performance of the new distribution is compared to that of the beta log-normal distribution. The new lifetime model can be very useful, and its superiority is illustrated by means of two real data sets.
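The odd log-logistic generator applies F = G^a / (G^a + (1 - G)^a) to a baseline CDF G, with a = 1 recovering the baseline. A minimal sketch with a log-normal baseline follows (one common parameterization; the paper's exact form and notation may differ).

```python
import numpy as np
from scipy.stats import lognorm

def oll_lognormal_cdf(x, a, s, scale=1.0):
    """Odd log-logistic log-normal CDF: F = G^a / (G^a + (1-G)^a),
    with G the log-normal CDF (shape s, scale exp(mu)); a = 1 recovers G."""
    G = lognorm.cdf(x, s, scale=scale)
    return G**a / (G**a + (1.0 - G)**a)

def oll_lognormal_pdf(x, a, s, scale=1.0):
    """Density obtained by differentiating the CDF above."""
    G = lognorm.cdf(x, s, scale=scale)
    g = lognorm.pdf(x, s, scale=scale)
    return a * g * (G * (1.0 - G))**(a - 1.0) / (G**a + (1.0 - G)**a)**2
```

The extra parameter a reshapes the tails of the log-normal baseline, which is what gives the family its flexibility for strongly right-skewed data.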


Stats ◽  
2018 ◽  
Vol 2 (1) ◽  
pp. 15-31
Author(s):  
Arslan Nasir ◽  
Haitham Yousof ◽  
Farrukh Jamal ◽  
Mustafa Korkmaz

In this work, we introduce a new Burr XII power series class of distributions, obtained by compounding exponentiated Burr XII and power series distributions, which has a strong physical motivation. The new class contains several important lifetime models. We derive explicit expressions for the ordinary and incomplete moments and generating functions, and we present the maximum likelihood estimation procedure for the model parameters. We assess the performance of the maximum likelihood estimators in terms of biases, standard deviations, and mean squared errors by means of two simulation studies. The usefulness of the new model is illustrated by means of three real data sets, for which the proposed models provide consistently better fits than other competitive models.
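A generic power-series compounding sketch, assuming the standard "minimum over N" construction F(x) = 1 - C(theta S(x)) / C(theta) with S = 1 - G, N a zero-truncated power-series variable (geometric by default), and one common exponentiated Burr XII parameterization; the paper's exact class may differ.

```python
import numpy as np

def exp_burr12_cdf(x, c, k, alpha):
    """Exponentiated Burr XII CDF, G(x) = [1 - (1 + x^c)^(-k)]^alpha
    (one common parameterization)."""
    return (1.0 - (1.0 + x**c) ** (-k)) ** alpha

def burr12_powerseries_cdf(x, c, k, alpha, theta,
                           C=lambda s: s / (1.0 - s)):
    """CDF of X = min(X_1, ..., X_N): X_i iid exponentiated Burr XII and
    N a zero-truncated power-series variable with series function C
    (geometric by default, C(s) = s + s^2 + ... = s / (1 - s)):
        F(x) = 1 - C(theta * S(x)) / C(theta),  with S = 1 - G.
    """
    S = 1.0 - exp_burr12_cdf(x, c, k, alpha)
    return 1.0 - C(theta * S) / C(theta)
```

As theta tends to zero, N concentrates on 1 and the compound CDF collapses to the baseline, which is one way such classes nest their component lifetime models.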


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 959
Author(s):  
Antonio Barrera ◽  
Patricia Román-Román ◽  
Francisco Torres-Ruiz

The main objective of this work is to introduce a stochastic model associated with the one described by the T-growth curve, which is in turn a modification of the logistic curve. By conveniently reformulating the T curve, it may be obtained as a solution to a linear differential equation. This greatly simplifies the mathematical treatment of the model and allows a diffusion process to be defined, which is derived from the non-homogeneous lognormal diffusion process, whose mean function is a T curve. This allows the phenomenon under study to be viewed in a dynamic way. In these pages, the distribution of the process is obtained, as are its main characteristics. The maximum likelihood estimation procedure is carried out by optimization via metaheuristic algorithms. Thanks to an exhaustive study of the curve, a strategy is obtained to bound the parametric space, which is a requirement for the application of various swarm-based metaheuristic algorithms. A simulation study is presented to show the validity of the bounding procedure and an example based on real data is provided.
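Why bounding the parametric space is a requirement for swarm-based metaheuristics can be seen in a minimal particle swarm optimizer: the particles must be initialized inside, and confined to, a finite box. The sketch below is a generic bounded PSO applied to any negative log-likelihood, not the T-curve diffusion model itself; the inertia and acceleration constants are conventional illustrative values.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm optimizer over a bounded box. Particles are
    initialized uniformly inside the box and clipped back into it, which
    is why finite parameter bounds are a prerequisite."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pval = np.array([f(p) for p in x])                    # personal bests
    g = pbest[pval.argmin()].copy()                       # global best
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                        # stay in the box
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

Given bounds derived from a study of the curve, as described above, `pso_minimize` can be pointed at the model's negative log-likelihood to carry out the estimation.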


Author(s):  
Hao Xiong ◽  
Nicholas Ruozzi

Maximum likelihood learning is a well-studied approach for fitting discrete Markov random fields (MRFs) to data. However, general-purpose maximum likelihood estimation for fitting MRFs with continuous variables has only been studied in much more limited settings. In this work, we propose a generic maximum likelihood estimation procedure for MRFs whose potential functions are modeled by neural networks. To make learning effective in practice, we show how to leverage a highly parallelizable variational inference method that fits easily into popular machine learning frameworks like TensorFlow. We demonstrate experimentally that our approach is capable of effectively modeling the data distributions of a variety of real data sets and that it can compete with other common methods on multilabel classification and generative modeling tasks.


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial is introduced, which generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic property of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
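Classical binomial thinning, the one-parameter special case that the extended operator generalizes, and the resulting INAR(1) recursion can be sketched as follows (the extended two-parameter operator itself is not specified here).

```python
import numpy as np

def binomial_thin(alpha, x, rng):
    """Binomial thinning: alpha o x is a sum of x Bernoulli(alpha) draws."""
    return rng.binomial(x, alpha)

def simulate_inar1(alpha, lam, n, seed=0):
    """INAR(1) with binomial thinning and Poisson(lam) innovations:
        X_t = alpha o X_{t-1} + eps_t,
    whose stationary mean is lam / (1 - alpha) and whose lag-1
    autocorrelation is alpha."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))  # start near the stationary mean
    for t in range(1, n):
        x[t] = binomial_thin(alpha, x[t - 1], rng) + rng.poisson(lam)
    return x
```

Because binomial thinning fixes the survival mechanism to a single probability, the model's dispersion is tied to the innovation distribution; adding a second thinning parameter, as the paper does, relaxes exactly this constraint.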


2021 ◽  
pp. 001316442199489
Author(s):  
Luyao Peng ◽  
Sandip Sinharay

Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of Wollack and Eckerly (2017) and Sinharay (2018) and suggests a new aggregate-level EDI by incorporating the empirical best linear unbiased predictor from the literature of linear mixed-effects models (e.g., McCulloch et al., 2008). A simulation study shows that the new EDI has larger power than the indices of Wollack and Eckerly (2017) and Sinharay (2018). In addition, the new index has satisfactory Type I error rates. A real data example is also included.


Psych ◽  
2021 ◽  
Vol 3 (2) ◽  
pp. 197-232
Author(s):  
Yves Rosseel

This paper discusses maximum likelihood estimation for two-level structural equation models when data are missing at random at both levels. Building on existing literature, a computationally efficient expression is derived to evaluate the observed log-likelihood. Unlike previous work, the expression is valid for the special case where the model implied variance–covariance matrix at the between level is singular. Next, the log-likelihood function is translated to R code. A sequence of R scripts is presented, starting from a naive implementation and ending at the final implementation as found in the lavaan package. Along the way, various computational tips and tricks are given.
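The core idea of evaluating the observed log-likelihood under MAR can be illustrated for a single-level multivariate normal model (a naive sketch in the spirit of the paper's starting point, not the two-level lavaan code itself): each case contributes the density of its observed coordinates only, using the matching sub-vector of the mean and sub-matrix of the covariance.

```python
import numpy as np

def observed_loglik(Y, mu, Sigma):
    """Observed-data log-likelihood of a multivariate normal model with
    missing values (NaN) under MAR: each row contributes the density of
    its observed coordinates only."""
    ll = 0.0
    for y in Y:
        obs = ~np.isnan(y)
        if not obs.any():
            continue                      # fully missing rows drop out
        d = y[obs] - mu[obs]
        S = Sigma[np.ix_(obs, obs)]
        _, logdet = np.linalg.slogdet(S)
        ll -= 0.5 * (obs.sum() * np.log(2.0 * np.pi) + logdet
                     + d @ np.linalg.solve(S, d))
    return ll
```

Grouping rows by missingness pattern, so that each inverse and log-determinant is computed once per pattern rather than once per case, is the first of the computational refinements the paper's sequence of R scripts walks through.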


2016 ◽  
Vol 12 (S325) ◽  
pp. 259-262
Author(s):  
Susana Eyheramendy ◽  
Felipe Elorrieta ◽  
Wilfredo Palma

This paper discusses an autoregressive model for the analysis of irregularly observed time series. The properties of this model are studied and a maximum likelihood estimation procedure is proposed. The finite sample performance of this estimator is assessed by Monte Carlo simulations, which show that the estimators are accurate. We apply this model to the residuals obtained after fitting a harmonic model to light curves of periodic variable stars from the Optical Gravitational Lensing Experiment (OGLE) and Hipparcos surveys, showing that the model can identify time-dependency structure that remains in the residuals when, for example, the period of the light curve was not properly estimated.
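One standard formulation of an irregularly observed AR(1) lets the autocorrelation decay with the actual time gap, so each observation conditions on the previous one with correlation phi to the power of the gap. The sketch below implements the Gaussian log-likelihood of that formulation and its MLE (a sketch in this spirit, not necessarily the authors' exact model; the stationary variance is fixed at 1 for simplicity).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def iar_loglik(phi, t, y, sigma2=1.0):
    """Gaussian log-likelihood of an irregularly observed AR(1):
        y_j | y_{j-1} ~ N(phi**dt * y_{j-1}, sigma2 * (1 - phi**(2*dt))),
    with dt = t_j - t_{j-1}, so correlation decays with the actual gap."""
    ll = -0.5 * (np.log(2.0 * np.pi * sigma2) + y[0] ** 2 / sigma2)
    for j in range(1, len(t)):
        dt = t[j] - t[j - 1]
        m = phi ** dt * y[j - 1]
        v = sigma2 * (1.0 - phi ** (2.0 * dt))
        ll -= 0.5 * (np.log(2.0 * np.pi * v) + (y[j] - m) ** 2 / v)
    return ll

def fit_iar(t, y):
    """Maximum likelihood estimate of phi on (0, 1)."""
    res = minimize_scalar(lambda p: -iar_loglik(p, t, y),
                          bounds=(1e-6, 1.0 - 1e-6), method="bounded")
    return res.x
```

Applied to harmonic-fit residuals, a fitted phi well above zero signals leftover time dependency of the kind described above.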

