Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

Biometrics, 1993, Vol. 49 (1), pp. 227
Author(s):  
R. C. Jansen

Author(s):
Seuk Yen Phoong ◽  
Seuk Wai Phoong

The mixture model is a model-based clustering approach used to model a mixture of unknown distributions. Clustering with mixture models rests on four important criteria: the number of components in the mixture, the clustering kernel (such as Gaussian mixture models, the Dirichlet distribution, etc.), the estimation method, and the dimensionality (Lai et al., 2019). A finite mixture model is a finite-dimensional hierarchical model. It is useful for modeling data with outliers, non-normal distributions, or heavy tails. Furthermore, the finite mixture model is flexible when fitted to data with multiple modes or skewed distributions; this flexibility comes from the number of parameters growing with the number of components. The finite mixture model is a flexible model family that is widely applied to large heterogeneous datasets. In addition, it is a probabilistic model used to examine the presence of unobserved situations or groups and to estimate their distinct parameters or distributions. Situations such as trends, seasonality, crisis periods, normal periods, etc. may affect the number of components in the probabilistic distribution. Furthermore, the finite mixture model is essential for time series data because such data exhibit nonlinearity and may contain missing values or jump-diffusion behavior (Gensler, 2017; McLachlan and Lee, 2019).

Keywords: Bayesian method; Finite Mixture Model; Maximum Likelihood Estimation; Prior distribution; Likelihood Function.
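As a concrete illustration of the maximum likelihood estimation via EM that these abstracts discuss, the following is a minimal sketch of EM for a univariate Gaussian mixture. It is a generic textbook procedure, not any of the cited papers' exact algorithms; the function name `em_gmm_1d` and the quantile-based initialisation are choices made for this example.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """EM for a k-component univariate Gaussian mixture (illustrative sketch)."""
    n = len(x)
    w = np.full(k, 1.0 / k)                              # mixing weights
    mu = np.quantile(x, np.linspace(0, 1, k + 2)[1:-1])  # spread initial means
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: responsibilities (posterior component probabilities)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
w, mu, var = em_gmm_1d(x)
```

Each iteration alternates a soft assignment of points to components with closed-form weighted maximum likelihood updates, which is the structure all three abstracts build on.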


Author(s):  
Loc Nguyen

Dyadic data contain co-occurrences of objects, which are often modeled by a finite mixture model that is in turn learned by the expectation-maximization (EM) algorithm. Objects in traditional dyadic data are identified only by name, with the drawback that implicit, valuable knowledge about the objects cannot be extracted. In this research, I propose so-called attributed dyadic data (ADD), in which each object has an informative attribute and each co-occurrence of two objects is associated with a value. ADD is flexible and covers most structures and forms of dyadic data. The conditional mixture model (CMM), a variant of the finite mixture model, is applied to learning ADD. Moreover, a significant feature of CMM is that any co-occurrence of two objects is conditioned on some conditional variable. As a result, CMM can predict or estimate co-occurrence values through a regression model, which extends the applications of ADD and CMM.
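The regression-based prediction described in this abstract can be illustrated with a mixture of linear regressions fitted by EM, where each component models the response conditionally on a covariate. This is a hedged stand-in for a conditional mixture model, not the paper's actual CMM learning procedure; the function names and the random-restart scheme are assumptions for the example.

```python
import numpy as np

def _em_once(X, y, k, iters, rng):
    """One EM run for a mixture of linear regressions; returns (loglik, params)."""
    w = np.full(k, 1.0 / k)
    beta = rng.normal(size=(k, X.shape[1]))       # per-component coefficients
    sigma2 = np.full(k, y.var())
    ll = -np.inf
    for _ in range(iters):
        # E-step: responsibilities from per-component Gaussian residual densities
        resid = y[:, None] - X @ beta.T
        dens = np.exp(-0.5 * resid ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        joint = w * dens
        tot = joint.sum(axis=1)
        ll = np.log(tot).sum()
        r = joint / tot[:, None]
        # M-step: weighted least squares per component, with a variance floor
        for j in range(k):
            W = r[:, j]
            beta[j] = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
            sigma2[j] = max((W * (y - X @ beta[j]) ** 2).sum() / W.sum(), 1e-6)
        w = r.mean(axis=0)
    return ll, w, beta, sigma2

def em_mix_regression(x, y, k=2, iters=100, restarts=5):
    """Best-of-several-restarts EM fit of a k-component mixture of regressions."""
    X = np.column_stack([np.ones_like(x), x])     # intercept + slope design
    fits = []
    for s in range(restarts):
        try:
            fits.append(_em_once(X, y, k, iters, np.random.default_rng(s)))
        except np.linalg.LinAlgError:
            continue                              # skip degenerate restarts
    ll, w, beta, sigma2 = max(fits, key=lambda t: t[0])
    return w, beta, sigma2

# Two latent linear regimes: y = 1 + 2x and y = -1 - x, plus noise
rng = np.random.default_rng(2)
x = rng.uniform(0, 5, 400)
z = rng.integers(0, 2, 400)
y = np.where(z == 0, 1 + 2 * x, -1 - x) + rng.normal(0, 0.3, 400)
w, beta, sigma2 = em_mix_regression(x, y)
```

Multiple random restarts are used because mixtures of regressions have local optima; keeping the run with the highest log-likelihood is a standard mitigation.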


2021
Author(s):  
Samyajoy Pal ◽  
Christian Heumann

Abstract: A generalized way of building mixture models from different component distributions is explored in this article. The EM algorithm is used, with some modifications, to accommodate different distributions within the same model. The model uses any available point estimate for the respective distributions to estimate the mixture components and model parameters. The study focuses on the application of mixture models to unsupervised learning problems, especially cluster analysis. The convenience of building mixture models with the generalized approach is further emphasised by appropriate examples, exploiting the well-known maximum likelihood and Bayesian estimates of the parameters of the parent distributions.
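The idea of accommodating different parent distributions in one mixture can be sketched with a two-component model mixing a normal and an exponential kernel, where each M-step plugs in that component's weighted closed-form maximum likelihood estimate. This is an illustrative sketch of the general approach, not the authors' implementation; the function name and the example data are assumptions.

```python
import numpy as np

def em_normal_exponential(x, iters=200):
    """EM for a heterogeneous two-component mixture:
    component 0 ~ Normal(mu, sigma2), component 1 ~ Exponential(rate).
    Each M-step uses the component's weighted closed-form ML estimate."""
    w = np.array([0.5, 0.5])
    mu, sigma2 = x.mean(), x.var()
    rate = 1.0 / x.mean()
    for _ in range(iters):
        # E-step: responsibilities under the two different kernels
        d0 = np.exp(-0.5 * (x - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        d1 = rate * np.exp(-rate * x)             # exponential density, x >= 0
        r0 = w[0] * d0
        r1 = w[1] * d1
        tot = r0 + r1
        r0, r1 = r0 / tot, r1 / tot
        # M-step: weighted ML updates per component
        w = np.array([r0.mean(), r1.mean()])
        mu = (r0 * x).sum() / r0.sum()
        sigma2 = (r0 * (x - mu) ** 2).sum() / r0.sum()
        rate = r1.sum() / (r1 * x).sum()
    return w, mu, sigma2, rate

# Non-negative data: an exponential bulk near 0 plus a Gaussian bump near 8
rng = np.random.default_rng(3)
x = np.concatenate([rng.exponential(1.0, 500), rng.normal(8, 1, 500)])
w, mu, sigma2, rate = em_normal_exponential(x)
```

Because each component only needs a point estimate from weighted data, any kernel with a tractable weighted ML (or Bayesian) estimator can be swapped in, which is the convenience the abstract highlights.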

