Noisy independent factor analysis model for density estimation and classification

2010 · Vol 4 (0) · pp. 707-736
Author(s): Umberto Amato, Anestis Antoniadis, Alexander Samarov, Alexandre B. Tsybakov

2005 · Vol 2 (2)
Author(s): Cinzia Viroli

Independent Factor Analysis (IFA) has recently been proposed in the signal processing literature as a way to model a set of observed variables through linear combinations of hidden independent ones plus a noise term. Despite the peculiarity of its origin, the method can be framed within the latent variable model domain, and some parallels with ordinary factor analysis can be drawn. If no prior information on the latent structure is available, a relevant issue concerns the correct specification of the model. In this work, some methods to detect the number of significant latent variables are investigated. Moreover, since the method defines the probability density function of the latent variables as a mixture of Gaussians, the correct number of mixture components must also be determined. This issue is treated according to two main approaches: the first carries out a likelihood ratio test, while the second is based on a penalized form of the likelihood, leading to the so-called information criteria. Some simulations and empirical results on real data sets are finally presented.
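The penalized-likelihood (information criterion) approach to choosing the number of mixture components can be sketched generically. The snippet below is a minimal illustration using scikit-learn's `GaussianMixture` (an assumption for illustration; the paper's own IFA implementation is not specified here), selecting the component count by BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic one-dimensional sample from a 2-component Gaussian mixture
x = np.concatenate([rng.normal(-2.0, 0.5, 300),
                    rng.normal(1.5, 0.8, 200)]).reshape(-1, 1)

# Penalized likelihood: fit candidate models and compare BIC scores
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
        for k in range(1, 5)}
best_k = min(bics, key=bics.get)  # smallest BIC wins
print(best_k)
```

The likelihood ratio test mentioned in the abstract would instead compare the fitted log-likelihoods of nested models directly; BIC simply folds the complexity penalty into a single score per candidate.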


Entropy · 2021 · Vol 23 (8) · pp. 1012
Author(s): Sebastian Ciobanu, Liviu Ciortuz

Linear regression (LR) is a core supervised machine learning model for regression tasks. One can fit this model using either an analytic closed-form formula or an iterative algorithm. Fitting via the analytic formula becomes a problem when the number of predictors exceeds the number of samples, because the closed-form solution contains a matrix inverse that is not defined in that case. The standard approaches to this issue are the Moore–Penrose pseudoinverse and L2 regularization. We propose another solution starting from a model used in unsupervised learning for dimensionality reduction, or simply for density estimation: factor analysis (FA) with a one-dimensional latent space. The density estimation task is our focus since, in this case, FA can fit a Gaussian distribution even if the dimensionality of the data exceeds the number of samples; we retain this advantage when creating the supervised counterpart of factor analysis, which is linked to linear regression. We also create its semisupervised counterpart and then extend it to handle missing data. We prove an equivalence to linear regression and run experiments for each extension of the factor analysis model. The resulting algorithms are either a closed-form solution or an expectation–maximization (EM) algorithm; the latter is linked to information theory through optimizing a function containing a Kullback–Leibler (KL) divergence or the entropy of a random variable.
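The two standard remedies named above, the Moore–Penrose pseudoinverse and L2 (ridge) regularization, can be sketched in a few lines of NumPy. This is a minimal illustration of the baseline methods, not the authors' FA-based estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 50          # more predictors than samples: X^T X is singular
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

# Moore-Penrose pseudoinverse: minimum-norm least-squares solution
w_pinv = np.linalg.pinv(X) @ y

# L2 (ridge) regularization: X^T X + lam*I is invertible for any lam > 0
lam = 1e-2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# When rank(X) = n, the pseudoinverse solution interpolates the training data
print(np.linalg.norm(X @ w_pinv - y))
```

Both estimators are well defined for p > n, which is exactly the regime where the plain normal-equations inverse breaks down; the ridge solution additionally shrinks the coefficients toward zero.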


1997 · Vol 24 (1) · pp. 3-18
Author(s): Michael W. Browne, Krishna Tateneni

2018 · Vol 66 · pp. S11-S12
Author(s): A. Coni, S. Mellone, M. Colpo, S. Bandinelli, L. Chiari
