Information Theoretic Learning
Recently Published Documents


TOTAL DOCUMENTS

63
(FIVE YEARS 10)

H-INDEX

12
(FIVE YEARS 0)

2021 ◽  
Author(s):  
Rangeet Mitra ◽  
Georges Kaddoum ◽  
Daniel B. da Costa

Information theoretic learning (ITL) criteria have emerged as useful tools for mitigating degradations caused by unknown non-Gaussian noise processes in future wireless communication systems. Specifically, reproducing kernel Hilbert space (RKHS) based approaches relying on ITL criteria are envisioned to provide near-optimal mitigation of unknown hardware impairments and non-Gaussian noises. Among the several ITL criteria, recent works find the minimum error entropy with fiducial points (MEE-FP) promising due to its guarantee of unbiased estimation and generalization over generic noise distributions. However, MEE-FP based learning approaches are known to depend on an accurate kernel-width initialization, and the optimal value of this kernel-width is well known to vary temporally and across deployment scenarios. To remove the dependency on kernel-width, a hyperparameter-free MEE-FP based adaptive algorithm is derived using random Fourier features with sampled kernel widths (RFF-SKW). In addition, a detailed convergence analysis is presented for the proposed hyperparameter-free MEE-FP, which promises a near-optimal error floor independent of the step size and guarantees convergence for a wide range of step sizes. The promised hyperparameter independence and improved convergence of the proposed hyperparameter-free MEE-FP are validated by computer simulations considering different case studies.
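As an illustration of the random-Fourier-feature construction the abstract describes, the sketch below draws each feature's frequency after first sampling a kernel width from a candidate set, so the resulting feature map approximates an average of Gaussian kernels rather than a kernel with one fixed width. The function name, the width grid, and the mixture scheme are assumptions for illustration, not the authors' exact RFF-SKW algorithm.

```python
import numpy as np

def rff_sampled_widths(x, n_features=64, widths=(0.5, 1.0, 2.0), seed=0):
    """Random Fourier features whose frequencies are drawn from a mixture
    over several candidate kernel widths (hypothetical RFF-SKW-style sketch).

    Each feature map value is bounded, and phi(x) @ phi(y) approximates an
    average of Gaussian kernels k_sigma(x, y) over the sampled widths.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    # For each feature, pick a width uniformly, then draw omega ~ N(0, I / width^2).
    chosen = rng.choice(widths, size=n_features)
    omega = rng.standard_normal((n_features, d)) / chosen[:, None]
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ omega.T + b)
```

Because the widths are averaged over rather than tuned, no single kernel-width hyperparameter needs to be initialized, which is the point the abstract makes.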



Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6165
Author(s):  
Nabil Shaukat ◽  
Muhammad Moinuddin ◽  
Pablo Otero

The ability of an underwater vehicle to determine its precise position is vital to completing a mission successfully. Multi-sensor fusion methods for underwater vehicle positioning are commonly based on Kalman filtering, which requires knowledge of the process and measurement noise covariances. As underwater conditions are continuously changing, incorrect process and measurement noise covariances degrade the accuracy of position estimation and sometimes cause divergence. Furthermore, the underwater multi-path effect and nonlinearity cause outliers that have a significant impact on positional accuracy. These non-Gaussian outliers are difficult to handle with conventional Kalman-based methods and their fuzzy variants. To address these issues, this paper presents a new and improved adaptive multi-sensor fusion method that uses information-theoretic-learning-based fuzzy rules for Kalman filter covariance adaptation in the presence of outliers. Two novel metrics are proposed that utilize correntropy Gaussian and Versoria kernels for matching the theoretical and actual covariance. Using correntropy-based metrics and fuzzy logic together makes the algorithm robust against outliers in nonlinear, dynamic underwater conditions. The performance of the proposed sensor fusion technique is compared and evaluated using Monte Carlo simulations, and substantial improvements in underwater position estimation are obtained.
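To make the covariance-matching idea concrete, the sketch below scores the agreement between a theoretical innovation covariance and a correntropy-weighted sample covariance, using either a Gaussian or a Versoria-shaped kernel so that outlier innovations contribute little. The function names, the Versoria normalization, and the trace-ratio metric are assumptions for illustration, not the paper's exact metrics or fuzzy rules.

```python
import numpy as np

def gaussian_kernel(e, sigma=1.0):
    # Gaussian correntropy kernel: weight decays smoothly with error magnitude.
    return np.exp(-e**2 / (2.0 * sigma**2))

def versoria_kernel(e, sigma=1.0):
    # Versoria (witch-of-Agnesi-shaped) kernel; this normalization (value 1 at
    # e = 0) is an illustrative assumption.
    return sigma**2 / (sigma**2 + e**2)

def covariance_match_metric(innovations, S_theory, kernel=gaussian_kernel):
    """Ratio of a correntropy-weighted sample innovation covariance trace to
    the theoretical covariance trace. Large-residual (outlier) innovations
    receive small kernel weights and barely affect the estimate; a fuzzy rule
    base could then adapt the filter's noise covariances from this ratio."""
    w = kernel(np.linalg.norm(innovations, axis=1))
    outer = np.einsum('ni,nj->nij', innovations, innovations)
    S_actual = (w[:, None, None] * outer).sum(axis=0) / w.sum()
    return np.trace(S_actual) / np.trace(S_theory)
```

A ratio near 1 indicates that the assumed covariances are consistent with the observed innovations; values far from 1 would trigger covariance inflation or deflation in an adaptive scheme.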


Author(s):  
Shujian Yu ◽  
Luis Sanchez Giraldo ◽  
Jose Principe

We present a review of the recent advances and emerging opportunities around the theme of analyzing deep neural networks (DNNs) with information-theoretic methods. We first discuss popular information-theoretic quantities and their estimators. We then introduce recent developments in information-theoretic learning principles (e.g., loss functions, regularizers, and objectives) and their parameterization with DNNs. Finally, we briefly review current usages of information-theoretic concepts in a few modern machine learning problems and list some emerging opportunities.
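One estimator family that recurs in this literature is the matrix-based Rényi entropy, which avoids explicit density estimation by taking the entropy of a normalized Gram matrix's spectrum. The sketch below is a simplified illustration (Gaussian kernel, fixed width); the function name and parameter choices are assumptions, not the review's prescribed estimator.

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy sketch: build a Gaussian Gram matrix
    over the samples, normalize it to unit trace, and take the Renyi entropy
    of its eigenvalue spectrum (in bits)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma**2))
    A = K / n  # K_ii = 1, so dividing by n gives trace(A) = 1
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # clip tiny negative eigenvalues
    return np.log2((lam**alpha).sum()) / (1.0 - alpha)
```

For n well-separated samples the Gram matrix is nearly diagonal and the estimate approaches log2(n), the maximum; for n identical samples it approaches 0, matching the intuition of entropy over a discrete spectrum.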


Author(s):  
Yunxiang Zhang ◽  
Yuyang Zhao ◽  
Gang Wang ◽  
Rui Xue

Most of the cost functions of adaptive filtering algorithms include the square error, which depends on the current error signal. When the additive noise is impulsive, we can expect the square error to be very large. By contrast, the cross error, which is the correlation of the error signal and its delay, may be very small. Based on this fact, we propose a new cost function called the mean square cross error for adaptive filters, and provide a detailed mean-value and mean-square performance analysis. Furthermore, we present a two-stage method to estimate the closed-form solution of the proposed method, and generalize the two-stage method to estimate the closed-form solutions of information theoretic learning methods, including least mean fourth, the maximum correntropy criterion, the generalized maximum correntropy criterion, and minimum kernel risk-sensitive loss. Simulations of the adaptive solutions and the closed-form solution show the effectiveness of the new method.
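Several of the criteria named in the abstract admit simple stochastic-gradient filters. Below is a minimal sketch of an adaptive filter under the maximum correntropy criterion (MCC), one of the listed ITL methods: the Gaussian weight on the instantaneous error shrinks updates driven by impulsive outliers. The step size, kernel width, and data model are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mcc_filter(X, d, sigma=1.0, mu=0.1):
    """LMS-style adaptive filter maximizing correntropy between the desired
    signal and the filter output. The update is the plain LMS update scaled
    by exp(-e^2 / (2 sigma^2)), so a huge (impulsive) error produces an
    almost-zero weight and the estimate is barely perturbed."""
    w = np.zeros(X.shape[1])
    for x, dn in zip(X, d):
        e = dn - w @ x
        w += mu * np.exp(-e**2 / (2.0 * sigma**2)) * e * x
    return w
```

With Gaussian errors of moderate size the weight is close to 1 and the filter behaves like ordinary LMS, which is why MCC is often described as interpolating between L2-like behavior near zero error and outlier rejection in the tails.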


2021 ◽  
Vol 33 (1) ◽  
pp. 157-173
Author(s):  
Yunlong Feng

Stemming from information-theoretic learning, the correntropy criterion and its applications to machine learning tasks have been extensively studied and explored. Its application to regression problems leads to a robustness-enhanced regression paradigm: correntropy-based regression. Alongside a great variety of successful real-world applications, its theoretical properties have recently been investigated in a series of studies from a statistical learning viewpoint. The resulting big picture is that correntropy-based regression regresses toward the conditional mode function or the conditional mean function robustly under certain conditions. Continuing this trend and going further, in this study, we report some new insights into this problem. First, we show that under the additive noise regression model, such a regression paradigm can be deduced from minimum distance estimation, implying that the resulting estimator is essentially a minimum distance estimator and thus possesses robustness properties. Second, we show that the regression paradigm in fact provides a unified approach to regression problems, in that it approaches the conditional mean, the conditional mode, and the conditional median functions under certain conditions. Third, we present some new results when it is used to learn the conditional mean function, by developing its error bounds and exponential convergence rates under conditional ([Formula: see text])-moment assumptions. The saturation effect on the established convergence rates, which was observed under ([Formula: see text])-moment assumptions, still occurs, indicating the inherent bias of the regression estimator. These novel insights deepen our understanding of correntropy-based regression, help cement the theoretical correntropy framework, and enable us to investigate learning schemes induced by general bounded nonconvex loss functions.
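To make the robustness mechanism of correntropy-based regression concrete, here is a minimal sketch of the linear case solved by half-quadratic iteration, i.e., iteratively reweighted least squares with Gaussian weights on the residuals. The function name, the initialization, and the iteration count are illustrative assumptions, not this paper's estimator or analysis.

```python
import numpy as np

def correntropy_regression(X, y, sigma=1.0, n_iter=50):
    """Correntropy-based linear regression via half-quadratic optimization.
    Each iteration solves a weighted least-squares problem whose weights
    exp(-r^2 / (2 sigma^2)) shrink toward zero for large residuals, so gross
    outliers are effectively excluded from the fit."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS warm start
    for _ in range(n_iter):
        r = y - X @ w
        q = np.exp(-r**2 / (2.0 * sigma**2))  # per-sample correntropy weights
        W = q[:, None] * X
        w = np.linalg.solve(X.T @ W, W.T @ y)  # weighted normal equations
    return w
```

As the kernel width sigma grows, the weights flatten toward 1 and the estimator approaches ordinary least squares (hence the conditional mean); for small sigma it concentrates on the bulk of the residual distribution, consistent with the mode-seeking behavior the abstract describes.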

