Minimax Rates of ℓp-Losses for High-Dimensional Linear Errors-in-Variables Models over ℓq-Balls

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 722
Author(s):  
Xin Li ◽  
Dongya Wu

In this paper, the high-dimensional linear regression model is considered, where the covariates are measured with additive noise. Unlike most existing methods, which assume that the true covariates are fully observed, the results in this paper require only that the corrupted covariate matrix is observed. Using information-theoretic tools, the minimax rates of convergence for estimation are investigated in terms of the ℓp (1≤p<∞)-losses under a general sparsity assumption on the underlying regression parameter and some regularity conditions on the observed covariate matrix. The established lower and upper bounds on the minimax risks agree up to constant factors when p=2, and together they provide the information-theoretic limits of estimating a sparse vector in the high-dimensional linear errors-in-variables model. An estimator for the underlying parameter is also proposed and shown to be minimax optimal in the ℓ2-loss.

2020 ◽  
Author(s):  
Yonatan Gutman ◽  
Adam Śpiewak

Wu and Verdú developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set of (bi-)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions. We also obtain lower bounds on compression rates for a fixed stationary process in terms of the rate-distortion dimension rates and study several examples.


2021 ◽  
Author(s):  
Brandon Legried ◽  
Jonathan Terhorst

A number of powerful demographic inference methods have been developed in recent years, with the goal of fitting rich evolutionary models to genetic data obtained from many populations. In this paper we investigate the statistical performance of these methods in the specific case where there is continuous migration between populations. Compared with earlier work, migration significantly complicates the theoretical analysis and demands new techniques. We employ the theories of phase-type distributions and concentration of measure in order to study the two-island and isolation-with-migration models, resulting in both upper and lower bounds. For the upper bounds, we consider inferring coalescent and migration rates on the basis of directly observed pairwise coalescent times and, more realistically, of (conditionally) Poisson-distributed mutations dropped on latent trees. We complement these upper bounds with information-theoretic lower bounds which establish a limit, in terms of sample size, below which inference is effectively impossible.


Author(s):  
Frank Nielsen ◽  
Ke Sun

Information-theoretic measures such as the entropy, the cross-entropy and the Kullback-Leibler divergence between two mixture models are core primitives in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, in practice it is either estimated using costly Monte-Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that algorithmically builds closed-form lower and upper bounds on the entropy, the cross-entropy and the Kullback-Leibler divergence of mixtures. We illustrate the versatility of the method by reporting on experiments approximating the Kullback-Leibler divergence between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures, and Gamma mixtures.
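The costly Monte-Carlo baseline mentioned in the abstract is easy to make concrete: draw samples from one mixture and average the log-density ratio. The sketch below, with illustrative parameters not taken from the paper, estimates KL(p‖q) between two univariate Gaussian mixtures:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_logpdf(x, weights, means, stds):
    # Log-density of a univariate Gaussian mixture, evaluated pointwise.
    comp = (-0.5 * ((x[:, None] - means) / stds) ** 2
            - np.log(stds * np.sqrt(2 * np.pi)))
    return np.log(np.sum(weights * np.exp(comp), axis=1))

def sample_mixture(n, weights, means, stds):
    # Pick a component per sample, then draw from that Gaussian.
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[idx], stds[idx])

def kl_monte_carlo(n, wp, mp, sp, wq, mq, sq):
    # KL(p || q) ≈ mean of log p(x) - log q(x) over x ~ p.
    x = sample_mixture(n, wp, mp, sp)
    return np.mean(mixture_logpdf(x, wp, mp, sp) - mixture_logpdf(x, wq, mq, sq))

# Two toy mixtures (illustrative parameters).
wp, mp, sp = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5])
wq, mq, sq = np.array([0.3, 0.7]), np.array([-0.5, 1.5]), np.array([0.7, 0.7])
print(kl_monte_carlo(100_000, wp, mp, sp, wq, mq, sq))
```

The estimate is unbiased but its standard error shrinks only as n^{-1/2}, which is exactly the cost the paper's closed-form bounds avoid.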




Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 545
Author(s):  
Wei Cao ◽  
Alex Dytso ◽  
Michael Fauß ◽  
H. Vincent Poor

Finite-sample bounds on the accuracy of Bhattacharya’s plug-in estimator for Fisher information are derived. These bounds are further improved by introducing a clipping step that allows for better control over the score function. This leads to superior upper bounds on the rates of convergence, albeit under slightly different regularity conditions. The performance bounds on both estimators are evaluated for the practically relevant case of a random variable contaminated by Gaussian noise. Moreover, using Brown’s identity, two corresponding estimators of the minimum mean-square error are proposed.
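A rough illustration of the plug-in idea (a generic kernel-based sketch, not the authors' exact construction): estimate the density and its derivative with a Gaussian kernel, form the estimated score, and average its square. The bandwidth and sample size below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_info_plugin(samples, bandwidth):
    # Gaussian-kernel estimates of the density f and its derivative f',
    # then the plug-in estimate I ≈ (1/n) Σ (f'(X_i) / f(X_i))²,
    # i.e. the empirical second moment of the estimated score.
    x = samples[:, None]   # evaluation points
    c = samples[None, :]   # kernel centres
    u = (x - c) / bandwidth
    k = np.exp(-0.5 * u ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    f = k.mean(axis=1)
    df = (-u / bandwidth * k).mean(axis=1)
    score = df / f
    return np.mean(score ** 2)

# For N(0, σ²) the true Fisher information is 1/σ²; kernel smoothing
# biases the estimate downward, so expect a value somewhat below 1 here.
samples = rng.normal(0.0, 1.0, size=2000)
print(fisher_info_plugin(samples, bandwidth=0.4))
```

The clipping step described in the abstract would correspond to truncating `score` before squaring, taming the heavy tails of the ratio f̂′/f̂ where the estimated density is small.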


1997 ◽  
Vol 84 (1) ◽  
pp. 176-178
Author(s):  
Frank O'Brien

The author's population density index (PDI) model is extended to three-dimensional distributions. A derived formula is presented that allows for the calculation of the lower and upper bounds of density in three-dimensional space for any finite lattice.


Author(s):  
S. Yahya Mohamed ◽  
A. Mohamed Ali

In this paper, the notion of energy is extended to spherical fuzzy graphs. The adjacency matrix of a spherical fuzzy graph is defined, and the energy of a spherical fuzzy graph is computed as the sum of the absolute values of the eigenvalues of its adjacency matrix. Lower and upper bounds for the energy of spherical fuzzy graphs are also obtained.
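The energy computation described above reduces to an eigenvalue sum. A minimal sketch for one grade component of a spherical fuzzy adjacency matrix (the example matrix is illustrative; a spherical fuzzy edge carries three grades, so the full energy would be a triple computed component-wise):

```python
import numpy as np

def graph_energy(adj):
    # Energy of a symmetric adjacency matrix: sum of absolute eigenvalues.
    eigvals = np.linalg.eigvalsh(adj)
    return np.sum(np.abs(eigvals))

# Membership-grade component of a 3-vertex spherical fuzzy graph
# (hypothetical values; abstinence and non-membership grades would
# each get their own matrix and energy in the same way).
membership = np.array([[0.0, 0.4, 0.3],
                       [0.4, 0.0, 0.5],
                       [0.3, 0.5, 0.0]])
print(graph_energy(membership))
```

Since the matrix is symmetric with zero diagonal, its eigenvalues sum to zero, so the energy is twice the sum of the positive eigenvalues.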

