locally strongly convex
Recently Published Documents


TOTAL DOCUMENTS: 17 (FIVE YEARS: 1)

H-INDEX: 6 (FIVE YEARS: 1)

2019 ◽ Vol 9 (2) ◽ pp. 361-422
Author(s): Martin Genzel ◽ Alexander Stollenwerk

Abstract This work theoretically studies the problem of estimating a structured high-dimensional signal $\boldsymbol{x}_0 \in{\mathbb{R}}^n$ from noisy $1$-bit Gaussian measurements. Our recovery approach is based on a simple convex program which uses the hinge loss function as the data fidelity term. While such a risk minimization strategy is very natural for learning binary output models, such as in classification, its capacity to estimate a specific signal vector is largely unexplored. A major difficulty is that the hinge loss is just piecewise linear, so that its ‘curvature energy’ is concentrated in a single point. This is substantially different from other popular loss functions considered in signal estimation, e.g. the square or logistic loss, which are at least locally strongly convex. It is therefore somewhat unexpected that we can still prove very similar types of recovery guarantees for the hinge loss estimator, even in the presence of strong noise. More specifically, our non-asymptotic error bounds show that stable and robust reconstruction of $\boldsymbol{x}_0$ can be achieved with the optimal oversampling rate $O(m^{-1/2})$ in terms of the number of measurements $m$. Moreover, we permit a wide class of structural assumptions on the ground truth signal, in the sense that $\boldsymbol{x}_0$ can belong to an arbitrary bounded convex set $K \subset{\mathbb{R}}^n$. The proofs of our main results rely on some recent advances in statistical learning theory due to Mendelson. In particular, we invoke an adapted version of Mendelson’s small ball method that allows us to establish a quadratic lower bound on the error of the first-order Taylor approximation of the empirical hinge loss function.
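The paper itself gives no implementation, but the estimator it studies — minimizing the empirical hinge loss over a bounded convex constraint set $K$ — can be sketched in a few lines. The sketch below is illustrative only: the choice of $K$ as a Euclidean ball, the projected-subgradient solver, the step-size schedule, and the noise model are assumptions made here for concreteness, not the authors' exact setup.

```python
import numpy as np

def hinge_loss_recovery(A, y, radius=1.0, steps=2000, lr=0.05):
    """Minimize the empirical hinge loss (1/m) * sum_i max(0, 1 - y_i <a_i, x>)
    over a Euclidean ball of the given radius, via projected subgradient descent."""
    m, n = A.shape
    x = np.zeros(n)
    for t in range(steps):
        margins = y * (A @ x)                  # y_i <a_i, x>
        active = margins < 1.0                 # samples where the hinge is active
        grad = -(A[active].T @ y[active]) / m  # subgradient of the empirical hinge loss
        x -= lr / np.sqrt(t + 1) * grad
        norm = np.linalg.norm(x)
        if norm > radius:                      # project back onto K (here: a ball)
            x *= radius / norm
    return x

# Toy experiment: sparse ground truth, noisy 1-bit Gaussian measurements.
rng = np.random.default_rng(0)
n, m, s = 200, 1000, 5
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x0 /= np.linalg.norm(x0)
A = rng.standard_normal((m, n))                       # Gaussian measurement vectors
y = np.sign(A @ x0 + 0.3 * rng.standard_normal(m))    # noisy 1-bit observations
x_hat = hinge_loss_recovery(A, y)
x_hat /= np.linalg.norm(x_hat)                        # 1-bit data fixes only the direction
print("directional error:", np.linalg.norm(x_hat - x0))
```

Since 1-bit measurements carry no scale information, the comparison at the end is made between normalized vectors, matching the directional recovery guarantees discussed in the abstract.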


2019 ◽ Vol 23 ◽ pp. 841-873
Author(s): Antoine Godichon-Baggioni

A common problem in statistics consists in estimating the minimizer of a convex function. When dealing with large samples taking values in high-dimensional spaces, stochastic gradient algorithms and their averaged versions are efficient candidates. Indeed, (1) they do not require much computational effort, (2) they do not need to store all the data, which is crucial when dealing with big data, and (3) they allow the estimates to be updated simply, which is important when data arrive sequentially. The aim of this work is to give asymptotic and non-asymptotic rates of convergence of stochastic gradient estimates, as well as of their averaged versions, when the function to be minimized is only locally strongly convex.
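As a rough illustration of the scheme the abstract describes, the sketch below runs stochastic gradient iterates with Polyak-Ruppert averaging on a streaming estimation problem. The geometric median is used here as a canonical example of an objective that is convex but only locally strongly convex; the step-size schedule, the data model, and the choice of objective are illustrative assumptions, not the paper's specific setting.

```python
import numpy as np

def averaged_sgd_geometric_median(stream, dim, c=1.0, alpha=0.66):
    """Stochastic gradient iterates with step c / t**alpha and Polyak-Ruppert
    averaging, targeting the geometric median, i.e. the minimizer of
    h -> E||X - h||, which is convex but only locally strongly convex."""
    h = np.zeros(dim)       # current stochastic gradient iterate
    h_bar = np.zeros(dim)   # running (online) average of the iterates
    for t, x in enumerate(stream, start=1):
        diff = x - h
        norm = np.linalg.norm(diff)
        if norm > 0:
            h += (c / t**alpha) * diff / norm   # stochastic gradient step
        h_bar += (h - h_bar) / t                # update the average on the fly
    return h, h_bar

# Toy usage: data arriving sequentially from a heavy-tailed distribution
# centered near (2, ..., 2); nothing is stored beyond the two running vectors.
rng = np.random.default_rng(1)
data = rng.standard_t(df=3, size=(100_000, 10)) + 2.0
h_sgd, h_avg = averaged_sgd_geometric_median(iter(data), dim=10)
print("plain SGD estimate:", np.round(h_sgd, 2))
print("averaged estimate: ", np.round(h_avg, 2))
```

The averaged iterate typically concentrates faster around the minimizer than the plain iterate, which is the behavior whose asymptotic and non-asymptotic rates the paper quantifies.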


2016 ◽ Vol 108 (1) ◽ pp. 119-147
Author(s): Abdelouahab Chikh Salah ◽ Luc Vrancken
