Recovering Low-Rank and Sparse Matrices via Robust Bilateral Factorization

Author(s):  
Fanhua Shang ◽  
Yuanyuan Liu ◽  
James Cheng ◽  
Hong Cheng
2015 ◽  
Vol 22 (11) ◽  
pp. 1945-1949 ◽  
Author(s):  
Sampurna Biswas ◽  
Hema K. Achanta ◽  
Mathews Jacob ◽  
Soura Dasgupta ◽  
Raghuraman Mudumbai

Biometrika ◽  
2019 ◽  
Vol 107 (1) ◽  
pp. 205-221 ◽  
Author(s):  
Antik Chakraborty ◽  
Anirban Bhattacharya ◽  
Bani K Mallick

Summary
We develop a Bayesian methodology aimed at simultaneously estimating low-rank and row-sparse matrices in a high-dimensional multiple-response linear regression model. We consider a carefully devised shrinkage prior on the matrix of regression coefficients which obviates the need to specify a prior on the rank, and shrinks the regression matrix towards low-rank and row-sparse structures. We provide theoretical support to the proposed methodology by proving minimax optimality of the posterior mean under the prediction risk in ultra-high-dimensional settings where the number of predictors can grow subexponentially relative to the sample size. A one-step post-processing scheme induced by group lasso penalties on the rows of the estimated coefficient matrix is proposed for variable selection, with default choices of tuning parameters. We additionally provide an estimate of the rank using a novel optimization function achieving dimension reduction in the covariate space. We exhibit the performance of the proposed methodology in an extensive simulation study and a real data example.
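The row-wise group-lasso post-processing described above amounts to applying the group-lasso proximal operator to the rows of the estimated coefficient matrix: rows whose Euclidean norm falls below the tuning parameter are zeroed out, which performs variable selection. The sketch below is not the authors' procedure, just a minimal illustration of that proximal step; the matrix `C`, the threshold `lam`, and the helper name `group_soft_threshold` are all illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(C, lam):
    """Row-wise group-lasso proximal step (illustrative, not the paper's code):
    each row of C is shrunk by lam relative to its Euclidean norm, and rows
    whose norm is below lam are zeroed out, inducing row sparsity."""
    out = np.zeros_like(C)
    norms = np.linalg.norm(C, axis=1)
    for i, n in enumerate(norms):
        if n > lam:
            out[i] = (1.0 - lam / n) * C[i]
    return out

# Toy coefficient matrix: 5 predictors, 3 responses; rows 0 and 2 carry signal.
C = np.array([[ 2.00,  1.00,  0.50],
              [ 0.10, -0.05,  0.02],
              [-1.50,  2.00,  1.00],
              [ 0.05,  0.00,  0.10],
              [ 0.00,  0.08, -0.03]])
S = group_soft_threshold(C, lam=0.5)
selected = np.nonzero(np.linalg.norm(S, axis=1))[0]  # indices of retained rows
```

On this toy input the small-norm rows 1, 3, and 4 are zeroed and only rows 0 and 2 survive, mirroring how the post-processing step selects variables.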


2021 ◽  
Vol 47 (3) ◽  
pp. 1-37
Author(s):  
Srinivas Eswar ◽  
Koby Hayashi ◽  
Grey Ballard ◽  
Ramakrishnan Kannan ◽  
Michael A. Matheson ◽  
...  

We consider the problem of low-rank approximation of massive dense nonnegative tensor data, for example, to discover latent patterns in video and imaging applications. As the size of data sets grows, single workstations are hitting bottlenecks in both computation time and available memory. We propose a distributed-memory parallel computing solution to handle massive data sets, loading the input data across the memories of multiple nodes, and performing efficient and scalable parallel algorithms to compute the low-rank approximation. We present a software package called Parallel Low-rank Approximation with Nonnegativity Constraints, which implements our solution and allows for extension in terms of data (dense or sparse, matrices or tensors of any order), algorithm (e.g., from multiplicative updating techniques to alternating direction method of multipliers), and architecture (we exploit GPUs to accelerate the computation in this work). We describe our parallel distributions and algorithms, which are careful to avoid unnecessary communication and computation, show how to extend the software to include new algorithms and/or constraints, and report efficiency and scalability results for both synthetic and real-world data sets.
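Among the algorithms the package supports, the abstract names multiplicative updating techniques for nonnegativity-constrained low-rank approximation. The snippet below is a minimal serial sketch of the classic Lee-Seung multiplicative updates for matrix data, not the authors' distributed implementation; the function name `nmf_mu`, the iteration count, and the toy data are assumptions for illustration.

```python
import numpy as np

def nmf_mu(A, rank, iters=200, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for A ~= W @ H with W, H >= 0,
    minimizing Frobenius error. A serial sketch of the kind of kernel
    a distributed-memory package parallelizes across nodes."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Elementwise multiplicative updates preserve nonnegativity.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

# Toy example: exactly rank-2 nonnegative data should be fit closely.
rng = np.random.default_rng(1)
A = rng.random((30, 2)) @ rng.random((2, 20))
W, H = nmf_mu(A, rank=2)
rel_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

The communication-avoiding distributions described in the paper partition exactly these matrix products (`W.T @ A`, `A @ H.T`, and the small Gram matrices) across node memories.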


2013 ◽  
Vol 59 (8) ◽  
pp. 5186-5205 ◽  
Author(s):  
Morteza Mardani ◽  
Gonzalo Mateos ◽  
Georgios B. Giannakis
