Low-Rank Representation
Recently Published Documents

TOTAL DOCUMENTS: 442 (FIVE YEARS: 61)
H-INDEX: 27 (FIVE YEARS: 0)

2021 · Vol 2021 · pp. 1-14
Author(s): Wenyun Gao, Xiaoyun Li, Sheng Dai, Xinghui Yin, Stanley Ebhohimhen Abhadiomhen

The low-rank representation (LRR) method has recently gained enormous popularity for its robust approach to the subspace segmentation problem, particularly with corrupted data. In this paper, the recursive sample scaling low-rank representation (RSS-LRR) method is proposed. Its advantage over traditional LRR is that it further introduces a cosine scaling factor, which imposes a penalty on each sample to better suppress the influence of noise and outliers. Specifically, the cosine scaling factor is a similarity measure learned to capture each sample's relationship with the principal components of the low-rank representation in the feature space. In other words, the smaller the angle between a data sample and the low-rank representation's principal components, the more likely the sample is clean. The proposed method can thus obtain a good low-rank representation influenced mainly by clean data. Several experiments with varying levels of corruption are performed on ORL, CMU PIE, COIL20, COIL100, and LFW to evaluate RSS-LRR against state-of-the-art low-rank methods. The experimental results show that RSS-LRR consistently outperforms the compared methods in image clustering and classification tasks.
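The cosine scoring idea described above can be illustrated with a small numeric sketch: each sample is scored by the cosine of its angle to the top principal directions of the low-rank reconstruction, so samples lying in the clean subspace score near 1 while outliers score lower. This is not the authors' implementation; the function name, the identity matrix used as a stand-in representation, and the toy data are all illustrative assumptions.

```python
import numpy as np

def cosine_sample_weights(X, Z, k=2):
    """Score each column of X by the cosine of its angle to the
    top-k principal directions of the low-rank reconstruction X @ Z.
    X: (d, n) data matrix; Z: (n, n) representation matrix."""
    L = X @ Z                                   # low-rank reconstruction
    U, _, _ = np.linalg.svd(L, full_matrices=False)
    P = U[:, :k]                                # top-k principal directions
    proj = P @ (P.T @ X)                        # projection of each sample
    num = np.einsum("ij,ij->j", X, proj)        # per-column inner products
    den = np.linalg.norm(X, axis=0) * np.linalg.norm(proj, axis=0) + 1e-12
    return num / den                            # cosine per sample

rng = np.random.default_rng(0)
# 20 clean samples lying exactly in a 2-D subspace, plus 1 random outlier
B = rng.normal(size=(10, 2))
clean = B @ rng.normal(size=(2, 20))
outlier = rng.normal(size=(10, 1))
X = np.hstack([clean, outlier])
Z = np.eye(21)                                  # identity as stand-in representation
w = cosine_sample_weights(X, Z, k=2)
```

Clean samples receive weights close to 1, while the outlier, being nearly orthogonal to the learned 2-D subspace, is down-weighted, which is the penalty effect the abstract describes.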


2021 · Vol 12
Author(s): Shuguang Han, Ning Wang, Yuxin Guo, Furong Tang, Lei Xu, ...

Inspired by L1-norm minimization methods such as basis pursuit, compressed sensing, and Lasso feature selection, sparse representation has emerged in recent years as a novel and powerful data processing method. Researchers have not only extended the sparse representation of signals to images, but also generalized the sparsity of vectors to that of matrices; sparse representation has likewise been applied to pattern recognition with good results. Because of its multiple advantages, such as insensitivity to noise, strong robustness, low sensitivity to the selected features, and resistance to overfitting, the application of sparse representation in bioinformatics merits further study. This article reviews the development of sparse representation and its applications in bioinformatics, namely the use of low-rank representation matrices to identify and study cancer molecules, low-rank sparse representations to analyze and process gene expression profiles, and an introduction to related cancers and gene expression profile databases.
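As background for the L1-norm minimization methods mentioned above, a minimal Lasso solver via ISTA (iterative soft-thresholding) shows how a sparse vector can be recovered from a small number of linear measurements. The function and the toy problem below are illustrative sketches, not taken from the reviewed work.

```python
import numpy as np

def ista_lasso(A, b, lam=0.05, n_iter=2000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with ISTA:
    a gradient step on the smooth term followed by
    soft-thresholding, which enforces sparsity."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2         # step size 1/L, L = ||A||_2^2
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                   # gradient of the smooth part
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))                   # 30 measurements, 60 unknowns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]          # 3-sparse ground truth
b = A @ x_true
x_hat = ista_lasso(A, b)
```

Despite the underdetermined system (30 equations, 60 unknowns), the L1 penalty concentrates the estimate on the three true support indices, which is the "powerful superiority" of sparsity that the review builds on.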


2021 · Vol 2021 · pp. 1-16
Author(s): Xi-Cheng Lou, Xin Feng

A multimodal medical image fusion algorithm based on multiple latent low-rank representation is proposed to improve imaging quality by resolving blurred details and enhancing the display of lesions. First, the proposed method repeatedly decomposes the source image using latent low-rank representation to obtain several saliency parts and one low-rank part. Second, the VGG-19 network extracts features from the low-rank part and generates the weight maps; the fused low-rank part is then obtained by taking the Hadamard product of the weight maps and the source images. Third, the fused saliency parts are obtained by selecting the maximum value. Finally, the fused saliency parts and the fused low-rank part are superimposed to obtain the fused image. Experimental results show that the proposed method outperforms traditional multimodal medical image fusion algorithms in both subjective evaluation and objective metrics.
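The final fusion steps (a weighted element-wise Hadamard combination for the low-rank part, element-wise max over saliency parts, then superposition) can be sketched as below. The latent-LRR decomposition and the VGG-19-derived weight maps are assumed precomputed and are replaced by random placeholder arrays; applying the weights to the low-rank parts rather than the full source images is a simplification of the abstract's description.

```python
import numpy as np

def fuse_images(low_rank_parts, saliency_parts, weight_maps):
    """Combine decomposed parts of two source images:
    - fused low-rank part: per-pixel weighted (Hadamard) combination,
    - saliency parts: fused pairwise by element-wise maximum,
    - all fused parts superimposed into the final image."""
    fused_lr = sum(w * p for w, p in zip(weight_maps, low_rank_parts))
    fused_sal = sum(np.maximum(a, b) for a, b in zip(*saliency_parts))
    return fused_lr + fused_sal

rng = np.random.default_rng(2)
shape = (4, 4)                                    # tiny stand-in images
lr = [rng.random(shape), rng.random(shape)]       # low-rank part per image
sal = [[rng.random(shape) for _ in range(2)],     # saliency parts, image 1
       [rng.random(shape) for _ in range(2)]]     # saliency parts, image 2
wm1 = rng.random(shape)
weights = [wm1, 1.0 - wm1]                        # per-pixel weights sum to 1
fused = fuse_images(lr, sal, weights)
```

Max-selection keeps the strongest detail response at each pixel, while the weighted low-rank combination preserves the smooth global structure of both modalities.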

