Weighted Low-Rank Tensor Representation for Multi-View Subspace Clustering

2021 ◽  
Vol 8 ◽  
Author(s):  
Shuqin Wang ◽  
Yongyong Chen ◽  
Fangying Zheng

Multi-view clustering has been deeply explored because the compatible and complementary information among views can be well captured. Recently, low-rank tensor representation-based methods have effectively improved clustering performance by exploring high-order correlations across multiple views. However, most of them express the low-rank structure of the self-representation tensor as the sum of the nuclear norms of its unfolded matrices, which may lose information carried by the tensor structure. In addition, the amount of effective information differs across views, so treating their contributions to clustering equally is unreasonable. To address these issues, we propose a novel weighted low-rank tensor representation (WLRTR) method for multi-view subspace clustering, which encodes the low-rank structure of the representation tensor through Tucker decomposition and weights the core tensor to retain the principal information of the views. Under the augmented Lagrangian method framework, an iterative algorithm is designed to solve the WLRTR method. Numerical studies on four real databases demonstrate that WLRTR outperforms eight state-of-the-art clustering methods.
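The core mechanism the abstract describes, a Tucker compression of the representation tensor with a weighted core, can be sketched in a few lines of numpy. The HOSVD factorization and the simple magnitude-based soft shrinkage of the core below are illustrative assumptions, not the authors' exact weighting scheme:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    new_shape = tuple(M.shape[0] if i == mode else s
                      for i, s in enumerate(T.shape))
    return fold(M @ unfold(T, mode), mode, new_shape)

def weighted_tucker(T, ranks, tau=0.1):
    """Truncated HOSVD followed by soft shrinkage of the core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    # Weight the core: small entries are suppressed, dominant ones kept.
    core = np.sign(core) * np.maximum(np.abs(core) - tau, 0.0)
    rec = core
    for mode, U in enumerate(factors):
        rec = mode_product(rec, U, mode)
    return rec, core, factors
```

With full mode ranks and no shrinkage the reconstruction is exact; truncating the ranks or raising `tau` trades fidelity for a more compact, denoised core.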

2021 ◽  
pp. 1-15
Author(s):  
Zhixuan Xu ◽  
Caikou Chen ◽  
Guojiang Han ◽  
Jun Gao

As a successful improvement on Low-Rank Representation (LRR), Latent Low-Rank Representation (LatLRR) has been one of the state-of-the-art models for subspace clustering, thanks to its capability of discovering the low-dimensional subspace structures of data, especially when the data samples are insufficient and/or extremely corrupted. However, LatLRR does not consider the nonlinear geometric structures within data, so locality information among data is lost in the learning phase. Moreover, the coefficients of the learnt representation matrix can be negative, which lacks interpretability. To overcome these drawbacks, this paper introduces Laplacian, sparsity and non-negativity constraints into the LatLRR model and proposes a novel subspace clustering method, termed Latent Low-Rank Representation with Non-negative, Sparse and Laplacian constraints (NNSLLatLRR), which jointly accounts for the non-negativity, sparsity and Laplacian properties of the learnt representation. As a result, NNSLLatLRR can not only capture the global low-dimensional structure and intrinsic nonlinear geometric information of the data, but also enhance the interpretability of the learnt representation. Extensive experiments on two face benchmark datasets and a handwritten digit dataset show that the proposed method outperforms existing state-of-the-art subspace clustering methods.
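The Laplacian constraint mentioned above is the standard graph-smoothness penalty: samples that are close in the input space should receive similar representation coefficients. A minimal numpy sketch of that penalty, with an assumed k-NN heat-kernel graph (not the exact NNSLLatLRR solver), is:

```python
import numpy as np

def knn_affinity(X, k=3, sigma=1.0):
    """Symmetric k-nearest-neighbour heat-kernel affinity (samples in columns)."""
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros_like(d2)
    for i in range(d2.shape[0]):
        idx = np.argsort(d2[i])[1:k + 1]          # skip the point itself
        W[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                      # symmetrize

def laplacian_penalty(Z, W):
    """tr(Z L Z^T) = 1/2 * sum_ij W_ij ||z_i - z_j||^2 with L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(Z @ L @ Z.T)
```

Adding a term λ·tr(Z L Zᵀ) to the objective penalizes representations that vary across graph edges, which is how locality information enters the learning phase.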


Author(s):  
Boyue Wang ◽  
Yongli Hu ◽  
Junbin Gao ◽  
Yanfeng Sun ◽  
Baocai Yin

Inspired by the success of low-rank representation and sparse subspace clustering, researchers have attempted to impose low-rank and sparse constraints simultaneously on the affinity matrix to improve performance. However, this amounts to a mere trade-off between the two constraints. In this paper, we propose a novel Cascaded Low-Rank and Sparse Representation (CLRSR) method for subspace clustering, which seeks a sparse expression on the previously learned low-rank latent representation. To make the proposed method suitable for multi-dimensional or image-set data, we extend CLRSR onto Grassmann manifolds. An effective solution and its convergence analysis are also provided. Excellent experimental results demonstrate that the proposed method is more robust than other state-of-the-art clustering methods on image-set data.
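The cascade structure, a sparse expression sought on top of a previously learned low-rank representation, can be illustrated with the two classical proximal operators involved. This toy sketch works on a plain matrix and ignores the Grassmann-manifold extension; the function names are ours:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, lam):
    """Elementwise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def cascaded_representation(A, tau=0.5, lam=0.05):
    """Low-rank stage first, then a sparse stage on its output."""
    low_rank = svt(A, tau)
    sparse = soft(low_rank, lam)
    return low_rank, sparse
```

`svt` and `soft` are the proximal operators of the nuclear norm and the ℓ1 norm respectively; the cascade chains them (sparse on top of low-rank) rather than summing the two penalties into a single trade-off.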


2021 ◽  
Author(s):  
Shuqin Wang ◽  
Yongyong Chen ◽  
Yigang Ce ◽  
Linna Zhang ◽  
Viacheslav Voronin

Author(s):  
Yongyong Chen ◽  
Xiaolin Xiao ◽  
Chong Peng ◽  
Guangming Lu ◽  
Yicong Zhou

2020 ◽  
Vol 125 ◽  
pp. 214-223
Author(s):  
Gui-Fu Lu ◽  
Qin-Ru Yu ◽  
Yong Wang ◽  
Ganyi Tang

2015 ◽  
Vol 32 (01) ◽  
pp. 1540008 ◽  
Author(s):  
Lei Yang ◽  
Zheng-Hai Huang ◽  
Yu-Fan Li

This paper studies the recovery task of finding a low multilinear-rank tensor that fulfills some linear constraints in a general setting, which has many applications in computer vision and graphics. This problem is named the low multilinear-rank tensor recovery problem. Variable splitting and convex relaxation are used to transform it into a tractable constrained optimization problem. Exploiting the favorable structure of the problem, we develop a splitting augmented Lagrangian method (SALM) to solve it. The proposed algorithm is easily implemented and its convergence can be proved under some conditions. Preliminary numerical results on randomly generated and real completion problems show that the proposed algorithm is effective and robust for tackling the low multilinear-rank tensor completion problem.
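The variable-splitting idea, one auxiliary variable per mode unfolding, each handled by a cheap matrix operation, can be illustrated with a simplified completion loop. Note that this sketch uses plain alternating projections with hard rank truncation rather than the paper's augmented-Lagrangian updates, and all names are ours:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def truncate(M, r):
    """Best rank-r approximation of a matrix (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def complete(T_obs, mask, ranks=(1, 1, 1), n_iter=200):
    """Fill missing entries (mask == False) of a 3-way tensor."""
    X = T_obs.copy()
    for _ in range(n_iter):
        # Splitting: one low multilinear-rank surrogate per mode, averaged.
        X = np.mean([fold(truncate(unfold(X, m), r), m, X.shape)
                     for m, r in enumerate(ranks)], axis=0)
        X[mask] = T_obs[mask]   # project back onto the observed entries
    return X
```

Each iteration touches only matrix SVDs of the three unfoldings, which is the structural advantage the splitting formulation exposes.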


2020 ◽  
Vol 34 (04) ◽  
pp. 4412-4419 ◽  
Author(s):  
Zhao Kang ◽  
Wangtao Zhou ◽  
Zhitong Zhao ◽  
Junming Shao ◽  
Meng Han ◽  
...  

A plethora of multi-view subspace clustering (MVSC) methods have been proposed over the past few years, and researchers have managed to boost clustering accuracy from different points of view. However, many state-of-the-art MVSC algorithms have quadratic or even cubic complexity, making them inefficient and inherently difficult to apply at large scale. In the era of big data, this computational issue becomes critical. To fill the gap, we propose a large-scale MVSC (LMVSC) algorithm with linear-order complexity. Inspired by the idea of the anchor graph, we first learn a smaller graph for each view. Then, a novel approach is designed to integrate those graphs so that spectral clustering can be performed on a smaller graph. Interestingly, it turns out that our model also applies to the single-view scenario. Extensive experiments on various large-scale benchmark data sets validate the effectiveness and efficiency of our approach with respect to state-of-the-art clustering methods.
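The anchor-graph construction behind the linear complexity can be sketched for a single view: represent each of the n samples over m ≪ n anchors (here k-means centroids) and take singular vectors of the resulting n×m matrix as the spectral embedding. Anchor selection by plain k-means and the heat-kernel weights below are simplifying assumptions of this sketch:

```python
import numpy as np

def kmeans(X, m, n_iter=50, seed=0):
    """Plain Lloyd's algorithm; rows of X are samples."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), m, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(m):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C

def anchor_graph(X, m=10, k=3):
    """n-by-m anchor affinity; each row has at most k nonzeros summing to 1."""
    C = kmeans(X, m)
    d2 = ((X[:, None] - C[None]) ** 2).sum(-1)
    Z = np.zeros_like(d2)
    for i in range(len(X)):
        idx = np.argsort(d2[i])[:k]
        w = np.exp(-d2[i, idx])
        Z[i, idx] = w / w.sum()
    return Z

def spectral_embedding(Z, n_clusters):
    """Left singular vectors of the degree-normalized anchor graph."""
    deg = Z.sum(axis=0)
    Zn = Z / np.sqrt(deg + 1e-12)
    U, _, _ = np.linalg.svd(Zn, full_matrices=False)
    return U[:, :n_clusters]
```

Because every factorization involves only n×m matrices, the cost grows linearly in n; the multi-view algorithm additionally learns and fuses one such graph per view.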


Author(s):  
Lei Zhou ◽  
Xiao Bai ◽  
Dong Wang ◽  
Xianglong Liu ◽  
Jun Zhou ◽  
...  

Subspace clustering is a useful technique for many computer vision applications in which the intrinsic dimension of high-dimensional data is smaller than the ambient dimension. Traditional subspace clustering methods often rely on the self-expressiveness property, which has proven effective for linear subspace clustering. However, they perform unsatisfactorily on real data with complex nonlinear subspaces. More recently, deep autoencoder-based subspace clustering methods have achieved success owing to the more powerful representations extracted by the autoencoder network. Unfortunately, these methods consider only the reconstruction of the original input data and can hardly guarantee that the latent representation is suitable for data distributed in subspaces, which inevitably limits their performance in practice. In this paper, we propose a novel deep subspace clustering method based on a latent distribution-preserving autoencoder, which introduces a distribution consistency loss to guide the learning of a distribution-preserving latent representation, and consequently enables a strong capacity for characterizing real-world data for subspace clustering. Experimental results on several public databases show that our method achieves significant improvement over state-of-the-art subspace clustering methods.
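The self-expressiveness property referred to above is easy to demonstrate in the linear case: each sample is reconstructed from the other samples, and for independent subspaces the coefficient matrix becomes block-diagonal. A ridge-regularized numpy sketch of that property follows (the deep methods apply the idea to a learned latent representation, not to raw data, and the post-hoc diagonal zeroing is a simplification):

```python
import numpy as np

def self_expressive(X, lam=0.1):
    """Closed-form minimizer of ||X - X Z||_F^2 + lam ||Z||_F^2 (samples in columns)."""
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    # Zero the diagonal as a post-hoc step; methods such as SSC instead
    # enforce diag(Z) = 0 inside the optimization.
    np.fill_diagonal(Z, 0.0)
    return Z
```

For data drawn from independent subspaces, |Z| concentrates on same-subspace pairs, so spectral clustering on |Z| + |Z|ᵀ recovers the subspaces; the deep methods aim to learn a latent space in which this linear property holds for real data.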


Procedia CIRP ◽  
2019 ◽  
Vol 83 ◽  
pp. 665-669
Author(s):  
Jing Huang ◽  
Jiangzhong Cao ◽  
Qingyun Dai ◽  
Xiaopeng Chao ◽  
Xiaodong Shi
