A Dictionary Learning Algorithm Based on Dictionary Reconstruction and Its Application in Face Recognition

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Shijun Zheng ◽  
Yongjun Zhang ◽  
Wenjie Liu ◽  
Yongjie Zou ◽  
Xuexue Zhang

In recent years, dictionary learning has received increasing attention in the study of face recognition. However, most dictionary learning algorithms use the original training samples directly to learn the dictionary, ignoring the noise present in those samples: images of the same subject differ because of changes in illumination, expression, and so on. To address this problem, this paper proposes the dictionary relearning algorithm (DRLA) based on locality constraint and label embedding, which effectively reduces the influence of noise on dictionary learning. In the proposed algorithm, the initial dictionary and coding coefficient matrix are first obtained directly from the training samples; the original training samples are then reconstructed as the product of the initial dictionary and coding coefficient matrix. Finally, the dictionary learning algorithm is reapplied to this reconstruction to obtain a new dictionary and coding coefficient matrix, which are used for subsequent image classification. Because the reconstruction partially eliminates the noise in the original training samples, the proposed algorithm yields more robust classification results. Experimental results demonstrate that the proposed algorithm achieves better recognition accuracy than several state-of-the-art algorithms.
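The learn-reconstruct-relearn pipeline the abstract describes can be sketched as follows; a toy alternating least-squares learner stands in for the paper's actual locality-constrained, label-embedded algorithm, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn(Y, k):
    """Toy dictionary learner: alternate least-squares coding and a
    least-squares dictionary update (a stand-in for the paper's learner)."""
    D = rng.standard_normal((Y.shape[0], k))
    for _ in range(20):
        X = np.linalg.lstsq(D, Y, rcond=None)[0]        # coding step: D X ~ Y
        D = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T  # dictionary step
        D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    X = np.linalg.lstsq(D, Y, rcond=None)[0]            # codes for the final D
    return D, X

Y = rng.standard_normal((30, 100))   # noisy training samples (columns)
D1, X1 = learn(Y, k=10)              # step 1: initial dictionary and codes
Y_hat = D1 @ X1                      # step 2: reconstructed (denoised) samples
D2, X2 = learn(Y_hat, k=10)         # step 3: relearn on the reconstruction
```

Because `Y_hat` lives in the low-dimensional span of the first dictionary, the second learning pass sees a partially denoised version of the data, which is the core of the relearning idea.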

2016 ◽  
Vol 25 (04) ◽  
pp. 1650017 ◽  
Author(s):  
Zhengming Li

Dictionary learning (DL) algorithms have shown very good performance in face recognition. However, conventional DL algorithms exploit only the training samples to obtain the dictionary and entirely neglect the test sample during learning. If DL is instead associated with the linear representation of the test sample, it may classify test samples better than conventional DL algorithms. In this paper, we propose a test sample oriented dictionary learning (TSODL) algorithm for face recognition. We combine the linear representation (including the [Formula: see text]-norm, [Formula: see text]-norm, and [Formula: see text]-norm) of a test sample with the basic DL model to learn a single dictionary for each test sample, so the dictionary and the representation coefficients of the test sample are obtained simultaneously by minimizing a single objective function. To make learning more efficient, the dictionary for a new test sample is initialized by selecting from the dictionaries of previous test samples. Experimental results on three public face databases show that, with a linear classifier, TSODL classifies test samples more accurately than several state-of-the-art DL and sparse coding algorithms.
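The idea of minimizing one objective jointly over the dictionary and the test sample's representation can be illustrated with an l1-regularized sketch; the ISTA coding step and the gradient dictionary step below are generic stand-ins, not the paper's solver, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tsodl_sketch(y, D, lam=0.1, eta=0.01, iters=50):
    """Alternately code the single test sample y (one ISTA step on
    0.5*||y - Dx||^2 + lam*||x||_1) and nudge the dictionary D with a
    gradient step on the same residual (illustrative only)."""
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant
        x = soft(x + D.T @ (y - D @ x) / L, lam / L)    # coding step
        D = D + eta * np.outer(y - D @ x, x)            # dictionary step
        D = D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit atoms
    return D, x

y = rng.standard_normal(20)              # the single test sample
D0 = rng.standard_normal((20, 15))       # e.g. reused from a previous sample
D, x = tsodl_sketch(y, D0)
```

Both unknowns are updated against the same residual, which is what lets one objective yield the dictionary and the coefficients together.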


Author(s):  
Guojun Lin ◽  
Meng Yang ◽  
Linlin Shen ◽  
Mingzhong Yang ◽  
Mei Xie

For face recognition, conventional dictionary learning (DL) methods have two main disadvantages. First, face images of the same person vary with facial expression, pose, illumination, and disguise, so it is hard to learn a robust dictionary. Second, they do not completely cover important components (e.g., particularity and disturbance), which limits their performance. In this paper, we propose a novel robust and discriminative DL (RDDL) model. The model uses sample diversities of the same face to learn a robust dictionary comprising class-specific dictionary atoms and disturbance dictionary atoms, which together can represent data from different classes well. Discriminative regularizations on the dictionary and on the representation coefficients exploit discriminative information, which effectively improves the classification capability of the dictionary. RDDL is extensively evaluated on benchmark face image databases and shows superior performance to many state-of-the-art dictionary learning methods for face recognition.
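The split between class-specific and disturbance atoms can be illustrated with a small least-squares toy; the dictionaries below are random stand-ins rather than atoms learned as in RDDL, and the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# class-specific atoms carry identity; disturbance atoms carry shared
# variation such as lighting (random stand-ins for the learned atoms)
D_class = rng.standard_normal((20, 6))
D_dist = rng.standard_normal((20, 4))
D = np.hstack([D_class, D_dist])

y = rng.standard_normal(20)                  # a face sample
x = np.linalg.lstsq(D, y, rcond=None)[0]     # joint representation
x_c, x_d = x[:6], x[6:]                      # identity vs disturbance parts
y_id = D_class @ x_c                         # identity-only reconstruction
```

Splitting the coefficients this way is what lets the disturbance atoms absorb expression or illumination changes while the class atoms keep the identity signal.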


2019 ◽  
Vol 9 (6) ◽  
pp. 1189 ◽  
Author(s):  
Biwei Ding ◽  
Hua Ji

In this paper, a kernel-based robust disturbance dictionary (KRDD) is proposed for face recognition, addressing the problem in modern dictionary learning that significant components of the signal representation are not entirely covered. KRDD effectively extracts the principal components of the kernel through dimensionality reduction. It not only performs well on occluded face data but is also good at suppressing intraclass variation. KRDD learns robust disturbance dictionaries by extracting and generating the diversity of comprehensive training samples produced by facial changes. In particular, a basic dictionary, a real disturbance dictionary, and a simulated disturbance dictionary are acquired to represent data from distinct subjects, fully capturing both commonality and disturbance. The two disturbance dictionaries are modeled by learning a few kernel principal components of the disturbance changes, and the corresponding dictionaries are then obtained by kernel discriminant analysis (KDA) projection modeling. Finally, an extended sparse representation classifier (SRC) is used for classification. Experimental results show that KRDD has clear advantages in recognition rate and computation time over many of the most advanced dictionary learning methods for face recognition.
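The disturbance-modeling step, extracting a few kernel principal components of intraclass changes, can be sketched with a plain RBF kernel PCA; the data and parameters below are illustrative, and the KDA projection the paper applies afterwards is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def kernel_pca(X, n_comp=2, gamma=0.5):
    """RBF kernel PCA of the columns of X: build the Gram matrix,
    double-center it, and keep the leading eigenvectors."""
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    K = np.exp(-gamma * d2)                 # RBF Gram matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)         # ascending eigenvalues
    return vecs[:, ::-1][:, :n_comp], vals[::-1][:n_comp]

# disturbances: deviations of same-subject images from their mean face
faces = rng.standard_normal((10, 12))
dist = faces - faces.mean(axis=1, keepdims=True)
alphas, lams = kernel_pca(dist, n_comp=3)
```

The leading components summarize how images of one subject vary, which is the raw material for a disturbance dictionary.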


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Yujie Li ◽  
Benying Tan ◽  
Atsunori Kanemura ◽  
Shuxue Ding ◽  
Wuhui Chen

Analysis sparse representation has recently emerged as an alternative to the synthesis sparse model. Most existing algorithms employ the l0-norm, which is generally NP-hard to optimize, while others relax it with the l1-norm, which sometimes cannot promote adequate sparsity. Moreover, most existing algorithms target general signals and are not suitable for nonnegative signals, even though many signals, such as spectral data, are necessarily nonnegative. In this paper, we present a novel and efficient analysis dictionary learning algorithm for nonnegative signals with a determinant-type sparsity measure that is convex and differentiable. Because the determinant-type sparsity measure leads to a nonconvex optimization problem that standard convex optimization methods cannot easily solve, the analysis sparse representation is cast as three subproblems: sparse coding, dictionary update, and signal update. The proposed algorithm therefore uses a difference of convex (DC) programming scheme for the nonconvex problem. According to our theoretical analysis and simulation study, the main advantage of the proposed algorithm is its greater dictionary learning efficiency, particularly compared with state-of-the-art algorithms. In addition, the proposed algorithm performs well in image denoising.
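The difference-of-convex (DC) programming scheme can be illustrated on a toy scalar problem; the split below is a generic DCA example, not the paper's determinant-type objective.

```python
import numpy as np

def dca(x0, iters=60):
    """Difference-of-convex algorithm on f(x) = x**4 - 2*x**2, split as
    g(x) = x**4 (convex) minus h(x) = 2*x**2 (convex). Each DCA step
    linearizes h at x_k and minimizes g(x) - h'(x_k)*x."""
    x = x0
    for _ in range(iters):
        # argmin_x x^4 - 4*x_k*x  =>  4*x^3 = 4*x_k  =>  x = cbrt(x_k)
        x = np.cbrt(x)
    return x

x_pos = dca(0.2)     # converges to the local minimizer x = 1
x_neg = dca(-0.5)    # symmetric minimizer x = -1
```

Each iteration solves a convex surrogate, so the nonconvex objective decreases monotonically, which is the property that makes DCA attractive for the nonconvex subproblems above.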


2018 ◽  
Vol 2018 ◽  
pp. 1-11
Author(s):  
Li Wang ◽  
Yan-Jiang Wang ◽  
Bao-Di Liu

The sparse representation based classification (SRC) and collaborative representation based classification (CRC) methods have attracted increasing attention in recent years owing to their promising results and robustness. However, both SRC and CRC use the training samples directly as the dictionary, which leads to a large fitting error. In this paper, we propose the Laplace graph embedding class specific dictionary learning (LGECSDL) algorithm, which trains a weight matrix and embeds a Laplace graph to reconstruct the dictionary. First, it can increase the dimension of the dictionary matrix, which helps in classifying small-sample databases. Second, it assigns different weights to different dictionary atoms to improve classification accuracy. Additionally, during the training of each class dictionary, LGECSDL introduces a Laplace graph embedding term into the objective function to preserve the local structure of each class; the combination of class-specific dictionary learning and the Laplace graph embedding regularizer improves face recognition performance. Moreover, we extend the proposed method to an arbitrary kernel space. Extensive experimental results on several face recognition benchmark databases demonstrate the superior performance of the proposed algorithm.
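The Laplace graph embedding regularizer used to preserve each class's local structure can be sketched as follows; the heat-kernel affinity graph and the trace penalty are standard constructions, and the data is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def graph_laplacian(Y, sigma=1.0):
    """Heat-kernel affinity graph over the samples (columns of Y)
    and its unnormalized Laplacian L = Deg - W."""
    d2 = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

Y = rng.standard_normal((5, 8))   # 8 samples of one class, 5 features each
L = graph_laplacian(Y)
X = rng.standard_normal((4, 8))   # coding coefficients for these samples
# Laplacian regularizer: tr(X L X^T) = 0.5 * sum_ij W_ij ||x_i - x_j||^2,
# which penalizes codes that differ between samples the graph links
reg = np.trace(X @ L @ X.T)
```

Adding such a term to the dictionary learning objective pulls the codes of neighboring same-class samples together, which is how the local structure is kept.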


2008 ◽  
Vol 2008 ◽  
pp. 1-17 ◽  
Author(s):  
Wen-Sheng Chen ◽  
Binbin Pan ◽  
Bin Fang ◽  
Ming Li ◽  
Jianliang Tang

Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, almost all existing NMF-based methods have two major drawbacks. One is that decomposing a large matrix is computationally expensive. The other is that learning must be repeated from scratch whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has several good properties, such as low computational complexity and a sparse coefficient matrix; moreover, the coefficient column vectors of different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, FERET and CMU PIE, are selected for evaluation. Compared with PCA and several state-of-the-art NMF-based methods, our INMF approach gives the best performance.
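For reference, the batch factorization that INMF makes incremental can be sketched with the standard Lee-Seung multiplicative updates; the incremental block strategy itself is not reproduced here, and the data sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def nmf(V, r, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W H with W, H >= 0
    (the batch factorization that INMF updates incrementally)."""
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update codes, stays >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis, stays >= 0
    return W, H

V = rng.random((20, 30))    # nonnegative data, e.g. face pixels as columns
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The cost of rerunning these updates on the full matrix whenever new samples arrive is exactly what motivates the incremental formulation.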


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Zhongrong Shi

Discriminative dictionary learning, which plays a critical role in sparse representation based classification, has led to state-of-the-art classification results. Existing discriminative dictionary learning methods follow two approaches: a shared dictionary, which associates each atom with all classes, and a class-specific dictionary, which associates each atom with a single class. The shared dictionary is compact but lacks discriminative information; the class-specific dictionary carries discriminative information but contains redundant atoms across class dictionaries. To combine the advantages of both, we propose a new weighted block dictionary learning method that introduces a proto dictionary and class dictionaries. The proto dictionary is a base dictionary without label information; each class dictionary is a class-specific, weighted copy of the proto dictionary, where the weight values indicate the contribution of each proto dictionary block to that class dictionary. These weights can be computed conveniently because they are designed to adapt to the sparse coefficients. Different class dictionaries have different weight vectors but share the same proto dictionary, which yields higher discriminative power and lower redundancy. Experimental results demonstrate that the proposed algorithm achieves better classification results than several dictionary learning algorithms.
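One illustrative reading of the weighted-block construction, each class dictionary as the shared proto dictionary with per-block weights, can be sketched as follows; all sizes and weights are made up rather than learned.

```python
import numpy as np

rng = np.random.default_rng(4)

n_feat, n_blocks, atoms_per_block, n_classes = 16, 4, 3, 2

# shared proto dictionary: unlabeled blocks of atoms (random stand-in)
proto = rng.standard_normal((n_feat, n_blocks * atoms_per_block))

# per-class weight vector: one weight per proto-dictionary block
weights = rng.random((n_classes, n_blocks))

def class_dictionary(c):
    """Scale every proto block by class c's weight for that block."""
    w = np.repeat(weights[c], atoms_per_block)   # expand block weights to atoms
    return proto * w                             # weighted copy of the proto dictionary

D0 = class_dictionary(0)
D1 = class_dictionary(1)
```

Because every class dictionary reuses the same proto atoms, only the small weight matrix differs between classes, which is where the lower redundancy comes from.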

