Adaptive Weighted Graph Fusion Incomplete Multi-View Subspace Clustering

Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5755
Author(s):  
Pei Zhang ◽  
Siwei Wang ◽  
Jingtao Hu ◽  
Zhen Cheng ◽  
Xifeng Guo ◽  
...  

With the enormous amount of multi-source data produced by various sensors and feature extraction approaches, multi-view clustering (MVC) has attracted growing research attention and is widely exploited in data analysis. Most existing multi-view clustering methods rely on the assumption that all views are complete. However, in many real scenarios, multi-view data are often incomplete for many reasons, e.g., hardware failure or incomplete data collection. In this paper, we propose an adaptive weighted graph fusion incomplete multi-view subspace clustering (AWGF-IMSC) method to solve the incomplete multi-view clustering problem. Firstly, to eliminate the noise present in the original space, we transform the complete original data into latent representations, which contribute to better graph construction for each view. Then, we incorporate feature extraction and incomplete graph fusion into a unified framework in which the two processes inform each other, jointly serving the graph-learning task. A sparse regularization is imposed on the complete graph to make it more robust to view inconsistency. Besides, the importance of the different views is learned automatically, further guiding the construction of the complete graph. An effective iterative algorithm with proven convergence is proposed to solve the resulting optimization problem. Experiment results on several real-world datasets demonstrate the effectiveness of our proposed method compared with existing state-of-the-art methods.
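The adaptive weighting idea described above can be sketched as follows: per-view similarity graphs are fused into a consensus graph, and each view's weight is re-estimated from how closely it agrees with that consensus. The weight update rule, iteration count, and toy data below are illustrative assumptions, not the paper's actual optimization.

```python
import numpy as np

def fuse_graphs(graphs, n_iter=10):
    """graphs: list of (n, n) similarity matrices, one per view."""
    n_views = len(graphs)
    weights = np.full(n_views, 1.0 / n_views)            # start uniform
    for _ in range(n_iter):
        # consensus graph = weighted average of the view graphs
        S = sum(w * G for w, G in zip(weights, graphs))
        # views that agree more with the consensus get larger weights
        errs = np.array([np.linalg.norm(G - S) for G in graphs])
        weights = 1.0 / (errs + 1e-12)
        weights /= weights.sum()
    return S, weights

rng = np.random.default_rng(0)
base = rng.random((5, 5))
views = [base + 0.01 * rng.random((5, 5)),   # two reliable views
         base + 0.01 * rng.random((5, 5)),
         base + 0.5 * rng.random((5, 5))]    # one noisy view
S, w = fuse_graphs(views)
# the noisy view should receive the smallest weight
```

Here the weights play the role of the automatically learned view importances; the paper's method additionally handles missing views and couples this fusion with latent feature extraction.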

Author(s):  
Chenghao Liu ◽  
Xin Wang ◽  
Tao Lu ◽  
Wenwu Zhu ◽  
Jianling Sun ◽  
...  

Social recommendation, which aims to improve the performance of traditional recommender systems by incorporating social information, has attracted a broad range of interest. As one of the most widely used methods, matrix factorization typically uses continuous vectors to represent user/item latent features. However, the large volume of user/item latent features results in expensive storage and computation costs, particularly on terminal user devices where the computational resources available to run the model are very limited. Thus, when taking extra social information into account, precisely extracting the K most relevant items for a given user from massive candidates tends to consume even more time and memory, which imposes formidable challenges for efficient and accurate recommendation. A promising approach is to simply binarize the latent features (obtained in the training phase) and then compute the relevance score through Hamming distance. However, such a two-stage hashing-based learning procedure cannot preserve the original data geometry in the real-valued space and may incur a severe quantization loss. To address these issues, this work proposes a novel discrete social recommendation (DSR) method that learns binary codes for users and items in a unified framework while considering social information. We further impose balanced and uncorrelated constraints on the objective to ensure that the learned binary codes are informative yet compact, and finally develop an efficient optimization algorithm to estimate the model parameters. Extensive experiments on three real-world datasets demonstrate that DSR runs nearly 5 times faster and consumes only 1/37 of its real-valued competitor's memory at almost no loss in accuracy.
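A minimal sketch of why binary codes speed up retrieval: once user/item vectors are binarized, relevance is scored by Hamming distance with cheap bit comparisons instead of float dot products. The sign-based binarization shown here is the naive two-stage scheme the paper improves on, included only to illustrate the scoring step; the vectors are toy values.

```python
import numpy as np

def binarize(x):
    return (x >= 0).astype(np.uint8)          # naive sign binarization

def hamming(a, b):
    return int(np.count_nonzero(a != b))      # number of differing bits

user = binarize(np.array([0.3, -1.2, 0.7, 0.1]))
items = binarize(np.array([[0.5, -0.4, 0.2, 0.9],      # agrees on every bit
                           [-0.5, 0.4, -0.2, -0.9]]))  # disagrees on every bit
scores = [hamming(user, it) for it in items]
best = int(np.argmin(scores))  # smallest Hamming distance = most relevant
```

DSR instead learns the binary codes directly in training, avoiding the quantization loss this post-hoc binarization incurs.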


Author(s):  
Ruihuang Li ◽  
Changqing Zhang ◽  
Qinghua Hu ◽  
Pengfei Zhu ◽  
Zheng Wang

In recent years, numerous multi-view subspace clustering methods have been proposed to exploit the complementary information from multiple views. Most of them perform data reconstruction within each single view, which makes the subspace representation less effective and thus unable to accurately identify the underlying relationships among data. In this paper, we propose to conduct subspace clustering based on Flexible Multi-view Representation (FMR) learning, which avoids using only partial information for data reconstruction. The latent representation is flexibly constructed by enforcing it to be close to the different views, which implicitly makes it more comprehensive and well adapted to subspace clustering. With the introduction of a kernel dependence measure, the latent representation can flexibly encode complementary information from different views and explore nonlinear, high-order correlations among them. We employ the Alternating Direction Minimization (ADM) method to solve the resulting problem. Empirical studies on real-world datasets show that our method achieves superior clustering performance over other state-of-the-art methods.
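A widely used kernel dependence measure of the kind mentioned above is the (empirical) Hilbert-Schmidt Independence Criterion, computed as the trace of the product of centered Gram matrices; whether FMR uses exactly this form is not stated in the abstract, so the sketch below is illustrative. Linear kernels and the toy views are assumptions.

```python
import numpy as np

def hsic(X, Y):
    """X, Y: (n, d) samples from two views; returns empirical HSIC."""
    n = X.shape[0]
    K = X @ X.T                          # linear-kernel Gram matrices
    L = Y @ Y.T
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
dependent = hsic(A, 2.0 * A)                      # views sharing structure
independent = hsic(A, rng.standard_normal((50, 3)))
# views that share structure yield a larger dependence score
```

Maximizing such a score between the latent representation and each view is one way to encode complementary, nonlinear cross-view correlations.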


2020 ◽  
Vol 34 (04) ◽  
pp. 3938-3945
Author(s):  
Quanxue Gao ◽  
Huanhuan Lian ◽  
Qianqian Wang ◽  
Gan Sun

For cross-modal subspace clustering, the key issue is how to exploit the correlation information between cross-modal data. However, most hierarchical and structural correlation information among cross-modal data cannot be exploited well due to its high-dimensional nonlinear nature. To tackle this problem, in this paper we propose an unsupervised framework named Cross-Modal Subspace Clustering via Deep Canonical Correlation Analysis (CMSC-DCCA), which combines a correlation constraint with a self-expressive layer to make full use of both inter-modal and intra-modal information. More specifically, the proposed model consists of three components: 1) a deep canonical correlation analysis (Deep CCA) model; 2) a self-expressive layer; and 3) Deep CCA decoders. The Deep CCA model consists of convolutional encoders and the correlation constraint: the encoders produce latent representations of the cross-modal data, while imposing the correlation constraint on those representations makes full use of the inter-modal information. Furthermore, the self-expressive layer operates on the latent representations and constrains them to satisfy the self-expression property, so that the shared coefficient matrix can capture the hierarchical intra-modal correlations of each modality. The Deep CCA decoders then reconstruct the data to ensure that the encoded features preserve the structure of the original data. Experimental results on several real-world datasets demonstrate that the proposed method outperforms state-of-the-art methods.
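The self-expression property used by the self-expressive layer can be sketched in closed form: each latent sample is approximated as a linear combination of the others, Z ≈ ZC, and the coefficient matrix C exposes subspace (cluster) structure. A ridge-regularized solution stands in for the learned layer here; the toy data, regularizer, and lambda are assumptions.

```python
import numpy as np

def self_expression(Z, lam=0.1):
    """Z: (d, n) latent matrix; returns the (n, n) coefficient matrix C."""
    n = Z.shape[1]
    G = Z.T @ Z
    # C = argmin ||Z - Z C||_F^2 + lam * ||C||_F^2
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)   # suppress trivial self-reconstruction
    return C

rng = np.random.default_rng(1)
Z = np.zeros((4, 6))
Z[0, :3] = rng.standard_normal(3)  # samples 0-2 lie in one 1-D subspace
Z[1, 3:] = rng.standard_normal(3)  # samples 3-5 lie in an orthogonal one
C = np.abs(self_expression(Z))
within = C[:3, :3].sum() + C[3:, 3:].sum()
cross = C[:3, 3:].sum() + C[3:, :3].sum()
# coefficients concentrate inside each subspace block
```

Spectral clustering on the symmetrized |C| then recovers the modality's intra-cluster structure; CMSC-DCCA learns C jointly with the encoders rather than in closed form.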


Author(s):  
Guang-Yu Zhang ◽  
Xiao-Wei Chen ◽  
Yu-Ren Zhou ◽  
Chang-Dong Wang ◽  
Dong Huang ◽  
...  

Author(s):  
Shengsheng Qian ◽  
Jun Hu ◽  
Quan Fang ◽  
Changsheng Xu

In this article, we focus on the fake news detection task and aim to automatically identify fake news among the vast number of social media posts. To date, many approaches have been proposed to detect fake news, including traditional learning methods and deep learning-based models. However, there are three existing challenges: (i) how to represent social media posts effectively, since post content is varied and highly complex; (ii) how to propose a data-driven method that increases the flexibility of the model to deal with samples in different contexts and news backgrounds; and (iii) how to fully utilize the additional auxiliary information (background knowledge and multi-modal information) of posts for better representation learning. To tackle the above challenges, we propose a novel Knowledge-aware Multi-modal Adaptive Graph Convolutional Network (KMAGCN) that captures semantic representations by jointly modeling textual information, knowledge concepts, and visual information in a unified framework for fake news detection. We model posts as graphs and use a knowledge-aware multi-modal adaptive graph learning principle for effective feature learning. Compared with existing methods, the proposed KMAGCN addresses the challenges from three aspects: (1) it models posts as graphs to capture non-consecutive and long-range semantic relations; (2) it proposes a novel adaptive graph convolutional network to handle the variability of graph data; and (3) it leverages textual information, knowledge concepts, and visual information jointly for model learning. We have conducted extensive experiments on three public real-world datasets, and the superior results demonstrate the effectiveness of KMAGCN compared with other state-of-the-art algorithms.
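Modeling a post as a graph and propagating node features over it is the core mechanism here. A minimal single graph-convolution layer with symmetric normalization (in the common Kipf-and-Welling style, which may differ from KMAGCN's adaptive variant) can be sketched as follows; the tiny path graph and weights are toy assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: normalize adjacency, propagate, ReLU."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt).T * d_inv_sqrt   # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ X @ W, 0.0)     # aggregate neighbors + ReLU

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])    # path graph over three "word" nodes
X = np.eye(3)                   # one-hot node features
W = np.full((3, 2), 0.5)        # toy weight matrix
H = gcn_layer(A, X, W)          # each node now mixes its neighbors' features
```

After such a layer, a word node's representation depends on non-adjacent context words connected in the graph, which is what lets the model capture long-range relations in a post.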


2014 ◽  
Vol 889-890 ◽  
pp. 1065-1068
Author(s):  
Yu’e Lin ◽  
Xing Zhu Liang ◽  
Hua Ping Zhou

In recent years, feature extraction algorithms based on manifold learning, which attempt to project the original data into a lower-dimensional feature space while preserving the local neighborhood structure, have drawn much attention. Among them, Marginal Fisher Analysis (MFA) achieves high performance for face recognition. However, MFA suffers from the small-sample-size problem and is still a linear technique. This paper develops a new nonlinear feature extraction algorithm, called Kernel Null Space Marginal Fisher Analysis (KNSMFA). KNSMFA is based on a new optimization criterion under which all discriminant vectors are calculated in the null space of the within-class scatter. KNSMFA not only exploits nonlinear features but also overcomes the small-sample-size problem. Experimental results on the ORL database indicate that the proposed method achieves a higher recognition rate than MFA and some existing kernel feature extraction algorithms.
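The null-space idea can be illustrated directly: discriminant directions are sought inside the null space of the within-class scatter S_w, where within-class variation vanishes but class means can still be separated. Plain (non-kernel) scatter matrices and a toy dataset are used below for illustration; KNSMFA performs this in a kernel-induced space with MFA's marginal criterion.

```python
import numpy as np

X = np.array([[0., 0., 1.],   # class 0: identical along dims 0-1,
              [0., 0., 2.],   # varying only along dim 2
              [5., 5., 1.],   # class 1
              [5., 5., 2.]])
y = np.array([0, 0, 1, 1])

# within-class scatter matrix S_w
Sw = np.zeros((3, 3))
for c in (0, 1):
    d = X[y == c] - X[y == c].mean(axis=0)
    Sw += d.T @ d

# null space of S_w via eigendecomposition (eigenvalues ~ 0)
vals, vecs = np.linalg.eigh(Sw)
null_basis = vecs[:, vals < 1e-10]       # columns spanning null(S_w)
Z = (X - X.mean(axis=0)) @ null_basis    # project data into null(S_w)
# inside the null space, same-class samples collapse to a single point
```

Because each class collapses to its mean in this subspace, any direction separating the projected class means is a perfect discriminant on the training data, which is why the null-space criterion sidesteps the small-sample-size singularity of S_w.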


Author(s):  
Ameya K. Naik ◽  
Raghunath S. Holambe

An outline is presented for the construction of wavelet filters with compact support. Unlike other methods, our approach does not require extensive simulations to obtain the values of the design variables. A unified framework is proposed for designing halfband polynomials with varying numbers of vanishing moments. Optimum filter pairs can then be generated by factorization of the halfband polynomial. Although these optimum wavelets have characteristics close to those of CDF 9/7 (Cohen-Daubechies-Feauveau), compact support may not be guaranteed. Subsequently, we show that with a proper choice of design parameters, finite-wordlength wavelet construction can be achieved. These hardware-friendly wavelets are analyzed for possible applications in image compression and feature extraction. Simulation results show that the designed wavelets perform better than standard wavelets. Moreover, they can be implemented with significantly less hardware than existing wavelets.
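The halfband polynomials central to this framework satisfy P(z) + P(-z) = 2 after normalization, which in the time domain means every second coefficient around the center tap is zero. The check below uses a standard 7-tap maxflat halfband filter only to illustrate that property; it is not the paper's optimized design.

```python
import numpy as np

def is_halfband(h):
    """True if every even-offset tap from the center is zero and the
    center tap equals 1/2 (sum-normalized halfband condition)."""
    h = np.asarray(h, dtype=float)
    center = len(h) // 2
    for i, c in enumerate(h):
        if i != center and (i - center) % 2 == 0 and not np.isclose(c, 0.0):
            return False
    return bool(np.isclose(h[center], 0.5))

# classic maxflat halfband filter, normalized to unit sum
h = np.array([-1., 0., 9., 16., 9., 0., -1.]) / 32.0
ok = is_halfband(h)
```

Factoring such a polynomial into an analysis/synthesis pair is what yields biorthogonal wavelets like CDF 9/7; the paper's contribution lies in choosing the factorization and wordlengths for hardware efficiency.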


Author(s):  
Adigun Oyeranmi ◽  
Babatunde Ronke ◽  
Rufai Mohammed ◽  
Aigbokhan Edwin

Fractured-bone detection and categorization is currently receiving research attention in computer-aided diagnosis systems because of the ease it has brought to doctors in the classification and interpretation of X-ray images. The choice of an efficient algorithm, or combination of algorithms, is paramount to accurately detecting and categorizing fractures in X-ray images, which is the first stage of diagnosis in the treatment and correction of damaged bones; this is what this research seeks to address. The research design involves data collection, preprocessing, segmentation, feature extraction, classification, and evaluation of the proposed method. The sample dataset consisted of X-ray images collected from the Department of Radiology, National Orthopedic Hospital, Igbobi-Lagos, Nigeria, as well as from open-access medical image repositories. Image preprocessing involved converting RGB images to grayscale, then sharpening and smoothing with an unsharp-masking tool. Segmentation of the preprocessed images was carried out with an entropy method in the first stage and the Canny edge method in the second stage, while feature extraction was performed using the Hough transform. Detection and classification of fracture images employed a combination of two algorithms, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM), to detect fracture locations across four classes: normal, comminuted, oblique, and transverse. Two performance assessment methods were employed to evaluate the developed system. The first evaluation was based on a confusion matrix, which evaluates fracture and non-fracture cases in terms of TP (True Positive), TN (True Negative), FP (False Positive), and FN (False Negative) counts. The second appraisal was based on the Kappa statistic, which evaluates the type of fracture by determining the accuracy of the categorized fracture type.
The first assessment showed that 26 of the 40 preprocessed images were fractured, yielding the following performance metrics: accuracy of 90%, sensitivity of 87%, and specificity of 100%. The Kappa-coefficient assessment produced an accuracy of 83% for classification. Based on the experimental results, the proposed method can find suitable use in categorizing fracture types in different bone images.
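The classification stage alone can be sketched as a minimal K-nearest-neighbor majority vote over extracted feature vectors. The 2-D feature values, labels, and k below are toy assumptions; the paper combines KNN with an SVM and uses Hough-transform features from X-ray images.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distances
    nearest = train_y[np.argsort(dists)[:k]]      # labels of k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# toy 2-D features: fractured images cluster away from normal ones
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array(["normal", "normal", "fracture", "fracture"])
pred = knn_predict(train_X, train_y, np.array([0.85, 0.85]))
```

In the paper's pipeline such a vote would run on Hough-derived features, with the SVM providing a second decision that is combined with KNN's.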


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Mohammad J. M. Zedan ◽  
Ali I. Abduljabbar ◽  
Fahad Layth Malallah ◽  
Mustafa Ghanem Saeed

Nowadays, much research attention is focused on human-computer interaction (HCI), specifically in terms of biosignals, which have recently been used for remote control, offering benefits especially for disabled people or for protecting against contagions such as coronavirus. In this paper, a biosignal type, namely the facial emotional signal, is proposed for controlling electronic devices remotely via emotional vision recognition. The objective is to convert only two facial emotions, a smiling or non-smiling vision signal captured by the camera, into a remote-control signal. The methodology combines the fields of machine learning (for smile recognition) and embedded systems (for remote-control IoT). For smile recognition, the GENKI-4K database is exploited to train a model built in the following sequence of steps: real-time video, snapshot image, preprocessing, face detection, feature extraction using HOG, and finally SVM for classification. The achieved recognition rate is up to 89% for training and testing with 10-fold cross-validation of the SVM. On the IoT side, Arduino and MCU (Tx and Rx) nodes are exploited to transfer the resulting biosignal remotely as server and client via the HTTP protocol. Promising experimental results were achieved in experiments with 40 individuals, who used their emotional biosignals to control several devices over Wi-Fi, such as closing and opening a door and turning an alarm on or off. The system implementing this research is developed in Matlab; it connects a webcam to the Arduino and an MCU node as an embedded system.
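The HOG feature-extraction step in the pipeline above reduces, at its core, to accumulating gradient magnitudes into orientation bins over image patches. The simplified sketch below omits HOG's cell/block structure and normalization; the bin count and the toy ramp patch are assumptions for illustration.

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Accumulate gradient magnitudes into unsigned-orientation bins."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist

patch = np.tile(np.arange(8.0), (8, 1))  # horizontal ramp: vertical edges
h = orientation_histogram(patch)
dominant = int(np.argmax(h))             # energy concentrates in one bin
```

Concatenating such histograms over a grid of cells yields the feature vector that the SVM then classifies as smiling or non-smiling.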

