Sparse Graph Embedding Based on the Fuzzy Set for Image Classification

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Minghua Wan ◽  
Mengting Ge ◽  
Tianming Zhan ◽  
Zhangjing Yang ◽  
Hao Zheng ◽  
...  

In recent years, many face-image feature extraction and dimensionality reduction algorithms have been proposed for linear and nonlinear data, such as local graph embedding algorithms and fuzzy set algorithms. However, these algorithms are not very effective for face images because they are affected by overlaps (outliers) and sparse points in the database. To solve these problems, a new and effective dimensionality reduction method for face recognition is proposed: sparse graph embedding with the fuzzy set for image classification. The algorithm constructs two new fuzzy Laplacian scattering matrices using local graph embedding and the fuzzy k-nearest neighbor. Finally, the optimal discriminative sparse projection matrix is obtained by adding elastic-net regression. Experimental results and analysis indicate that the proposed algorithm is more effective than competing algorithms on the UCI Wine dataset and the ORL, Yale, and AR standard face databases.
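The fuzzy k-nearest-neighbor step of the method can be illustrated with a minimal sketch (pure Python; `fuzzy_memberships` is a hypothetical helper computing Keller-style inverse-distance class memberships, not the authors' scattering-matrix construction):

```python
import math

def fuzzy_memberships(train, labels, x, k=3, m=2.0):
    """Fuzzy class memberships of point x from its k nearest neighbors.

    Each neighbor votes for its class with an inverse-distance weight;
    memberships are normalized so they sum to 1 over all classes.
    """
    # k nearest (distance, label) pairs
    nearest = sorted(
        (math.dist(p, x), y) for p, y in zip(train, labels)
    )[:k]
    classes = sorted(set(labels))
    weights = {c: 0.0 for c in classes}
    total = 0.0
    for d, y in nearest:
        w = 1.0 / (d ** (2.0 / (m - 1.0)) + 1e-9)  # fuzzifier m controls softness
        weights[y] += w
        total += w
    return {c: weights[c] / total for c in classes}
```

Such memberships can down-weight outliers and overlapping samples before building scatter matrices, which is the problem the abstract targets.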

2021 ◽  
Vol 5 (4) ◽  
pp. 783-793
Author(s):  
Muhammad Muttabi Hudaya ◽  
Siti Saadah ◽  
Hendy Irawan

… needs a solid validation process that verifies and matches the uploaded images. To solve this problem, this paper implements a detection model using Faster R-CNN and a matching method using ORB (Oriented FAST and Rotated BRIEF) with KNN-BFM (K-Nearest Neighbor Brute-Force Matcher). The goal of the implementation is to reach the 80% accuracy mark and to show that matching with ORB alone can replace the OCR technique. The detection model reaches a mean Average Precision (mAP) of 94%, but the matching process achieves an accuracy of only 43.46%. Matching on image features alone underperforms the previous OCR technique, but it improves the processing time from 4510 ms to 60 ms. Image-matching accuracy has been shown to increase by using a high-quality, high-quantity dataset and by extracting features from the important areas of the EKTP card images.
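The KNN-BFM matching stage can be sketched in pure Python. Here binary descriptors are modeled as plain integers compared by Hamming distance, an assumption standing in for OpenCV's 256-bit ORB descriptors matched with `cv2.BFMatcher.knnMatch`:

```python
def knn_bf_match(desc_a, desc_b, k=2):
    """Brute-force k-NN matching of binary descriptors by Hamming distance.

    Returns, for each descriptor in desc_a, the k best matches as
    (distance, index_in_a, index_in_b) tuples.
    """
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted((bin(da ^ db).count("1"), i, j)
                        for j, db in enumerate(desc_b))
        matches.append(ranked[:k])
    return matches

def ratio_test(matches, ratio=0.75):
    """Lowe's ratio test: keep a match only if its best distance is
    clearly smaller than the second-best distance."""
    return [m[0] for m in matches
            if len(m) == 2 and m[0][0] < ratio * m[1][0]]
```

The ratio test is the usual way to filter ambiguous brute-force matches before counting them toward a matching score.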


Author(s):  
Amal A. Moustafa ◽  
Ahmed Elnakib ◽  
Nihal F. F. Areed

This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features from the unprocessed face images using transfer learning. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from his/her facial images across different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated: Correlation, Euclidean, Cosine, and Manhattan. A Manhattan-distance KNN classifier achieves the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared to state-of-the-art methods, the proposed method needs no preprocessing stages, and the experiments show its advantage over other related methods.
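KNN voting with a selectable distance metric, as compared in this paper, can be sketched as follows (pure Python; `knn_predict` and the metric helpers are illustrative names, not the paper's code):

```python
import math
from collections import Counter

def manhattan(a, b):
    """L1 distance: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """L2 distance."""
    return math.dist(a, b)

def knn_predict(train, labels, x, k=3, dist=manhattan):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(zip(train, labels), key=lambda t: dist(t[0], x))[:k]
    votes = Counter(y for _, y in nearest)
    return votes.most_common(1)[0][0]
```

Swapping `dist` reproduces the metric comparison: the same features can rank neighbors differently under Manhattan, Euclidean, cosine, or correlation distance.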


2018 ◽  
Vol 5 (1) ◽  
pp. 8 ◽  
Author(s):  
Ajib Susanto ◽  
Daurat Sinaga ◽  
Christy Atika Sari ◽  
Eko Hari Rachmawanto ◽  
De Rosal Ignatius Moses Setiadi

The classification of Javanese character images is done with the aim of recognizing each character. The selected classification algorithm is K-Nearest Neighbor (KNN) at K = 1, 3, 5, 7, and 9. To improve KNN performance on Javanese characters written by the authors, and to show that feature extraction is needed in the image-classification process, Local Binary Pattern (LBP) was selected for feature extraction, because the research objects are written with a certain degree of slant. The LBP parameters used are [16 16], [32 32], [64 64], [128 128], and [256 256]. Experiments were performed on 80 training images and 40 test images. The best KNN accuracy after combination with LBP feature extraction was 82.5%, at K = 3 with LBP parameter [64 64].
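The basic 3x3 LBP operator used before KNN can be sketched on a small grayscale grid (pure Python; the paper's [N N] parameters refer to block/window sizes for histogramming, which this minimal sketch leaves out):

```python
def lbp_code(img, r, c):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel
    and pack the results into one byte (0..255)."""
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels;
    this fixed-length vector is what feeds the KNN classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```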


Author(s):  
Khairul Amrizal Abu Nawas ◽  
Mahfuzah Mustafa ◽  
Rosdiyana Samad ◽  
Dwi Pebrianti ◽  
Nor Rul Hasma Abdullah

Brain dominance refers to the right brain and the left brain. Brain dominance can be observed with an Electroencephalogram (EEG) signal to identify the different types of electrical patterns in the brain, which form the foundation of one's personality. The objective of this project is to analyze brain dominance using Wavelet analysis. The Wavelet analysis is done with a 2-D Gabor Wavelet, and its result is validated against an established brain-dominance questionnaire. Twenty-one samples from University Malaysia Pahang (UMP) students who answered the established brain-dominance questionnaire were collected in this experiment. Brainwave signals were then recorded using an Emotiv device. A threshold value is used to remove artifacts and noise from the collected data to acquire a smoother signal. Next, a band-pass filter is applied to the signal to extract the sub-band frequency components Delta, Theta, Alpha, and Beta. After that, the energy of the signal is extracted in the image feature extraction process. The features were then classified using K-Nearest Neighbor (K-NN) with two training-to-testing ratios, 70:30 and 80:20. The 70:30 ratio gave an accuracy of 83%, while the 80:20 ratio gave 100% accuracy. The results show that the 2-D Gabor Wavelet was able to classify brain dominance with an accuracy of 83% to 100%.


Author(s):  
Alia Karim Abdul Hassan ◽  
Bashar Saadoon Mahdi ◽  
Asmaa Abdullah Mohammed

In a writer recognition system, the system performs a “one-to-many” search in a large database of handwriting samples from known authors and returns a list of possible candidates. This paper proposes a method for writer identification from handwritten Arabic words, without segmentation into sub-letters, based on Speeded-Up Robust Features (SURF) for feature extraction and K-Nearest Neighbor (KNN) classification, to enhance writer-identification accuracy. After feature extraction, the features are clustered by the K-means algorithm to standardize their number. Feature extraction and feature clustering together form the Bag of Words (BoW) model; it converts an arbitrary number of image features into a uniform-length feature vector. The proposed method was evaluated on the IFN/ENIT database. The recognition rate in the experiments is 96.666%.
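The K-means/Bag-of-Words step can be sketched as follows (pure Python Lloyd iteration; `kmeans` and `bow_vector` are illustrative helpers, not the paper's implementation, and real SURF descriptors are 64-dimensional rather than the 2-D points used here):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; the resulting centers are the codebook."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[i].append(p)
        # recompute each center; keep the old one if its cluster emptied
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def bow_vector(descriptors, centers):
    """Histogram of nearest-codeword assignments: however many descriptors
    an image yields, the output length equals the codebook size."""
    hist = [0] * len(centers)
    for d in descriptors:
        hist[min(range(len(centers)),
                 key=lambda j: math.dist(d, centers[j]))] += 1
    return hist
```

This is how a variable number of local features becomes the fixed-length vector the KNN classifier consumes.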


2021 ◽  
Vol 5 (3) ◽  
pp. 905
Author(s):  
Muhammad Afrizal Amrustian ◽  
Vika Febri Muliati ◽  
Elsa Elvira Awal

Japanese is one of the most difficult languages to understand and read. Japanese writing does not use the Latin alphabet, which is one reason it is difficult to read. There are three types of Japanese script, namely kanji, katakana, and hiragana. Hiragana is the most commonly used type of writing. In addition, hiragana has a cursive nature, so each person's writing will be different. Machine learning methods can be used to read Japanese letters by recognizing images of the characters. The Japanese letters used in this study are the hiragana vowels. This study conducts a comparative study of machine learning methods for image classification of Japanese letters. The methods compared are Naïve Bayes, Support Vector Machine, Decision Tree, Random Forest, and K-Nearest Neighbor. The results show that K-Nearest Neighbor is the best method for image classification of hiragana vowels, achieving an accuracy of 89.4% with a low error rate.
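A comparison like the one in this study can be organized with a small evaluation harness (pure Python; the classifier functions below are hypothetical stand-ins for the trained models):

```python
def accuracy(predict, test_set):
    """Fraction of (sample, label) pairs where predict(sample) == label."""
    return sum(predict(x) == y for x, y in test_set) / len(test_set)

def best_method(methods, test_set):
    """Score each named classifier on the same test set and return
    (name_of_best, {name: accuracy})."""
    scores = {name: accuracy(fn, test_set) for name, fn in methods.items()}
    return max(scores, key=scores.get), scores
```

Ranking all candidate models on one held-out set is what lets the study single out K-Nearest Neighbor as the best performer.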

